# Vulnerable Source Code Detection using SonarCloud Code Analysis

Alifia Puspaningrum, Muhammad Anis Al Hilmi, Darsih, Muhamad Mustamiin, Maulana Ilham Ginanjar

arXiv:2307.02446v1, published 2023-07-05. Link: http://arxiv.org/abs/2307.02446v1
###### Abstract
In the Software Development Life Cycle (SDLC), security vulnerabilities are one of the defects introduced during the construction stage. Failure to detect software defects before releasing the product to the market causes higher repair costs for the company, decreases the company's reputation, violates user privacy, and can cause unrepairable issues for the application. The introduction of vulnerability detection makes it possible to reduce the number of false alerts and to focus the limited testing efforts on potentially vulnerable files. UMKM Masa Kini (UMI) is a _Point of Sales_ application for selling Micro, Small, and Medium Enterprise (UMKM) products. In the current work, we analyze the suitability of these metrics to create Machine Learning based software vulnerability detectors for the UMI application. The code analysis is generated using a commercial tool, SonarCloud. Experimental results show that 3,285 vulnerable rules are detected.
Keywords: Source Code, Detection, Vulnerability, Code analysis, SonarCloud
## 1 Introduction
In the Software Development Life Cycle (SDLC), security vulnerabilities are among the defects introduced during the construction stage. However, vulnerabilities are difficult to detect until they become apparent as security failures in the operational stage of the SDLC [1]. Failure to detect software defects before releasing the product to the market causes higher repair costs for the company, damages the company's reputation, violates user privacy, and can cause unrepairable issues for the application [2]. Therefore, techniques to detect software vulnerabilities are needed before releasing a product [3]. To address this problem, dedicated tools are available on the market, e.g., Veracode [4] and SonarCode [5]. The introduction of vulnerability detection (usually a binary classification of vulnerable and neutral parts of the source code) makes it possible to reduce the number of false alerts and to focus the limited testing efforts on potentially vulnerable files [6].
UMKM Masa Kini (UMI) is a _Point of Sales_ application for selling any kind of Micro, Small, and Medium Enterprise (UMKM) products. Besides selling products, UMI also provides facilities for offline transactions and facilities that support the development of UMKM. During the construction process, automated testing to detect vulnerable code is a good way to save money and time. Therefore, in the current work, we analyze the suitability of these metrics to create Machine Learning based software vulnerability detectors for the UMI application. The code analysis is generated using a commercial tool, SonarCloud.
## 2 Literature Review
### Software Testing
Testing is an activity to evaluate software quality and to improve it [7]. In general, testing is divided into two types, namely black box and white box testing. White box testing is a technique that uses the Software Under Test (SUT), i.e., the software being tested, as a guide for the tests to be carried out [8]. Black box testing is not an alternative to white box testing but rather a complement that covers things white box testing does not.
### Software Quality ISO 25010
Software quality can be assessed through certain metrics and methods, as well as through software tests. One of the benchmarks for software quality is ISO 25010 (International Organization for Standardization). ISO/IEC 25010 defines eight characteristics to measure software quality: portability, performance efficiency, reliability, security, usability, maintainability, compatibility, and functional suitability.
Figure 1 shows the characteristics and sub-characteristics of the software product quality model, which consists of:
1. Functional Suitability: the degree to which a system provides functions that meet its requirements when used under specified conditions.
2. Performance Efficiency: the relative performance of the system with respect to the resources used under specified conditions.
3. Compatibility: the degree to which a system can exchange information with other systems and execute its required functions while sharing the same hardware or software environment.
4. Usability: the degree to which a system can be used to achieve the specified goals effectively and efficiently.
5. Reliability: how reliably a system can execute its functions under specified conditions for a certain period.
6. Security: the degree to which a system protects information and data, so that data access levels correspond to the type and level of authorization.
7. Maintainability: the level of effectiveness and efficiency of the modification process when improving the system in accordance with adjustments and changes in the operational environment.
8. Portability: the level of effectiveness and efficiency of the system when transferring from one device to another.
### SonarCloud
SonarCloud is an online automated software quality analysis platform delivered by SonarQube, which is used to collect code quality analysis data on software linked from GitHub [9]. SonarCloud calculates several metrics, such as the number of lines of code and code complexity, and verifies code compliance against a specific set of "coding rules" defined for most common development languages.
If the analyzed source code violates a coding rule, or if a metric falls outside a predefined threshold, SonarCloud generates an "issue". SonarCloud covers three quality models from ISO 25010, namely Reliability, Maintainability and Security.
SonarCloud also classifies rules into five levels of code severity, namely Blocker, Critical, Major, Minor, and Info.
* Blocker: a bug with a high probability of affecting the use of the application in production.
* Critical: a bug with a low probability of affecting application use in production or a weak level of security.
* Major: a quality flaw that can have a huge impact on developer productivity, such as code snippets that are not found and unused parameters.
* Minor: a quality flaw that can slightly impact developer productivity, such as overly long lines of code and minimal statements.
* Info: not a bug or lack of quality but just a finding from SonarCloud.
Figure 1: Product Quality Model ISO/IEC 25010
## 3 Method
### SonarCloud Pipeline
For the experimental analysis, SonarCloud is used to collect vulnerable source code and obtain an analysis of the vulnerable code. Figure 2 shows how SonarCloud detects insecure code.
The first step is to request the SonarCloud REST API. In the second phase, the crawlers collect the source code of the application. In the next phase, the source code is manipulated by adding the vulnerable lines as comments at the end of each source code file. After that, SonarCloud stores the result in a local file of the system. The last step is releasing the dataset to the community.
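A minimal sketch of the first steps of this pipeline is shown below. It assumes the documented SonarQube/SonarCloud Web API endpoint `api/issues/search`; the project key, the token and the comment syntax are illustrative, and pagination and error handling are omitted.

```python
import requests

SONAR_URL = "https://sonarcloud.io/api/issues/search"   # SonarCloud Web API
PROJECT_KEY = "my-org_umi-pos"                           # illustrative project key
TOKEN = "..."                                            # SonarCloud user token (assumed)

# Step 1: request the issues of the project through the REST API.
resp = requests.get(
    SONAR_URL,
    params={"componentKeys": PROJECT_KEY, "types": "VULNERABILITY", "ps": 500},
    auth=(TOKEN, ""),                                    # token used as basic-auth user
)
issues = resp.json().get("issues", [])

# Step 3: append each reported vulnerable line as a comment at the end of the
# corresponding source file, then store the annotated copy locally (step 4).
for issue in issues:
    file_path = issue["component"].split(":", 1)[-1]     # path relative to the project
    line = issue.get("line")
    with open(file_path, "a", encoding="utf-8") as f:
        f.write(f"\n# vulnerable line {line}: {issue['rule']} ({issue['severity']})\n")
```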
SonarCloud inspects the code according to the ISO 25010 standard quality models, namely _reliability, maintainability_ and _security_, and then classifies the findings into five severity categories.
### Dataset
The report contains the overall code rating (reliability, maintainability, security). The detailed information of the report is shown in Table 1.
Table 1: UMI Severity Level

| Type | Severity | Rules |
|---|---|---|
| Bug | All | 2,370 |
| Bug | Info | - |
| Bug | Minor | 2,000 |
| Bug | Major | 364 |
| Bug | Critical | 6 |
| Bug | Blocker | - |
| Code Smell | All | 665 |
| Code Smell | Info | - |
| Code Smell | Minor | 74 |
| Code Smell | Major | 356 |

Figure 2: SonarCloud Block Diagram

Figure 3: Landing Page of UMI Website

Figure 4: Analyzing Project Process

Figure 5: SonarCloud Report Analysis
## 4 Result and Discussion

After retrieving the source code, a line is appended to the original source for each vulnerability. The results show that 3,285 rules are detected as vulnerable code. Each issue type, namely bug, code smell, and vulnerability, is categorized into a severity level: blocker, critical, major, minor, or info. In this way, developers can repair the code to obtain safer code.
Three rules are detected for the vulnerability issue type. As shown in Figure 6, the three rules are reported together with their severity levels. For each rule, SonarCloud describes the issue, as shown in Figure 7.
Other issue categories, such as code smells and bugs, are also analyzed, as shown in Figure 8 and Figure 9.
Every report can help developers carry out maintenance more efficiently. However, one limitation of SonarCloud's web API is the one commonly known as the 10,000-issue limit. This constraint implies that every single request made will be answered with only the first 10,000 results [10].
Moreover, SonarCloud reports the starting and ending line and the starting and ending offset (column) of each vulnerability instead of highlighting the vulnerable code itself.
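For example, a small helper (ours, not part of SonarCloud) can use the `textRange` object that the issues API reports for an issue (startLine, endLine, startOffset, endOffset) to cut the flagged span out of the file:

```python
def extract_flagged_code(file_path, text_range):
    """Return the code span that an issue's textRange points at."""
    with open(file_path, encoding="utf-8") as f:
        lines = f.readlines()
    start, end = text_range["startLine"], text_range["endLine"]
    s_off, e_off = text_range["startOffset"], text_range["endOffset"]
    snippet = lines[start - 1:end]        # line numbers are 1-based and inclusive
    snippet[-1] = snippet[-1][:e_off]     # trim the tail of the last line first
    snippet[0] = snippet[0][s_off:]       # then the head of the first line
    return "".join(snippet)
```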
## 5 Conclusions
This paper analyzes vulnerabilities in the UMI application source code using SonarCloud. The experimental results show that 3,285 vulnerable rules are detected, so developers can repair the code to obtain safer code. Future work includes highlighting the vulnerable code itself instead of reporting only the starting and ending line and the starting and ending offset (column) of each vulnerability.
## Acknowledgements
This research has been supported by Pusat Penelitian dan Pengabdian Masyarakat, Politeknik Negeri Indramayu, as Penelitian Kerjasama Perguruan Tinggi 2022.
# A feature selection method based on Shapley values robust to concept shift in regression

Carlos Sebastián, Carlos E. González-Guillén

arXiv:2304.14774v3, published 2023-04-28. Link: http://arxiv.org/abs/2304.14774v3
###### Abstract
Feature selection is one of the most relevant processes in any methodology for creating a statistical learning model. Usually, existing algorithms establish some criterion to select the most influential variables, discarding those that do not contribute to the model with any relevant information. This methodology makes sense in a static situation where the joint distribution of the data does not vary over time. However, when dealing with real data, it is common to encounter the problem of the dataset shift and, specifically, changes in the relationships between variables (concept shift). In this case, the influence of a variable cannot be the only indicator of its quality as a regressor of the model, since the relationship learned in the training phase may not correspond to the current situation. In tackling this problem, our approach establishes a direct relationship between the Shapley values and prediction errors, operating at a more local level to effectively detect the individual biases introduced by each variable. The proposed methodology is evaluated through various examples, including synthetic scenarios mimicking sudden and incremental shift situations, as well as two real-world cases characterized by concept
shifts. Additionally, we perform three analyses of standard situations to assess the algorithm's robustness in the absence of shifts. The results demonstrate that our proposed algorithm significantly outperforms state-of-the-art feature selection methods in concept shift scenarios, while matching the performance of existing methodologies in static situations.
Keywords: Concept shift, Feature selection, Regression, Shapley values
## 1 Introduction
Artificial Intelligence and Machine Learning in particular are becoming increasingly valuable nowadays. The understanding of the vast amount of data available being generated every day in a connected world, offers many possibilities like Natural Language Processing systems, recommendation engines, medical diagnosis or forecasting models. Thus, the use of historical data to build models capable of capturing relationships between different variables is becoming more and more frequent, either for explanatory or predictive purposes. However, different issues may arise when modelling a problem: the tradeoff between the explainability and accuracy of different algorithms, computational complexity... A common problem to all fields is the selection of the correct information in high-dimensional contexts. Failure to perform this task properly can lead to problems of overfitting, that is, of poor generalisation.
Dimension reduction is generally performed in two ways: feature extraction (FE) or feature selection (FS). FE consists of projecting the feature space in a high dimensional space to a smaller one, while FS chooses a subset of the original features [1]. In a large number of applications, transforming the original variables can make it difficult to analyse the results. This is why feature selection is generally preferred [2]. Moreover, even when the dimension of the data is not so high, it is important to follow the principle of parsimony or Ockham's razor, as the models built will be more explainable and, in case they are used as predictive models, more general and robust.
When fitting a model with real data, a very common problem is the so-called dataset shift. This occurs when the statistical properties of the target variable change over time [3]. This shift may occur for seemingly arbitrary reasons, but it may also be due to changes in the explanatory variables. If there is not sufficiently long history of data available since the change occurred and the affected variables are in the explanatory variables of the model, the relationship learned by the model through these features may be erroneous.
Feature selection algorithms do not detect this kind of problem, as they usually take as reference a measure of the degree of importance of the variable
to determine whether it is relevant or not. However, they do not observe whether the effect that each variable has on the predictions is the desired one, regardless of the magnitude of the effect.
We introduce a new method for variable selection in regression problems that is robust to the presence of characteristics whose behaviour has changed over time and have an undesirable effect on predictions in the current context. This selection is done using Shapley values, as previously considered in [4], [5], [6] or [7]. While all these works consider a global treatment of each feature to asses their importance, our method relates Shapley values to the errors of the predictions in order to pay special attention to the local effects of each variable for each prediction.
It is important to note that we do not propose a method to detect or characterise a dataset shift, but rather that in the event of any variable undergoing some sort of shift that negatively influences the performance of the model, this variable is a candidate for elimination, regardless of the overall level of influence it has on the behaviour of the model.
The rest of the paper is organised as follows: Section 2 describes the types of feature selection algorithms. Section 2.1 introduces Shapley values in the context of Machine Learning, discussing their inclusion in different variable selection algorithms. Section 2.2 introduces the dataset shift problem and its importance in the production of predictive models in real applications. In Section 3 the new proposed algorithm is explained in detail, distinguishing the results obtained in Section 4. Finally, the work is concluded in Section 5, opening also new possible lines of research.
## 2 Previous work
Feature selection is a complicated task when there is no help from experts in the field. The selection in such cases has to be made directly from the data and optionally by using statistical models. Establishing the state of the art in variable selection methods is a difficult task, as it depends considerably on the task to be performed (classification or regression) and the dataset employed. However, there is a clear classification of the different types of existing methods: filters, wrappers and embedded methods.
#### Filters
Filters make a selection of features using only the relationships that exist between the data, they do not rely on the use of any model. Thus, some criterion is used to establish a ranking among the variables and those that exceed a certain threshold in this ranking are chosen as relevant. Examples of different criteria are: correlation with the target variable, information gain,
Fisher score, variance threshold or the chi-square test [1]. Since they do not rely on a model, they tend to be computationally very efficient methods. However, they require making assumptions about the data that are not always met, leading to a selection that is not necessarily optimal under the criteria used.
#### 2.1.1 Wrappers
Wrapper methods, popularised by [8], are those that interact with a predefined model to assess the quality of the selected feature set. The methodology is divided in two steps: finding a feature set and assessing the quality of the feature set [2]. Three common strategies when proposing a dataset are forward selection, backward selection and stepwise selection [9]. The use of metaheuristics, typically bio-inspired [10], is also common. There are methods that were specifically designed for certain models, such as the Boruta method [11] for Random Forest models. The constant interaction with a model makes them computationally more expensive than filter methods, although the results are usually better [1].
#### 2.1.2 Embedded methods
Embedded methods are those that incorporate variable selection into the learning phase of the model. In this way they are computationally efficient and take advantage of the benefits of interacting with a model. The most common are those that add some kind of regularisation to the learning process, so that the coefficients of certain features are forced to be zero or close to zero [2]. Although there are complex methods, it is very common to use Lasso regression to select variables, obtaining more than acceptable results on many occasions, such as in time series forecasting [12].
### Shapley values in the context of feature selection
Shapley values [13] are a game theory concept to explain the importance of an individual player in a collaborative team. The idea behind the notion is based on distributing the total gain among the players according to the relative importance of their contributions. Specifically, the Shapley value for player \(i\) is defined as the average marginal contribution over all possible coalitions. This is
\[\phi_{i}=\sum_{\mathcal{S}\subseteq\mathcal{N}\setminus\{i\}}\frac{| \mathcal{S}|!\;(|\mathcal{N}|-|\mathcal{S}|-1)!}{|\mathcal{N}|!}\left(v( \mathcal{S}\cup\{i\})-v(\mathcal{S})\right)\]
where \(\mathcal{N}\) is the set of players and \(v:\mathcal{P}(\mathcal{N})\longrightarrow\mathbb{R}\) a function that returns the gain of a coalition.
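To make the definition concrete, the following minimal Python sketch enumerates all coalitions and applies the formula above to a toy three-player game; the game and its values are purely illustrative.

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, v):
    """Exact Shapley values for a cooperative game.

    players: list of player identifiers.
    v: dict mapping a frozenset of players to the gain of that coalition
       (it must contain every subset, including the empty set).
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        value = 0.0
        for k in range(n):                      # coalitions of every possible size
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                value += weight * (v[s | {i}] - v[s])
        phi[i] = value
    return phi

# Toy symmetric game: single players are worth 2, pairs 6, the grand coalition 10.
players = ["a", "b", "c"]
v = {frozenset(): 0.0}
for k in range(1, 4):
    for c in combinations(players, k):
        v[frozenset(c)] = {1: 2.0, 2: 6.0, 3: 10.0}[k]

print(exact_shapley(players, v))  # symmetric game, so each player gets 10/3
```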
This idea can be transferred to the explainability of Machine Learning models [14]. Specifically, the game would be the task of predicting an instance of the dataset, the gain would be the difference of the prediction with the average prediction of all instances, and the players would be the values of the different variables that collaborate to obtain the prediction. In this way,
Shapley values allow us to measure the influence that each variable has on a prediction, distinguishing the characteristics that have a higher or lower impact on the predictions made by a model. However, for high dimensional data the calculation of the Shapley values in an exact way is not feasible, as the computational complexity is \(\mathcal{O}(2^{|\mathcal{N}|})\).
The use of Shapley values as a local model-agnostic technique for the explainability of Machine Learning models became popular with the introduction of SHAP (SHapley Additive exPlanations) [15] and in particular with the appearance of TreeSHAP [16], which allows the efficient calculation of an approximation of Shapley values for models based on decision trees. Although this approximation presents problems estimating non-zero Shapley values in features that are not relevant but have a high correlation with variables that are influential [17; 18], it works well in practice.
For other types of models, such as neural networks, there are techniques such as the one presented in [19] that allow the calculation of an approximation of the Shapley value in polynomial time. This approximation is based on random sampling from the equivalent definition of Shapley value
\[\phi_{i}=\frac{1}{|\Pi(\mathcal{N})|}\sum_{\pi\in\Pi(\mathcal{N})}\left(v( \mathcal{P}_{i}^{\pi}\cup\{i\})-v(\mathcal{P}_{i}^{\pi})\right)\]
where \(\Pi(\mathcal{N})\) denotes the set of permutations of \(\mathcal{N}\) and \(\mathcal{P}_{i}^{\pi}=\{j\in\mathcal{N}\,:\,\pi(j)<\pi(i)\}\) is the so-called predecessor set of the player \(i\), which takes the position \(\pi(i)\) in the permutation \(\pi\in\Pi(\mathcal{N})\).
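A minimal sketch of this sampling estimator on an illustrative toy game is given below: averaging marginal contributions over random permutations approximates the exact values at a fraction of the cost.

```python
import random

def shapley_permutation_sampling(players, v, n_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values from random permutations,
    following the predecessor-set formulation above.

    v: function mapping a frozenset of players to the coalition gain.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition = frozenset()
        prev_value = v(coalition)
        for p in perm:
            coalition = coalition | {p}
            new_value = v(coalition)
            phi[p] += new_value - prev_value   # marginal contribution of p
            prev_value = new_value
    return {p: total / n_samples for p, total in phi.items()}

# Same symmetric toy game as before, expressed as a value function.
def v(S):
    return {0: 0.0, 1: 2.0, 2: 6.0, 3: 10.0}[len(S)]

print(shapley_permutation_sampling(["a", "b", "c"], v))  # roughly 3.33 each
```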
Taking into account that the Shapley value can be used to measure the influence of a characteristic on a prediction, an overall measure of the influence of a variable can be defined by taking the mean of the absolute values of the Shapley values for each observation. This measure can be used to filter by a minimum overall influence or to establish a ranking and select the \(k\) most important variables [4]. In practice this method works well, although the idea of using Shapley values within the variable selection field can be refined considerably.
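As an illustration of this global measure, the sketch below fits a tree ensemble on synthetic data and ranks features by the mean absolute Shapley value. It assumes the `shap` and `scikit-learn` packages are available; the data and model are purely illustrative.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))
y = 3 * X[:, 0] + np.sin(2 * np.pi * X[:, 1]) + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeSHAP gives one Shapley value per (observation, feature) pair.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape (n_samples, n_features)

# Global influence: mean of the absolute Shapley values per feature.
global_importance = np.abs(shap_values).mean(axis=0)
ranking = np.argsort(global_importance)[::-1]
print("features ranked by mean |SHAP|:", ranking)
```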
A first idea is the variation of the Boruta [11] method, Boruta-Shap [5]. Boruta-Shap, like Boruta, is based on the use of shadow variables, which are copies of the original variables with the values randomly permuted. The general idea is that a variable is relevant if its importance measure is greater than that of the best shadow variable. Boruta-Shap modifies the algorithm by using the previously described global influence as a measure of importance and making technical modifications to speed up the process, establishing that not only better feature sets are obtained than through the original algorithm,
but also in less time.
Another modification is Shapicant [6], which is inspired by the Permutation Importance (PIMP) method, [20]. In PIMP, a model is trained on the original data and an importance measure is obtained for each variable. Then, a permutation of the target variable is made and the model is re-trained to obtain another importance score for each of the variables. The process is repeated several times and if a variable is significantly more relevant on average over the original dataset than over the randomised set, then it is considered to be relevant. Shapicant uses the Shapley values as a measure of importance, but separating positive and negative values.
A new original method was proposed by [7]: Powershap. This algorithm is divided into two phases: in the _Explain_ component, a uniform random variable is added to the dataset, a given model is trained and the overall influence measure is calculated based on Shapley values for each of the variables, including the random variable. This procedure is repeated for a fixed number of iterations but varying the random variable associated with the model to obtain different results. In the _Core_ part, the performance of all the features is compared with that of the random variable and the most important variables are determined through a hypothesis test. Furthermore, it proposes an automatic method to optimise the hyperparameter associated with the number of iterations of the first component while keeping fixed the threshold for the p-value.
It is noteworthy that all these algorithms presented include the Shapley values through the mean of the absolute values for each variable as a measure of global influence.
### Dataset shift
The concept of dataset shift was introduced in [21], where it is described as a phenomenon in which the joint distribution of the explanatory variables and the target variable is different between the training set and the test set used in the creation of a statistical learning model. More formally, given a time period \([0,\,t]\), a set of samples denoted \(S_{0,t}=\{d_{0},\ldots,d_{t}\}\), where \(d_{i}=(X_{i},y_{i})\) with \(X_{i}\) the vector of explanatory variables and \(y_{i}\) the target variable. Let \(F_{0,t}(X,y)\) be the distribution following \(S_{0,t}\), a dataset shift is said to occur at time \(t+1\) if \(F_{0,t}(X,y)\neq F_{t+1,\infty}(X,y)\), i.e. \(\exists\,t\,:\,P_{t}(X,y)\neq P_{t+1}(X,y)\)[22].
Independently of the model used, a typical data analysis scheme assumes that the distribution of the data is static for the model to be valid. If there is a variation in the distribution, that change must be modelled. Such changes are very common in real problems, arising from economic, political, social or regulatory reasons, among others, that affect the behaviour of many phenomena. This is the reason why the dataset shift problem must be addressed.
Let \(t+1\) be the instant at which the dataset shift occurs, there are three possible reasons why the joint probability distribution is different [23]:
1. _Covariate shift_ which is probably the most studied type of shift and happens when \(P_{t}(y|X)=P_{t+1}(y|X)\) but \(P_{t}(X)\neq P_{t+1}(X)\).
2. _Prior probability shift_, when \(P_{t}(X|y)=P_{t+1}(X|y)\) but \(P_{t}(y)\neq P_{t+1}(y)\)
3. _Concept shift_, which occurs when the relationship between the explanatory and target variables change, that is, when \(P_{t}(y|X)\neq P_{t+1}(y|X)\) but \(P_{t}(X)=P_{t+1}(X)\) or when \(P_{t}(X|y)\neq P_{t+1}(X|y)\) but \(P_{t}(y)=P_{t+1}(y)\). It is the most complex type of shift. An example in a classification problem is shown in Figure 1.
Much of the related research in this field focuses on shift detection (whether shift occurs or not), on understanding why it occurs (when, how and where) and on shift adaptation (reacting to change). It is also often treated from the perspective of classification problems, while the field of regression has not been explored in great depth [24]. Most strategies designed to react to the presence of the shift are based on retraining the models with more current data or with data similar to those occurring in the current context, although other strategies are possible [22]. The implementation of such strategies can be complicated and requires constant maintenance. In addition, many algorithms require a large amount of data to perform satisfactorily, which severely limits the use of data related to the current paradigm.
In the particular case of a concept shift, where the relationship between the explanatory variables and the target variable changes, the situation where only one group of variables is responsible for the shift may arise. If the elimination of this group of variables still allows the construction of a good statistical model, the full use of the dataset and a much simpler model maintenance can be achieved, as the shift will no longer be present.

Figure 1: In **a)** the learned decision frontier is observed with respect to the training data. In **b)** it can be seen that the relationship between the variables has changed and that the decision frontier learned in training is not valid for the test data.
## 3 A new feature selection algorithm for regression problems
Generally, feature selection algorithms quantify the importance of a variable and, based on some criterion, determine a minimum threshold of importance to be considered. However, they do not take into account the effect that a variable has, i.e. a feature may have a large impact on the decisions a model makes, but it does not necessarily need to have the right influence. This should not be the case in a standard static situation where the learned behaviour directly corresponds to the actual behaviour, but under a concept shift setting this situation is quite common.
Here, we propose a new variable selection algorithm that is robust to these situations. It is able to detect the features that cause the shift in case they are among the possible explanatory variables and, in situations where there is no shift, performs similar or better than the state of the art. To achieve this goal, the proposed algorithm works on a more local level than the others, by using Shapley values to observe the effect that each variable has on certain predictions. Specifically, the Shapley values show, in essence, how much a variable moves a prediction up or down, while the prediction error tells how much the prediction should be moved to give the exact result.
At a high level, the algorithm classifies the observations according to whether they are under predicted, over predicted or correctly predicted. This categorization is achieved by employing various quantiles across the distribution of prediction errors. Subsequently, both groups of wrongly predicted observations are analyzed separately to investigate the impact of each feature on each group. In essence, the study focuses on identifying variables that contribute to the observed biases within each group. Ultimately, if a variable is found to introduce significant bias, it is considered for elimination.
As previously discussed, the consideration of Shapley values within feature selection methods is not novel; however, their use at a more local level and their association with prediction errors represents a distinctive approach.
Together with this local use of Shapley values, the groups of correctly predicted, over predicted and under predicted observations are crucial for the algorithm. While we opted to employ quantiles as a means to establish these groups, it is important to note that this approach is not mandatory. There may exist other criteria that are equally suitable for creating this distinction among the
observations. Alternative methods could be explored to effectively generate these groups, potentially providing additional insights for the analysis.
The starting point of the algorithm is a given model and a set of training and validation data. All variables are considered and a backward selection strategy will be established, eliminating variables sequentially.
The first step is to train the model and obtain predictions on the validation set. Working individually with each of these predictions is not feasible, so three groups of observations are constructed based on the prediction error. For this purpose, two user-selected parameters are used, \(q_{low},q_{high}\in[0,1]\), which correspond to two quantiles.
**Definition 1**: Let \(\mathbf{x}\) be the vector of explanatory variables of an observation \((\mathbf{x},y)\) of the validation set, \(\mathrm{err}(\mathbf{x},y)=y-\hat{y}(\mathbf{x})\) be the error of its prediction with the model considered, \(\mathbf{err}\) be the vector of errors of all predictions, \(Q_{low}=\text{Quantile}(\mathbf{err},q_{low})\) and \(Q_{high}\) the analogue.\({}^{1}\) Let \(q^{*}\) be the quantile such that \(P(\mathbf{err}\leq 0)=q^{*}\), and \(Q^{*}\) the value such that \(\text{Quantile}(\mathbf{err},q^{*})=Q^{*}\) (note that \(Q^{*}\) is not necessarily 0). The following are defined as
Footnote 1: Lower case is used for the quantile and upper case for the value associated with the quantile.
\[Q^{*}_{low}=\begin{cases}Q_{low}&\text{if}\,0\in\left[Q_{low},Q_{ high}\right]\\ Q^{*}&\text{if}\,Q_{high}<0\\ Q_{low}-(Q_{high}-Q^{*})&\text{if}\,Q_{low}>0\end{cases}\] \[Q^{*}_{high}=\begin{cases}Q_{high}&\text{if}\,0\in\left[Q_{low},Q_{ high}\right]\\ Q_{high}-(Q_{low}-Q^{*})&\text{if}\,Q_{high}<0\\ Q^{*}&\text{if}\,Q_{low}>0\end{cases}\]
It is said that:
* \(\mathbf{x}\) is correctly predicted if \(\text{err}(\mathbf{x}\text{,}y)\in\left[Q^{*}_{low},Q^{*}_{high}\right]\)
* \(\mathbf{x}\) is under predicted if \(\text{err}(\mathbf{x}\text{,}y)\in\left(Q^{*}_{high},+\infty\right)\)
* \(\mathbf{x}\) is over predicted if \(\text{err}(\mathbf{x}\text{,}y)\in\left(-\infty,Q^{*}_{low}\right)\)
In this way, it is considered that \(\left[Q^{*}_{low},Q^{*}_{high}\right]\) is the correctly predicted space of errors while the complementary is wrongly predicted.
Note that in the previous definition, in case that \(0\not\in[Q_{low},Q_{high}]\), we are simply doing a translation of the quantiles in case the model is highly biased. Thus, the bias of the model is included in the poorly predicted space but keeping the same width between boundaries as in the original proposal. In Figure 2 it is shown graphically over the distribution of the errors.
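A minimal NumPy sketch of this classification rule, assuming errors defined as \(y-\hat{y}\) as above, could look as follows (the function and variable names are ours, not part of the original implementation):

```python
import numpy as np

def classify_errors(err, q_low=0.1, q_high=0.9):
    """Split validation errors into over, correctly and under predicted groups
    following Definition 1."""
    err = np.asarray(err)
    Q_low, Q_high = np.quantile(err, [q_low, q_high])
    q_star = np.mean(err <= 0)              # quantile q* such that P(err <= 0) = q*
    Q_star = np.quantile(err, q_star)       # value associated with q*

    if Q_low <= 0 <= Q_high:                # no systematic bias: keep the bounds
        Q_low_star, Q_high_star = Q_low, Q_high
    elif Q_high < 0:                        # strong over-prediction bias
        Q_low_star = Q_star
        Q_high_star = Q_high - (Q_low - Q_star)
    else:                                   # Q_low > 0: strong under-prediction bias
        Q_low_star = Q_low - (Q_high - Q_star)
        Q_high_star = Q_star

    over = err < Q_low_star                 # over predicted observations
    under = err > Q_high_star               # under predicted observations
    correct = ~(over | under)               # correctly predicted observations
    return correct, over, under

# Usage sketch: correct, over, under = classify_errors(y_val - y_val_pred, 0.1, 0.9)
```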
**Notation 1**: Let \((\mathbf{x},y)\) be an observation from the validation set, \(\hat{y}\) the model's prediction of that observation and \(\mathrm{SHAP}_{\mathrm{var}}\) a function that returns the Shapley value of a variable for an observation. The effect of a variable, \(\mathrm{var}\), on an observation is denoted as
\[\mathrm{Effect}_{\mathrm{var},\mathbf{x}}=\mathrm{sgn}(\mathrm{SHAP}_{\mathrm{ var}}(\mathbf{x},y,\hat{y}))\cdot\mathrm{SHAP}_{\mathrm{var}}(\mathbf{x},y,\hat{y})^{2}\]
Note that the Shapley value could actually be taken directly, but taking the square increases the effect of the most influential characteristics.
The effect of a variable on a group (correctly predicted, over predicted, under predicted) can be derived as:
\[\mathrm{Effect}_{\mathrm{var},\mathrm{group}}\equiv\mathrm{Ef}_{\mathrm{var}, \mathrm{group}}=\sum_{\mathbf{x}\in\mathrm{group}}\mathrm{Effect}_{\mathrm{ var},\mathbf{x}}\]
Figure 2: The left-hand side shows that no translation is necessary since, given the quantiles chosen, the model does not show any significant bias. On the right, the model tends to overpredict, so this bias is penalised by shifting the quantiles to define the correctly predicted region.

**Definition 2**: Let \(\mathrm{var}\) be a variable, \(\mathbf{err}\) the vector of errors of the predictions on the validation set and let C.P, O.P and U.P be the three groups described above. We define the negative influence, \(\text{neg inf}_{\mathrm{var}}\), of \(\mathrm{var}\) as

\[\text{neg inf}_{\mathrm{var}}=\begin{cases}+\infty&\text{if }\sum_{\text{group}}|\mathrm{Ef}_{\mathrm{var},\text{group}}|=0\\[4pt] |\mathrm{Ef}_{\mathrm{var},\text{O.P}}|-\left(|\mathrm{Ef}_{\mathrm{var},\text{U.P}}|+|\mathrm{Ef}_{\mathrm{var},\text{C.P}}|\right)&\text{if }q_{2}(\mathbf{err})<0,\ \mathrm{Ef}_{\mathrm{var},\text{O.P}}>0,\ \mathrm{Ef}_{\mathrm{var},\text{U.P}}>0,\ |\mathrm{Ef}_{\mathrm{var},\text{O.P}}|>|\mathrm{Ef}_{\mathrm{var},\text{U.P}}|+|\mathrm{Ef}_{\mathrm{var},\text{C.P}}|\\[4pt] |\mathrm{Ef}_{\mathrm{var},\text{U.P}}|-\left(|\mathrm{Ef}_{\mathrm{var},\text{O.P}}|+|\mathrm{Ef}_{\mathrm{var},\text{C.P}}|\right)&\text{if }q_{2}(\mathbf{err})>0,\ \mathrm{Ef}_{\mathrm{var},\text{O.P}}>0,\ \mathrm{Ef}_{\mathrm{var},\text{U.P}}>0,\ |\mathrm{Ef}_{\mathrm{var},\text{U.P}}|>|\mathrm{Ef}_{\mathrm{var},\text{O.P}}|+|\mathrm{Ef}_{\mathrm{var},\text{C.P}}|\\[4pt] |\mathrm{Ef}_{\mathrm{var},\text{U.P}}|+|\mathrm{Ef}_{\mathrm{var},\text{O.P}}|-|\mathrm{Ef}_{\mathrm{var},\text{C.P}}|&\text{if }\mathrm{Ef}_{\mathrm{var},\text{O.P}}>0,\ \mathrm{Ef}_{\mathrm{var},\text{U.P}}<0,\ |\mathrm{Ef}_{\mathrm{var},\text{U.P}}|+|\mathrm{Ef}_{\mathrm{var},\text{O.P}}|>|\mathrm{Ef}_{\mathrm{var},\text{C.P}}|\\[4pt] 0&\text{otherwise}\end{cases}\]
The general idea of the previous definition is to study the effect that each variable has on each group of observations. Each of the cases encapsulates the following corresponding idea:
1. If a variable has no effect on the predictions, its negative influence is defined as infinite, as we are adding a variable to the model that does not contribute with any value and thus this variable is a candidate for overfitting.
2. In case the model is biased towards over predictions and the variable increases the value of over predictions (undesirable effect) and increases the value of under predictions (desirable effect), the negative influence is the difference of the absolute values of the undesirable minus the desirable effects. The correctly predicted group is always considered as a desired effect.
3. This is symmetrical to the previous case for the group of those under predicted.
4. In case the variable increases over predictions and decreases under predictions, the variable has an undesired effect for all but the correctly predicted ones.
5. In any other case, the variable has no negative influence.
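The sketch below transcribes Notation 1 and Definition 2 directly, reading \(q_{2}(\mathbf{err})\) as the median of the errors; the helper names are ours and the group masks are assumed to come from the classification of Definition 1.

```python
import numpy as np

def group_effect(shap_col, mask):
    # Effect of one variable on one group (Notation 1): signed squared
    # Shapley values summed over the group's observations.
    s = shap_col[mask]
    return float(np.sum(np.sign(s) * s ** 2))

def negative_influence(shap_col, err, correct, over, under):
    """Negative influence of one variable, transcribing Definition 2.

    shap_col : Shapley values of the variable for every validation observation.
    err      : prediction errors y - y_hat.
    correct, over, under : boolean masks of the three groups.
    """
    ef_cp = group_effect(shap_col, correct)
    ef_op = group_effect(shap_col, over)
    ef_up = group_effect(shap_col, under)

    if abs(ef_cp) + abs(ef_op) + abs(ef_up) == 0:
        return float("inf")                        # the variable has no effect at all
    q2 = np.median(err)                            # q2(err), read as the median error
    if q2 < 0 and ef_op > 0 and ef_up > 0 and abs(ef_op) > abs(ef_up) + abs(ef_cp):
        return abs(ef_op) - (abs(ef_up) + abs(ef_cp))
    if q2 > 0 and ef_op > 0 and ef_up > 0 and abs(ef_up) > abs(ef_op) + abs(ef_cp):
        return abs(ef_up) - (abs(ef_op) + abs(ef_cp))
    if ef_op > 0 and ef_up < 0 and abs(ef_up) + abs(ef_op) > abs(ef_cp):
        return abs(ef_up) + abs(ef_op) - abs(ef_cp)
    return 0.0
```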
With these concepts defined, the algorithm can be fully described (Algorithm 1).
```
Input: (X, y)_train, (X, y)_val, model, n_iter_prev, q_low, q_high, metric
Output: Set of selected features

if n_iter_prev > 0 then
    Introduce a random variable to the training and validation sets
    for iteration in 1:n_iter_prev do
        Train a model with the new dataset
        Compute the global influence of each variable and store it for each iteration
    end for
    Calculate the average of the global influences of the previous n_iter_prev iterations
    Remove the random variable and the characteristics that have less overall influence
        than the random variable
end if

while len(features selected) > 0 and len(features to remove) > 0 do
    Train the model with the features selected up to now
    Compute Shapley values for all variables and predictions in the validation set
    Classify the observations of the validation set into correctly predicted,
        over predicted or under predicted
    Compute the effect of each variable on each observation of the validation set
    Compute the effect of each variable on each group of observations
    Establish the negative influence of each variable
    Compute the chosen metric on the validation set
    if there is a variable with infinite negative influence then
        Delete all variables with infinite negative influence
    else if there is a variable with a non-zero negative influence then
        Delete the variable with the greatest negative influence
    end if
end while
```
**Algorithm 1** New feature selection algorithm for regression problems
The algorithm is divided into two phases, a preliminary and optional phase and the main component. In the first preprocessing phase, a random variable is introduced into the dataset, specifically, a permutation of the most influential variable according to the mean of the absolute values of the Shapley values between the original variables. Once included in the dataset, the global influence of each variable is recalculated a certain number of times specified by the user, where in each iteration the random seed that determines the learning process of the algorithm is modified to obtain different influences. At the end of the process, all those variables that have a global influence less than or
equal to that of the new variable introduced are removed. Including this first phase seems to improve the results when the number of variables eliminated is not too large, since it deletes variables that only seem to introduce noise. In case a large number of variables are excluded in this phase, it is recommended not to apply it, since the importance or role of each variable in the decisions of the model may not be entirely clear.
The second component corresponds to the core part of the algorithm, where the effect of each variable is analyzed when making predictions. This is a backward feature selection scheme, i.e., variables are sequentially removed until a criterion is no longer met. To eliminate a variable, first the predictions in the validation set are classified according to the error made and the \(q_{low}\) and \(q_{high}\) quantiles chosen (Definition 1). Then, according to the effect of each variable on each group (Notation 1), the negative influence is computed (Definition 2). If there are variables that have no effect on the predictions, that is, with infinite negative influence, in this iteration all those that meet this property are eliminated. If all the variables have some effect on the prediction, the variable with the greatest negative influence is eliminated. The process ends when there is no variable with non-zero negative influence or when there are no more features left. At the end of each iteration a metric (MAE, MSE, R\({}^{2}\), etc.) can be computed on the predictions of the validation set in an informative way, although it is not used to make decisions during the process. The feature set that obtained the best value of the metric is returned. Supposing that a value very similar to the best could be obtained with a notably smaller number of variables, which would also constitute a reasonable final feature set, the user could decide to consider it.
As previously mentioned, the shift is not detected explicitly, but in the presence of a change with a negative influence on a variable, the proposed algorithm is able to detect its undesired effect and evaluate whether the preservation of the characteristic in the model is really beneficial.
Let us stress why this algorithm is designed to be robust to concept shift situations. The variable importance is not the only metric that is taken into account. In fact, the possible bias generated by a variable is the main focus of the algorithm. If a variable is constantly generating a bias in the model predictions, which is common in concept shift contexts, then that variable is actually discarded from the possible explanatory variables, even if it has a great impact on the model.
## 4 Experiments
To analyze the effectiveness of the algorithm, the proposed method was compared with other feature selection algorithms using different
configurations. Specifically, the quantile configurations \((0.25,0.75)\), \((0.2,0.8)\), \((0.15,0.85)\), \((0.1,0.9)\) and \((0.05,0.95)\) were analyzed, which correspond to considering from \(50\%\) of poorly predicted observations in the development of the algorithm down to \(10\%\). In addition, \(30\) iterations of the preprocessing phase were applied and the MAE was used to select the final set of features.
The other algorithms evaluated are considered references among variable selection methods: Boruta, PIMP and Lasso as traditional methods, and Boruta-Shap, Shapicant and Powershap as methods based on Shapley values. When they use separate validation and training sets to make the variable selection, the same sets are supplied in all cases; if only one dataset is used, validation and training are supplied as one. In the particular case of the Lasso regularization, the variables are previously normalized by the maximum and minimum, and four different values of the regularization constant are considered: \(0.01\); \(0.001\); \(0.0001\); \(0.00001\).
Following [7], a CatBoost estimator [25] with \(250\) iterations is applied on all datasets and for all variable selection algorithms, except Lasso regression to select features. On the test set, we analyze the MAE, RMSE and R\({}^{2}\) of \(50\) different runs where the seed is varied to obtain different results and to observe the stability of the results with the selected variables. In contrast to Verhaeghe's methodology, the \(50\) seeds are fixed in advance and the order of the variables is always entered alphabetically, since otherwise different results could be obtained with the same set of features.\({}^{2}\)
Footnote 2: To ensure the veracity and reproducibility of the results, the different databases, the algorithm code and the results can be found at [https://github.com/CCaribe9/SHAPEffects](https://github.com/CCaribe9/SHAPEffects)
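A sketch of this evaluation protocol is given below, assuming pandas DataFrames and the CatBoost and scikit-learn packages; the helper name and the exact bookkeeping are ours.

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate_feature_set(X_train, y_train, X_test, y_test, features, n_runs=50):
    """Evaluate a selected feature set over fixed seeds, in the spirit of the
    protocol above (CatBoost with 250 iterations, columns in alphabetical order)."""
    features = sorted(features)
    maes, rmses, r2s = [], [], []
    for seed in range(n_runs):              # the 50 seeds are fixed in advance
        model = CatBoostRegressor(iterations=250, random_seed=seed, verbose=0)
        model.fit(X_train[features], y_train)
        pred = model.predict(X_test[features])
        maes.append(mean_absolute_error(y_test, pred))
        rmses.append(np.sqrt(mean_squared_error(y_test, pred)))
        r2s.append(r2_score(y_test, pred))
    summary = lambda v: (np.mean(v), np.std(v), np.max(v), np.min(v))
    return {"MAE": summary(maes), "RMSE": summary(rmses), "R2": summary(r2s)}
```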
Two different situations are analyzed: selection of variables with the presence of concept shift and selection of variables in standard situations.
### Concept shift
#### Synthetic experiments
We believe that analyzing the method on fully controlled toy datasets is necessary to test its effectiveness. To this end, two of the most typical dataset shift situations have been recreated: a sudden shift and an incremental shift [22], represented in Figure 3.
We will try to fit the following objective function:
\[\begin{aligned}f(\mathbf{x}_{t})&=2x_{1,t}+\lambda_{1}x_{2,t}^{2}+3\sin(2\pi x_{3,t})-0.4x_{4,t}+\lambda_{2}x_{5,t}^{2}\\ &\quad+2x_{1,t-1}+\lambda_{1}x_{2,t-1}^{2}+3\sin(2\pi x_{3,t-1})-0.4x_{4,t-1}+\lambda_{2}x_{5,t-1}^{2}\\ &\quad+\varepsilon_{t}\end{aligned}\]
where \(\mathbf{x}_{t}=(x_{1,t},\ldots,x_{5,t},\ldots,x_{10,t})\in\mathbb{R}^{10}, \lambda_{1},\lambda_{2}\in\mathbb{R}\). Note that only the first five variables are informative. The scalars \(\lambda_{1}\) and \(\lambda_{2}\) are employed to generate diverse shift scenarios, while \(\varepsilon_{t}\) represents noise. The specification of this function is random, nonlinearities have simply been added to make prediction more difficult. Furthermore, an autoregressive component has been included, aligning with typical time series problems.
We generate 30000 samples of \(x_{i}\sim U(0,1),\,i=1,\ldots,10\). We model the noise as a \(N(0,0.01)\). We also include as a possible explanatory variable the first lag of the objective variable.
Let \(\lambda_{1}^{a}\in\{-10,-1,-0.1\}\,,\,\lambda_{1}^{b}\in\{-4,-0.4,-0.04\}\,,\, \lambda_{2}^{a}\in\{10,1,0.1\}\,,\,\lambda_{2}^{b}\in\{-25,-2.5,-0.25\}\). In order to recreate a sudden concept shift, we define
\[\lambda_{1}=\begin{cases}\lambda_{1}^{a},&\text{for the first 20000 samples}\\ \lambda_{1}^{b},&\text{for the last 10000 samples}\end{cases}\]

\[\lambda_{2}=\begin{cases}\lambda_{2}^{a},&\text{for the first 20000 samples}\\ \lambda_{2}^{b},&\text{for the last 10000 samples}\end{cases}\]
Figure 3: **Above** is a sudden shift situation. **Below** is an incremental shift situation.
To recreate an incremental concept shift situation, if we call index to the sample number, we define:
\[\lambda_{1}=\begin{cases}\lambda_{1}^{a},&\text{for the first 20000 samples}\\ \frac{(\lambda_{1}^{b}-\lambda_{1}^{a})(\text{index}-20000)+10000\lambda_{1}^{a }}{10000},&\text{if index }\in(20000,25000)\\ \lambda_{1}^{b},&\text{for the last 5000 samples}\\ \end{cases}\]
\[\lambda_{2}=\begin{cases}\lambda_{2}^{a},&\text{for the first 20000 samples}\\ \frac{(\lambda_{2}^{b}-\lambda_{2}^{a})(\text{index}-20000)+10000\lambda_{2}^{a }}{10000},&\text{if index }\in(20000,25000)\\ \lambda_{2}^{b},&\text{for the last 5000 samples}\\ \end{cases}\]
Every combination of \(\lambda_{1}^{a},\lambda_{1}^{b},\lambda_{2}^{a},\lambda_{2}^{b}\) is taken into account, resulting in a total of 81 potential scenarios for both types of shifts. This aspect holds particular interest as it enables the analysis of the system's behavior when the impacted variables exhibit varying degrees of significance.
For both cases, the first 20000 samples are used for training, the next 5000 for validation and the last 5000 as test. We compute the difference between the mean MAE/RMSE/R\({}^{2}\) of the proposed algorithm and that of every other method, although the standard deviation, the minimum value and the maximum value are also measured.
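A condensed sketch of the sudden-shift data generation is shown below, for one illustrative combination of the coefficients (\(\lambda_{1}^{a}=-1\), \(\lambda_{1}^{b}=-0.4\), \(\lambda_{2}^{a}=1\), \(\lambda_{2}^{b}=-2.5\)); the lag bookkeeping is simplified by dropping the first observation.

```python
import numpy as np

def target(X, lam1, lam2, eps):
    """f(x_t) from the equation above, with the lagged-feature term."""
    def g(x, l1, l2):
        return (2 * x[:, 0] + l1 * x[:, 1] ** 2 + 3 * np.sin(2 * np.pi * x[:, 2])
                - 0.4 * x[:, 3] + l2 * x[:, 4] ** 2)
    return g(X[1:], lam1[1:], lam2[1:]) + g(X[:-1], lam1[:-1], lam2[:-1]) + eps[1:]

rng = np.random.default_rng(0)
n = 30000
X = rng.uniform(size=(n, 10))                 # only the first five columns are informative
eps = rng.normal(0, 0.01, size=n)

# Sudden shift: the coefficients change once, at the end of the training period.
lam1 = np.where(np.arange(n) < 20000, -1.0, -0.4)
lam2 = np.where(np.arange(n) < 20000, 1.0, -2.5)

y_t = target(X, lam1, lam2, eps)              # defined for t = 1, ..., n-1
X_t = X[1:]                                   # features aligned with y_t

# First 20000 samples for training, next 5000 for validation, rest as test.
X_train, y_train = X_t[:20000], y_t[:20000]
X_val, y_val = X_t[20000:25000], y_t[20000:25000]
X_test, y_test = X_t[25000:], y_t[25000:]
```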
#### Results
The results are graphically described by the histogram of each one of these differences in Figures 4 and 5.
Only MAE outcomes are presented, as the results for RMSE and R\({}^{2}\) are equivalent. The obtained results are highly encouraging, given that the difference is predominantly negative (smaller MAE). In fact, when that difference is positive, it is practically negligible. These instances arise when the importance of one or both variables subjected to the shift is almost insignificant, indicated by comparable small coefficient values. The analysis reveals that the disparity between the two types of shifts is minimal. Moreover, the proposed algorithm consistently achieves superior results compared to other methods. The marginal discrepancies observed between the two shift types arise when the influence of the variables is relatively limited. In the case of incremental shifts, where a gradual change occurs, the proposed algorithm may not find beneficial to eliminate some of the variables that are undergoing the shift in the validation data, as at the beginning of the shift the variables may be contributing correctly to the outcomes. This behavior aligns with the approach followed by the other algorithms, contributing to the small differences observed (see Tables 5 and 6).
For a more individualized evaluation rather than a collective comparison, the mean MAE of the optimal SHAPEffects configuration is compared with the mean MAE of each alternative method, and subsequently visualized on a scatter plot (Figures 6 and 7). Similarly, in the case of the Lasso, the best configuration is selected, which from the previous figures is \(\lambda=0.0001\).
Figure 4: Histograms (for the 81 cases) of the difference of the mean MAE of the proposed method with every other algorithm for the sudden shift case

Figure 5: Histograms (for the 81 cases) of the difference of the mean MAE of the proposed method with every other algorithm for the incremental shift case

It can be seen that when the MAE is low (when the coefficients of the changing variables are very small) all methods exhibit similar performance, as previously pointed out. However, as the error increases, which corresponds to a greater increase in the influence of the variables that undergo the change in behaviour, there are situations in which the MAE of every other algorithm is notably greater than that obtained by our proposal, as reflected in the fact that no method exceeds the line \(y=x\). These behaviours are more evident in the case of the sudden shift, although they are also visible in the incremental case.
It is important to note that the cases in which the proposed algorithm performs notably better than the other algorithms (when the points are separated from the diagonal) are those in which one of the two variables undergoing the change is removed, specifically when \(x_{5}\), the most relevant one, is eliminated. The rest of the algorithms do not drop these variables when they take on a notable importance (Tables 1, 2, 3 and 4).

Figure 6: Mean MAE of the best SHAPEffects configuration vs mean MAE of every other method for the sudden shift case

Figure 7: Mean MAE of the best SHAPEffects configuration vs mean MAE of every other method for the incremental shift case
Furthermore, we look in detail at what we believe to be the three most representative cases:
1. \(\lambda_{1}^{a}=-10,\lambda_{1}^{b}=-4,\lambda_{2}^{a}=10,\lambda_{2}^{b}=-25\) (Table 1 and 2)
2. \(\lambda_{1}^{a}=-1,\lambda_{1}^{b}=-0.4,\lambda_{2}^{a}=1,\lambda_{2}^{b}=-2.5\) (Table 3 and 4)
3. \(\lambda_{1}^{a}=-0.1,\lambda_{1}^{b}=-0.04,\lambda_{2}^{a}=0.1,\lambda_{2}^{b}= -0.25\) (Table 5 and 6)
Table 1: Test results on the first case for the sudden shift context

| Algorithm | MAE (mean / std / max / min) | RMSE (mean / std / max / min) | R² (mean / std / max / min) |
|---|---|---|---|
| Powershap | 13.44 / 3.99e-02 / 13.54 / 13.31 | 17.18 / 4.96e-02 / 17.30 / 17.05 | -1.41 / 1.38e-02 / -1.38 / -1.45 |
| Boruta-Shap | 13.41 / 2.70e-02 / 13.47 / 13.36 | 17.14 / 3.95e-02 / 17.24 / 17.06 | -1.40 / 1.11e-02 / -1.38 / -1.43 |
| Shapicant | 19.27 / 3.94e-03 / 19.27 / 19.26 | 22.30 / 1.02e-02 / 22.32 / 22.27 | -3.07 / 3.75e-03 / -3.06 / -3.07 |
| Boruta | 13.41 / 2.70e-02 / 13.47 / 13.36 | 17.14 / 3.95e-02 / 17.24 / 17.06 | -1.40 / 1.11e-02 / -1.38 / -1.43 |
| PIMP | 13.39 / 2.84e-02 / 13.46 / 13.33 | 17.11 / 3.64e-02 / 17.21 / 17.02 | -1.39 / 1.02e-02 / -1.37 / -1.42 |
| Best Lasso (0.001) | 13.41 / 2.70e-02 / 13.47 / 13.36 | 17.14 / 0.04 / 17.24 / 17.06 | -1.40 / 1.11e-02 / -1.38 / -1.43 |
| SHAPEffects (0.25-0.75) | | | |
| SHAPEffects (0.2-0.8) | | | |
| SHAPEffects (0.15-0.85) | | | |
| SHAPEffects (0.1-0.9) | | | |
| SHAPEffects (0.05-0.95) | | | |
Table 2: Test results on the first case for the incremental shift context

| Algorithm | MAE (mean / std / max / min) | RMSE (mean / std / max / min) | R² (mean / std / max / min) |
|---|---|---|---|
| Powershap | 13.71 / 1.96e-02 / 13.75 / 13.66 | 17.52 / 0.03 / 17.58 / 17.48 | -1.51 / 7.25e-03 / -1.50 / -1.53 |
| Boruta-Shap | 13.71 / 1.96e-02 / 13.75 / 13.66 | 17.52 / 0.03 / 17.58 / 17.48 | -1.51 / 7.25e-03 / -1.50 / -1.53 |
| Shapicant | 19.27 / 4.07e-03 / 19.27 / 19.26 | 22.30 / 0.01 / 22.32 / 22.27 | -3.07 / 3.83e-03 / -3.06 / -3.07 |
| Boruta | 13.68 / 2.07e-02 / 13.71 / 13.61 | 17.48 / 0.03 / 17.53 / 17.38 | -1.50 / 7.83e-03 / -1.47 / -1.51 |
| PIMP | 13.65 / 1.84e-02 / 13.69 / 13.61 | 17.44 / 0.03 / 17.49 / 17.38 | -1.49 / 7.24e-03 / -1.47 / -1.50 |
| Best Lasso (0.001) | 13.68 / 2.07e-02 / 13.71 / 13.61 | 17.48 / 0.03 / 17.53 / 17.38 | -1.50 / 7.83e-03 / -1.47 / -1.51 |
| SHAPEffects (0.25-0.75) | | | |
| SHAPEffects (0.2-0.8) | | | |
| SHAPEffects (0.15-0.85) | | | |
| SHAPEffects (0.1-0.9) | | | |
| SHAPEffects (0.05-0.95) | | | |
In the scenarios characterized by substantial or moderate influence of the variables, our proposed methodology clearly demonstrates its advantage, yielding superior results across all provided configurations. On the contrary, when the influence of the variables is minimal, the algorithms exhibit a similar behavior. Moreover, no significant distinction is observed between the results obtained in sudden shift cases and incremental shift cases, emphasizing the
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c c} & \multicolumn{4}{c|}{**MAE**} & \multicolumn{4}{c|}{**RMSE**} & \multicolumn{4}{c}{**R**} \\
**Algorithm** & **Mean** & **Std** & **Max** & **Min** & **Mean** & **Std** & **Max** & **Min** & **Mean** & **Std** & **Max** & **Min** \\ \hline Fowerslap & 1.79 & 4.44e-03 & 1.80 & 1.78 & 2.23 & 3.76e-03 & 2.24 & 2.22 & 0.53 & 1.60e-03 & 0.53 & 0.52 \\ Boruta-Shap & 1.79 & 4.44e-03 & 1.80 & 1.78 & 2.23 & 3.76e-03 & 2.24 & 2.22 & 0.53 & 1.60e-03 & 0.53 & 0.52 \\ Shapciant & 1.79 & 4.10e-03 & 1.80 & 1.78 & 2.23 & 3.04e-03 & 2.24 & 2.23 & 0.53 & 1.29e-03 & 0.53 & 0.52 \\ Boruta & 1.79 & 4.10e-03 & 1.80 & 1.78 & 2.23 & 3.04e-03 & 2.24 & 2.23 & 0.53 & 1.29e-03 & 0.53 & 0.52 \\ Bruta & 1.79 & 4.10e-03 & 1.80 & 1.78 & 2.23 & 3.04e-03 & 2.24 & 2.23 & 0.53 & 1.29e-03 & 0.53 & 0.52 \\ PHMP & 1.74 & 1.49e-03 & 1.74 & 1.73 & 2.16 & 1.39e-03 & 2.16 & 2.15 & 0.56 & 5.71e-04 & 0.56 & 0.56 \\ Best Lasso & & & & & & & & & & & \\ SHAPEffects & 1.79 & 4.44e-03 & 1.80 & 1.78 & 2.23 & 3.76e-03 & 2.24 & 2.22 & 0.53 & 1.60e-03 & 0.53 & 0.52 \\ SHAPEffects & & & & & & & & & & & \\ (0.2-0.75) & 1.70 & 2.92e-03 & 1.71 & **1.69** & 2.11 & 2.87e-03 & **2.11** & 2.10 & **0.58** & 1.15e-03 & **0.58** & **0.58** \\ \((0.2-0.8)\) & & & & & & & & & & & & \\ (0.2-0.8) & & & & & & & & & & & \\ (0.1-0.85) & & & & & & & & & & & \\ SHAPEffects & 1.79 & 4.44e-03 & 1.80 & 1.78 & 2.23 & 3.76e-03 & 2.24 & 2.22 & 0.53 & 1.60e-03 & 0.53 & 0.52 \\ \((0.0-0.95)\) & & & & & & & & & & & & \\ \end{tabular}
\end{table}
Table 4: Test results on the second case for the incremental shift context
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c c} & \multicolumn{4}{c|}{**MAE**} & \multicolumn{4}{c|}{**RMSE**} & \multicolumn{4}{c}{**R**} \\
**Algorithm** & **Mean** & **Std** & **Max** & **Min** & **Mean** & **Std** & **Max** & **Min** & **Mean** & **Std** & **Max** & **Min** \\ \hline Fowerslap & 1.79 & 2.63e-03 & 1.79 & 1.78 & 2.23 & 2.79e-03 & 2.23 & 2.22 & 0.53 & 1.18e-03 & 0.53 & 0.53 \\ Boruta-Shap & 1.79 & 2.43e-03 & 1.80 & 1.78 & 2.23 & 2.79e-03 & 2.23 & 2.22 & 0.53 & 1.18e-03 & 0.53 & 0.53 \\ Shapciant & 1.79 & 2.43e-03 & 1.80 & 1.78 & 2.23 & 2.33e-03 & 2.24 & 2.22 & 0.53 & 9.90e-04 & 0.53 & 0.52 \\ Bruta & 1.79 & 2.44e-03 & 1.80 & 1.78 & 2.23 & 2.33e-03 & 2.24 & 2.22 & 0.53 & 9.90e-04 & 0.53 & 0.52 \\ PHMP & 1.74 & 1.55e-03 & 1.74 & 1.73 & 2.16 & 1.60e-03 & 2.16 & 2.15 & 0.56 & 6.59e-04 & 0.56 & 0.55 \\ Best Lasso & & & & & & & & & & & \\ (0.001) & 1.79 & 2.63e-03 & 1.79 & 1.78 & 2.23 & 2.79e-03 & 2.23 & 2.22 & 0.53 & 1.18e-03 & 0.53 & 0.53 \\ \((0.2-0.75)\) & & & & & & & & & & & \\ SHAPEffects & 1.79 & 2.63e-03 & 1.79 & 1.78 & 2.23 & 2.79e-03 & 2.23 & 2.22 & 0.53 & 1.18e-03 & 0.53 & 0.53 \\ \((0.2-0.8)\) & & & & & & & & & & & & \\ SHAPEffects & & & & & & & & & & & & \\ (0.1-0.9) & 1.79 & 4.44e-03 & 1.80 & 1.78 & 2.23 & 3.76e-03 & 2.24 & 2.22 & 0.53 & 1.18e-03 & 0.53 & 0.52 \\ \((0.0-0.95)\) & & & & & & & & & & & & \\ \end{tabular}
\end{table}
Table 5: Test results on the third case for the sudden shift context
In this synthetic example, we have exclusively examined a concept shift that affects the entire validation set. It is worth noting that if the concept shift were to be incorporated into the training set, the results across all methodologies could potentially improve, depending on the model's ability to effectively learn from this shift. Conversely, if the concept shift were to occur later and later in the validation data, at a certain point our algorithm would no longer eliminate the variables, resulting in comparable outcomes with the other methods, in a similar way to what happened in the previous examples with the incremental concept shift and small coefficient.
#### 4.1.2 Electricity Price Forecasting (EPF)
The literature related to electricity price forecasting is extensive, with day-ahead (12 to 36 hours) forecasting prior to the Day-Ahead Market being the most studied case from a variety of perspectives [26, 27, 28]. However, the electricity market is constantly subject to regulatory changes and is highly influenced by political and economic situations. Specifically, in the Iberian market, given the crisis related to the gas price in mid-2022, a regulatory measure concerning the gas price subsidy to thermal power plants was established on June 15, 2022, changing the dynamics of the electricity market. Predictive models typically use the price of different fuels as regressors, among them the price of gas [29, 30, 31], so this change of behaviour has created a situation of uncertainty around the models. Figures 8 and 9 graphically show the change in the relationship between the variables. There are two reasons for studying this particular case: firstly, the problem is well known to the authors, which makes the results easier and safer to interpret; secondly, we know that a concept shift situation exists and we know at least one of the variables causing it, which allows us to work in a controlled environment as well as with real data.
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c c}
**Algorithm** & \multicolumn{4}{c|}{**MAE**} & \multicolumn{4}{c|}{**RMSE**} & \multicolumn{4}{c}{**R**} \\ & **Mean** & **std** & **Max** & **Min** & **Mean** & **Std** & **Max** & **Min** & **Mean** & **Std** & **Max** & **Min** \\ \hline Powership & 1.291 & 0.002 & 1.295 & 1.286 & 1.585 & 0.092 & 1.589 & 1.581 & 0.782 & 7.1010-04 & 0.733 & 0.730 \\ Boruta-Shap & **1.289** & 0.002 & **1.294** & **1.284** & **1.583** & 0.002 & **1.588** & **1.578** & **0.733** & 7.1790-04 & **0.734** & **0.731** \\ Shapicart & 1.291 & 0.002 & 1.295 & 1.288 & 1.587 & 0.002 & 1.592 & 1.584 & 0.731 & 6.1036-04 & 0.732 & 0.730 \\ Boruta & 1.291 & 0.002 & 1.295 & 1.288 & 1.587 & 0.002 & 1.502 & 1.584 & 0.731 & 6.1036-04 & 0.732 & 0.730 \\ PIMP & 1.351 & 0.002 & 1.355 & 1.347 & 1.669 & 0.002 & 1.672 & 1.665 & 0.703 & 5.940-04 & 0.704 & 0.702 \\ Best Lasso & 1.289 & 0.002 & **1.294** & **1.284** & **1.583** & 0.002 & **1.588** & **1.578** & **0.733** & 7.1790-04 & **0.734** & **0.731** \\ (0.001) & & & & & & & & & & & & \\ SHAPEffects & & & & & & & & & & & & \\ (0.2-0.8) & 1.293 & 0.002 & 1.298 & 1.287 & 1.588 & 0.002 & 1.594 & 1.582 & 0.731 & 7.552-04 & 0.733 & 0.729 \\ SHAPEffects & & & & & & & & & & & & \\ (0.15-0.85) & & & & & & & & & & & & \\ SHAPEffects & **1.289** & 0.002 & **1.294** & **1.284** & **1.583** & 0.002 & **1.588** & **1.578** & **0.733** & 7.179-04 & **0.734** & **0.731** \\ SHAPEffects & & & & & & & & & & & & \\ (0.1-0.9) & 1.294 & 0.002 & 1.300 & 1.290 & 1.589 & 0.003 & 1.505 & 1.582 & 0.730 & 8.828-04 & 0.733 & 0.728 \\ SHAPEffects & & & & & & & & & & & & \\ (0.05-0.95) & & & & & & & & & & & & & \\ \end{tabular}
\end{table}
Table 6: Test results on the third case for the incremental shift context
The question that arises in this case is whether keeping variables related to the gas price in the models is useful to predict the current situation. A dataset with 335 explanatory variables linked to the electricity market, several connected to the gas price, is available to predict the price of the Day-Ahead Market. The data considered are publicly available, mostly obtained from the ESIOS portal, an information system developed by Red Electrica de Espana; the data related to the gas price are obtained from the Iberian Gas Market (MIBGAS). The main variables considered are forecasts related
to renewable energy generation, demand forecasts, prices from the different markets, prices of the previous Daily Markets in France and lagged values of the series to be predicted. To describe the current behavior of the market with respect to each of the characteristics, statistics of some of these same variables during the last week are also included. These are the mean, standard deviation, median, minimum, maximum, first quartile, third quartile, skewness and kurtosis. The data start in January 2021 and end in August 2022, both included. The data are divided into: January 1, 2021 to May 31, 2022 for training, June 1, 2022 to July 31, 2022 as validation, and August 2022 as test.
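As a concrete illustration of this preprocessing, the sketch below shows one way the weekly summary statistics and the chronological split could be set up with pandas. It is only a schematic reconstruction, not the authors' code: the dataframe, its column names and the hourly granularity are assumptions made for the example.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the hourly EPF dataframe (column names are illustrative only).
idx = pd.date_range("2021-01-01", "2022-08-31 23:00", freq="h")
rng = np.random.default_rng(0)
df = pd.DataFrame({"gas_price": rng.normal(80, 10, len(idx)),
                   "demand_forecast": rng.normal(28000, 3000, len(idx))}, index=idx)

def add_weekly_stats(data: pd.DataFrame, cols) -> pd.DataFrame:
    """Append rolling one-week statistics (mean, std, median, min, max,
    first and third quartile, skewness, kurtosis). Requires a DatetimeIndex."""
    out = data.copy()
    for c in cols:
        roll = out[c].rolling("7D")
        out[f"{c}_mean"] = roll.mean()
        out[f"{c}_std"] = roll.std()
        out[f"{c}_median"] = roll.median()
        out[f"{c}_min"] = roll.min()
        out[f"{c}_max"] = roll.max()
        out[f"{c}_q1"] = roll.quantile(0.25)
        out[f"{c}_q3"] = roll.quantile(0.75)
        out[f"{c}_skew"] = roll.skew()
        out[f"{c}_kurt"] = roll.kurt()
    return out

features = add_weekly_stats(df, ["gas_price", "demand_forecast"])

# Chronological split stated in the text.
train = features.loc["2021-01-01":"2022-05-31"]
valid = features.loc["2022-06-01":"2022-07-31"]
test = features.loc["2022-08-01":"2022-08-31"]
```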
#### Results
When reporting the results, the proposed method is referred to as SHAPEffects. Following the methodology described above, the results obtained are shown in Table 7.
The advantage of the proposed method is evident in all aspects. All the proposed configurations achieve better results than any other method analyzed. The configurations \((0.25,0.75)\) and \((0.2,0.8)\) eliminate all the variables related to the gas price, obtaining the best results. Moreover, in these two configurations the worst-case iterations (Max column for MAE and RMSE and Min column for R\({}^{2}\)) are better than the average of the other variable selection methods. In addition, not only better results are obtained, but they are also more stable: the standard deviation is noticeably smaller in general than with the other algorithms. In particular, the best configuration is the one with the lowest variance. The number of variables selected seems to be higher than with the other methods; however, the improvement in the quality of the predictions justifies this selection. Recall that the selected variables are those that obtain the best MAE in the validation (Figure 10).
Another possible selection, as discussed above, would be the use of a smaller number of variables with a slightly higher MAE, since it can be observed that the elimination of the last variables does not produce a significant increase in MAE.
| Algorithm | MAE mean | MAE std | MAE max | MAE min | RMSE mean | RMSE std | RMSE max | RMSE min | R\({}^{2}\) mean | R\({}^{2}\) std | R\({}^{2}\) max | R\({}^{2}\) min | Number of variables |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Powershap | 23.13 | 2.23 | 31.8 | 19.8 | 29.37 | 2.39 | 38.76 | 25.25 | 0.09 | 0.16 | 0.33 | -0.37 | 107 |
| Boruta-Shap | 23.2 | 2.06 | 30.57 | 19.65 | 29.58 | 2 | 36.14 | 25.79 | 0.08 | 0.13 | 0.31 | -0.36 | 53 |
| Shapicant | 22.04 | 1.5 | 25.85 | 19.26 | 28.24 | 1.63 | 32.49 | 24.94 | 0.16 | 0.1 | 0.35 | -0.1 | 11 |
| Boruta | 23.49 | 3.16 | 33.86 | 18.35 | 29.84 | 3.25 | 40.55 | 23.72 | 0.06 | 0.22 | 0.41 | -0.72 | 68 |
| PIMP | 23.31 | 1.73 | 27.22 | 20.11 | 29.49 | 1.75 | 35.96 | 28.19 | 0.09 | 0.11 | 0.28 | -0.2 | 56 |
| Best Lasso (0.0001) | 22.79 | 1.81 | 30.01 | 19.5 | 29.22 | 1.96 | 36.03 | 25.96 | 0.1 | 0.13 | 0.3 | -0.36 | 47 |
| SHAPEffects (0.25-0.75) | 18.27 | 0.99 | 20.27 | 16.47 | 23.36 | 1.13 | 26.48 | 21.19 | 0.43 | 0.06 | 0.53 | 0.27 | 122 |
| SHAPEffects (0.2-0.8) | 16.97 | 0.77 | 19.32 | 15.77 | 21.5 | 0.86 | 23.36 | 20.02 | 0.52 | 0.04 | 0.58 | 0.43 | 111 |
| SHAPEffects (0.15-0.85) | 19.01 | 1.69 | 23.88 | 16.4 | 23.78 | 1.83 | 28.83 | 21 | 0.41 | 0.09 | 0.54 | 0.13 | 136 |
| SHAPEffects (0.1-0.9) | 20.57 | 1.88 | 27.72 | 16.7 | 25.77 | 2.1 | 33.51 | 21.51 | 0.3 | 0.12 | 0.52 | -0.17 | 78 |
| SHAPEffects (0.05-0.95) | 21 | 1.5 | 24.21 | 18.5 | 27.19 | 1.68 | 31.34 | 23.87 | 0.23 | 0.1 | 0.4 | -0.03 | 133 |

Table 7: Daily Market price prediction test results
It is also possible to observe a considerable increase in the MAE seven iterations after the one attaining the optimal MAE during the process. It could be studied which variable causes this increase and whether its elimination is really advantageous.
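To make the two selection rules discussed above more tangible, the following is a rough, simplified sketch of a backward-elimination loop of the kind the method performs, together with the two ways of choosing the final subset (lowest validation MAE, or the smallest subset whose MAE stays within a tolerance of that optimum). It is not the authors' implementation: the per-feature score used here to decide which variable to drop (mean absolute SHAP value on the validation set) is only a placeholder for the error/SHAP-based criterion of the paper, which depends on the quantile parameters, and a tree-based model and pandas dataframes are assumed so that `shap.TreeExplainer` applies.

```python
import numpy as np
import shap
from sklearn.metrics import mean_absolute_error

def backward_elimination(make_model, X_tr, y_tr, X_val, y_val):
    """Drop one feature per iteration and record the validation MAE of each
    intermediate feature set. The dropping rule below is a placeholder, not
    the criterion used in the paper."""
    features = list(X_tr.columns)
    history = []  # (feature subset, validation MAE) per iteration
    while len(features) >= 1:
        model = make_model().fit(X_tr[features], y_tr)
        mae = mean_absolute_error(y_val, model.predict(X_val[features]))
        history.append((features.copy(), mae))
        if len(features) == 1:
            break
        shap_values = shap.TreeExplainer(model).shap_values(X_val[features])
        score = np.abs(shap_values).mean(axis=0)   # placeholder criterion
        features.pop(int(np.argmin(score)))
    return history

def best_subset(history):
    """Feature set with the lowest validation MAE (cf. Figure 10)."""
    return min(history, key=lambda h: h[1])

def smallest_within_tolerance(history, rel_tol=0.02):
    """Smallest feature set whose MAE is within rel_tol of the best one."""
    _, best_mae = best_subset(history)
    admissible = [h for h in history if h[1] <= (1 + rel_tol) * best_mae]
    return min(admissible, key=lambda h: len(h[0]))

# Usage, e.g.: history = backward_elimination(lambda: SomeTreeRegressor(), X_tr, y_tr, X_val, y_val)
```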
#### 4.1.3 Another real world example
In order to validate the results, we analyzed one more real-world example: the Sberbank Russian Housing Market dataset [32]. It corresponds to data from a Kaggle competition whose objective is to predict the price of different houses. As the objective is not to obtain the best result on this dataset, only the training data have been used; they have been divided chronologically into three different sets, after first eliminating all the null data. The explanatory variables describe the area in which each property is located. On the Kaggle competition website itself, a concept shift situation is described: '_Although the housing market is relatively stable in Russia, the country's volatile economy makes forecasting prices as a function of apartment characteristics a unique challenge_'. This perfectly describes a common situation in which we know that we are in a concept shift scenario, but our knowledge is limited regarding the presence of any potentially responsible feature and, if such a feature exists, which specific one it may be.
#### Results
The results are shown in Table 8.
Once more, a notable disparity becomes apparent between each configuration of the proposed method and the remaining algorithms, with the former standing out prominently.
Figure 10: Variation of MAE in the validation set over the iterations of the proposed variable selection procedure. The configuration shown is \((0.2,0.8)\)
The algorithm that achieves the closest results is the Lasso, but it uses five times more variables than the best provided configuration. In this particular instance, the specific variables whose exclusion is responsible for the algorithm's superior performance remain unidentified. As mentioned earlier, it is important to note that the intention of the proposed algorithm is not to detect these features but rather to eliminate them if they are present.
### Static scenarios
Other datasets presenting a more static situation between training, validation and test sets are now analyzed. It is also relevant to study the behaviour in this context, because we would like the algorithm to behave as well as possible in all kinds of scenarios. They are all available on Kaggle or in the UCI Machine Learning Repository [33]:
* **CAT Scan Localization4**. This dataset consists of 384 features extracted from Computed Tomography (CT) images. The target variable denotes the relative location of the CT slice on the axial axis of the human body. The data were obtained from a set of 53500 CT images from 74 different patients. Each CT slice is described by two histograms in polar space. The first histogram describes the location of bony structures in the image, the second one the location of air inclusions inside the body. Both histograms are concatenated to form the final feature vector. Bins outside the image are marked with the value -0.25. The target variable (relative location of an image on the axial axis) was constructed by manually annotating up to 10 different landmarks in each CT volume with known location. The location of the slices between landmarks was interpolated. The division into training, validation and test is done randomly. Footnote 4: [https://archive.ics.uci.edu/ml/datasets/Relative+location+of+CT+slices+on+axial+axis](https://archive.ics.uci.edu/ml/datasets/Relative+location+of+CT+slices+on+axial+axis)
* **Appliances Energy Prediction5**[34]. The aim in this dataset is to predict the energy consumption of household appliances in a low energy building. For this purpose, the consumption of the appliances every 10 minutes has been stored for 4 and a half months. There are 28 possible explanatory variables, including temperature, humidity and different weather conditions of nearby places of interest. There are also two random variables in the dataset. The division into training, validation and test is random. Footnote 5: [https://archive.ics.uci.edu/ml/datasets/Appliances+energy+prediction](https://archive.ics.uci.edu/ml/datasets/Appliances+energy+prediction) Footnote 6: [https://www.bgc-jena.mpg.de/weutter/](https://www.bgc-jena.mpg.de/weutter/)
* **Max Planck Weather Dataset**6. It is a dataset with 12 atmospheric characteristics over time taken every 10 minutes. The variable to predict is the wind speed after first filtering only by hourly values and eliminating data with negative wind speed. In order to have more variables, lags from one day to one week are considered for each of the variables, resulting in 80 possible explanatory variables, also considering month and time. The division is also
done chronologically: from 2009 to 2015 for training, 2015 for validation and 2016 for testing; a minimal sketch of this lag construction is given right after this list.
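The sketch below illustrates the hourly filtering, removal of negative wind speeds, lag construction and chronological split described for the Max Planck Weather Dataset. It is a schematic reconstruction only: the dataframe and its column names are stand-ins, the shift is done by rows (assuming contiguous hourly records), and the split boundaries follow the reading that training runs through 2014.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the 10-minute weather data with a DatetimeIndex.
idx = pd.date_range("2009-01-01", "2016-12-31 23:50", freq="10min")
rng = np.random.default_rng(0)
raw = pd.DataFrame({"wind_speed": rng.gamma(2.0, 2.0, len(idx)),
                    "temperature": rng.normal(10, 8, len(idx))}, index=idx)

hourly = raw[raw.index.minute == 0]                 # keep only hourly values
hourly = hourly[hourly["wind_speed"] >= 0]          # drop negative wind speeds

lagged = hourly.copy()
for col in hourly.columns:
    for lag_days in range(1, 8):                    # lags from one day to one week
        # shift by rows; assumes contiguous hourly records
        lagged[f"{col}_lag_{lag_days}d"] = hourly[col].shift(24 * lag_days)
lagged["month"] = lagged.index.month
lagged["hour"] = lagged.index.hour

train = lagged.loc[:"2014-12-31"]                   # 2009-2014 for training
valid = lagged.loc["2015-01-01":"2015-12-31"]       # 2015 for validation
test = lagged.loc["2016-01-01":]                    # 2016 for testing
```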
#### Results
All results are shown in Tables 9-11.
In general, very close results can be observed among all methods. For the CAT Scan Localization dataset (Table 9), the selection of variables through Lasso regression seems to obtain the best results. The proposed method behaves similarly to the other algorithms, with very similar results for all five configurations. A similar situation occurs with the Appliances Energy Prediction dataset (Table 10), where Powershap, Boruta and Boruta-Shap obtain the best result. The proposed method is slightly worse than them and better than the other methods. In the remaining dataset (Table 11), the described methodology obtains the best results; selection via Lasso regression performs similarly but with a much smaller number of variables. Thus, our methodology obtains results equivalent to the other methods on the three datasets under the metrics considered.
## 5 Conclusions and future lines of research
A new feature selection method for regression problems has been developed. The algorithm is based on the idea of observing how the variables of a certain model influence when making predictions, independently of the global influence they have. Specifically, a relationship is established between the model errors and the Shapley values, allowing for a more local analysis than the other feature selection algorithms. On the one hand, in situations where there is a concept shift, the algorithm is able to find the features that undergo this change of behavior in case they are available and have a negative influence on the predictions made by the model. Therefore, these variables are eliminated, leading to higher predictive performance and easier maintenance of the model after it has been put into production. On the other hand, in static situations, the model is able to detect the variables that cause overfitting obtaining comparable results with the state of the art. These variables create forced relationships in training, hence the effects they produce in the validation stage result in an undesired negative influence and, thus, are eliminated.
There are two clear problems to tackle as a continuation of this work. An interesting future line of research would be the omission of the parameters \(q_{low}\) and \(q_{high}\). As observed, these parameters of the algorithm can change the development of the algorithm iterations, producing different results in each case. Automatic selection of such quantiles in advance or varying between iterations in an optimal way would facilitate the use of the method. From a broader perspective, the generalization of our methodology to classification problems would be a natural extension of this research, contributing to the different studies that already exist on concept shift and feature selection in this area but with a different perspective.
| Algorithm | MAE mean | MAE std | MAE max | MAE min | RMSE mean | RMSE std | RMSE max | RMSE min | R\({}^{2}\) mean | R\({}^{2}\) std | R\({}^{2}\) max | R\({}^{2}\) min | Number of variables |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Powershap | 1.044 | 0.003 | 1.053 | 1.037 | 1.402 | 0.004 | 1.411 | 1.395 | 0.168 | 0.004 | 0.176 | 0.158 | 79 |
| Boruta-Shap | 1.044 | 0.003 | 1.051 | 1.038 | 1.402 | 0.003 | 1.408 | 1.398 | 0.168 | 0.003 | 0.173 | 0.161 | 33 |
| Shapicant | 1.049 | 0.003 | 1.054 | 1.041 | 1.407 | 0.002 | 1.412 | 1.403 | 0.161 | 0.003 | 0.167 | 0.156 | 13 |
| Boruta | 1.044 | 0.003 | 1.051 | 1.037 | 1.401 | 0.003 | 1.407 | 1.395 | 0.169 | 0.003 | 0.176 | 0.162 | 48 |
| PIMP | 1.045 | 0.004 | 1.061 | 1.034 | 1.406 | 0.007 | 1.436 | 1.397 | 0.163 | 0.009 | 0.174 | 0.327 | 18 |
| Best Lasso (0.001) | 1.042 | 0.001 | 1.946 | 1.039 | 1.394 | 0.001 | 1.397 | 1.391 | 0.177 | 0.002 | 0.18 | 0.174 | 15 |
| SHAPEffects (0.25-0.75) | 1.039 | 0.002 | 1.045 | 1.034 | 1.394 | 0.003 | 1.4 | 1.389 | 0.177 | 0.003 | 0.184 | 0.17 | 75 |
| SHAPEffects (0.2-0.8) | 1.042 | 0.003 | 1.054 | 1.037 | 1.399 | 0.004 | 1.413 | 1.39 | 0.172 | 0.005 | 0.182 | 0.155 | 74 |
| SHAPEffects (0.15-0.85) | 1.042 | 0.003 | 1.051 | 1.038 | 1.399 | 0.003 | 1.411 | 1.393 | 0.171 | 0.004 | 0.179 | 0.157 | 79 |
| SHAPEffects (0.1-0.9) | 1.057 | 0.003 | 1.062 | 1.049 | 1.416 | 0.004 | 1.428 | 1.406 | 0.151 | 0.005 | 0.163 | 0.137 | 70 |
| SHAPEffects (0.05-0.95) | 1.043 | 0.003 | 1.051 | 1.037 | 1.398 | 0.003 | 1.407 | 1.393 | 0.173 | 0.003 | 0.179 | 0.162 | 63 |

Table 11: Test results in the Max Planck Weather Dataset
## Declarations
**Acknowledgments** We thank Jesus Juan Ruiz for his insight into the problem and help.
**Funding** This work has been funded by grant MIG-20211033 from Centro para el Desarrollo Tecnologico Industrial, Ministerio de Universidades, and European Union-NextGenerationEU.
**Conflict of interest** The authors have nothing to declare.
**Ethics approval** Not applicable.
**Consent to participate** Not applicable.
**Consent for publication** Not applicable.
**Availability of data and material** All data is available in the links provided in the text.
**Code availability** All code is available in the link provided in [https://github.com/CCaribe9/SHAPEffects](https://github.com/CCaribe9/SHAPEffects)
**Authors' contributions** All authors contributed to the study conception and design, methodology, analysis, investigation and writing of the manuscript. The full code was written by Carlos Sebastian.
|
2305.03121 | Extension properties for monotone and sublinear operators | In this paper we obtain several extension properties for monotone and
sublinear operators. The results obtained generalize those known for positive
and linear operators. | Sorin G. Gal | 2023-05-04T19:45:37Z | http://arxiv.org/abs/2305.03121v1 | # Extension properties for monotone and sublinear operators
###### Abstract.
In this paper we obtain several extension properties for monotone and sublinear operators. The results obtained generalize those known for positive and linear operators.
Key words and phrases: Monotone operator, sublinear operator, extension theorems, Kantorovich-type theorems, Hahn-Banach type theorem, Choquet's integral
## 3. Extension properties
Firstly, for our purpose, we recall that a Riesz space (or a vector lattice) is an ordered vector space \(E\) with the additional property that for each pair of vectors \(x,y\in E\), the supremum and the infimum of the set \(\{x,y\}\) both exist in \(E\). In a Riesz space, two elements \(x\) and \(y\) are said to be disjoint (in symbols \(x{\bot}y\)) whenever \(|x|\wedge|y|=0\) holds. A Riesz space is called Dedekind complete whenever every nonempty bounded above subset has a supremum (or, equivalently, whenever every nonempty bounded below subset has an infimum). Also, a Riesz space is said to be Dedekind \(\sigma\)-complete if every countable subset that is bounded above has a supremum.
For details concerning linear ordered spaces see, e.g., Chapter 1, Section 1.1 in the book of Aliprantis-Burkinshaw [1].
We start with the following generalization of the fundamental extension theorem of L. V. Kantorovich in [7] (see also Aliprantis-Burkinshaw [1], Theorem 1.10, p. 9) proved there for positive linear operators.
**Theorem 3.1.**_Suppose that \(E\) and \(F\) are two Riesz spaces. Assume also that \(T:E^{+}\to F^{+}\) is a monotone sublinear mapping. Then, \(S(x)=T(x^{+}),x\in E\) is a monotone sublinear extension of \(T\) on \(E\)._
**Proof.** Since \((x+y)^{+}\leq x^{+}+y^{+}\), by the monotonicity and subadditivity of \(T\) it follows that \(T((x+y)^{+})\leq T(x^{+}+y^{+})\leq T(x^{+})+T(y^{+})\), which immediately leads to
\[S(x+y)\leq S(x)+S(y),\]
that is \(S\) is subadditive on \(E\). Also, for \(\alpha\geq 0\), from the positive homogeneity of \(T\) it easily follows that \(S\) is positive homogeneous too.
We prove now that \(S\) is monotone on \(E\). Thus, let \(x,y\in E\) with \(y\leq x\). It follows \(y^{+}=y\lor 0\leq x\lor 0=x^{+}\), which implies \(S(y)=T(y^{+})\leq T(x^{+})=S(x)\).
Evidently, \(S(x)=T(x)\) for all \(x\in E^{+}\) and \(S(x)\geq 0\) for all \(x\in E\). \(\square\)
**Remark 3.1.** Notice that in the case of positive linear operators in Aliprantis-Burkinshaw [1], Theorem 1.10, p. 9, the proof is completely different and the extension is given by \(S(x)=T(x^{+})-T(x^{-})\).
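To illustrate Theorem 3.1 with the simplest possible choice (our illustration, not taken from [1] or [7]), let \(E=F=\mathbb{R}\) with the usual order and let \(T:\mathbb{R}^{+}\to\mathbb{R}^{+}\) be the identity, \(T(x)=x\), which is monotone and sublinear. The extension of Theorem 3.1 is
\[S(x)=T(x^{+})=x^{+}=\max\{x,0\},\]
which is monotone and sublinear on \(\mathbb{R}\) but not additive, since \(S(1)+S(-1)=1>0=S(1+(-1))\); by contrast, for linear \(T\) the Kantorovich extension \(T(x^{+})-T(x^{-})\) would give back the identity on \(\mathbb{R}\).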
As an application of Theorem 3.1, the next result presents an interesting local approximation property of monotone and sublinear operators.
**Theorem 3.2.**_Let \(T:E\to F\) be a monotone sublinear operator between two Riesz spaces with \(F\) Dedekind \(\sigma\)-complete. Then for each \(x\in E^{+}\) there exists a monotone sublinear operator \(S:E\to F\) depending on \(x\), such that_
\[0\leq S(y)\leq T(y^{+})\mbox{ for all }y\in E;\qquad S(x)=T(x);\qquad\quad S(y)=0 \mbox{ for all }y{\bot}x.\]
**Proof.** Let \(x\in E^{+}\) be fixed. As in the proof of Theorem 1.22, pp. 19-20 in Aliprantis-Burkinshaw [1], let us define \(S:E^{+}\to F^{+}\) by \(S(y)=\sup\{T(y\wedge nx):n\in\mathbb{N}\}\).
(The supremum exists since \(F\) is Dedekind \(\sigma\)-complete and the sequence \(T(y\wedge nx)\) is bounded above in \(F\) by \(T(y)\).)
We claim that \(S\) is subadditive. To see this, let \(y,z\in E^{+}\). From \((y+z)\wedge nx\leq y\wedge nx+z\wedge nx\), we get
\[T((y+z)\wedge nx)\leq T(y\wedge nx)+T(z\wedge nx),\]
which by passing to supremum after \(n\in\mathbb{N}\) immediately implies \(S(y+z)\leq S(y)+S(z)\).
Then, for \(\lambda\geq 0\) we get
\[S(\lambda y)=\sup\{T(\lambda y\wedge nx);n\in\mathbb{N}\}=\lambda\sup\{T(y\wedge\frac{n}{\lambda}x);n\in\mathbb{N}\}.\]
But for each \(n\in\mathbb{N}\), there exists \(m\in\mathbb{N}\) with \(\frac{n}{\lambda}\leq m\), which implies \(y\wedge\frac{n}{\lambda}x\leq y\wedge mx\) and \(T(y\wedge\frac{n}{\lambda}x)\leq T(y\wedge mx)\); since, conversely, for each \(m\in\mathbb{N}\) there exists \(n\in\mathbb{N}\) with \(m\leq\frac{n}{\lambda}\), it follows that \(\sup\{T(y\wedge\frac{n}{\lambda}x);n\in\mathbb{N}\}=\sup\{T(y\wedge mx);m\in\mathbb{N}\}\) and therefore we immediately obtain \(S(\lambda y)=\lambda S(y)\).
Now, if \(y_{1}\leq y_{2}\), by the monotonicity of \(T\) it easily follows \(S(y_{1})\leq S(y_{2})\), that is \(S\) is monotone too.
By Theorem 3.1 the mapping \(S\) extends on \(E\) to a monotone sublinear operator.
Finally, it remains to prove the properties of \(S\) from the end of the statement. Indeed, from the defining formula of \(S\), since \(y\wedge nx\leq y\), it easily follows that \(0\leq S(y)\leq T(y)\) for all \(y\in E^{+}\). Also, by \(x\wedge nx=x\) for all \(n\in\mathbb{N}\), it immediately follows that \(S(x)=T(x)\). Also, if \(y\bot x\), it follows \(y\wedge nx=0\) and therefore \(S(y)=0\). \(\square\)
**Remark 3.2.** Theorem 3.2 is a generalization of Theorem 1.22 in [1] valid for positive linear operators.
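For a simple illustration of Theorem 3.2 (ours, added for clarity), take \(E=F=\mathbb{R}^{2}\) with the coordinatewise order (which is Dedekind \(\sigma\)-complete), \(T\) the identity operator and \(x=(1,0)\). For \(y=(y_{1},y_{2})\in E^{+}\) we have \(y\wedge nx=(\min\{y_{1},n\},0)\), hence
\[S(y)=\sup\{T(y\wedge nx):n\in\mathbb{N}\}=(y_{1},0),\]
and one checks directly that \(0\leq S(y)\leq T(y^{+})\) for all \(y\), that \(S(x)=T(x)\), and that \(S(y)=0\) whenever \(y\bot x\), i.e. \(y=(0,t)\).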
In what follows we prove for sublinear operators a variant of the classical Hahn-Banach theorem for linear operators.
**Theorem 3.3.**_Let \(X\) be a real vector space, \(Y\subset X\) a vector subspace of \(X\), \(F\) a Dedekind complete Riesz space, \(S:Y\to F\) sublinear operator on \(Y\) and \(p:X\to F\) a sublinear operator on \(X\), such that \(S(u)\leq p(u)\), for all \(u\in Y\). Then there exists \(T:X\to F\), such that \(T(y)=S(y)\) for all \(y\in Y\), \(T\) is sublinear on \(X\) and \(T(x)\leq p(x)\), for all \(x\in X\)._
**Proof.** The proof is a bit different from that in the linear case. Thus, for a fixed \(x\in X\setminus Y\), define \(Z\) as the direct sum of \(Y\) and \(\mathbb{R}_{-}x\), that is \(Z=\{z=y+\lambda x;y\in Y,\lambda\leq 0\}\). Clearly, \(Z\) is a cone in \(X\).
Firstly we prove that there exists \(T:Z\to F\), such that \(T\) is sublinear on \(Z\) and \(T(z)\leq p(z)\), for all \(z\in Z\). For this purpose, for any \(u\in Y\), by the hypothesis we get \(S(u)\leq p(u)\leq p(u-x)+p(x)\), which implies \(S(u)-p(u-x)\leq p(x),u\in Y\).
Therefore, taking here a supremum of the left-hand side over \(u\) denoted by \(m\), we find that \(m\leq p(x)\). For a fixed \(c\in[m,p(x)]\) define \(T(z)=T(y+\lambda x)=S(y)+\lambda c\), \(\lambda\leq 0\).
Evidently that \(Y\subset Z\) and \(T(y)=S(y)\) for all \(y\in Y\). Then,
\[T(z_{1}+z_{2})=T(y_{1}+y_{2}+(\lambda_{1}+\lambda_{2})x)=S(y_{1}+y_{2})+( \lambda_{1}+\lambda_{2})c\]
\[\leq S(y_{1})+S(y_{2})+\lambda_{1}c+\lambda_{2}c=T(z_{1})+T(z_{2}),\]
which shows that \(T\) is subadditive on \(Z\).
Also, for any \(\alpha\in\mathbb{R}_{+}\) and \(z=y+\lambda x\) with \(\lambda\leq 0\), we get
\[T(\alpha z)=T(\alpha y+\alpha\lambda x)=S(\alpha y)+\alpha\lambda c=\alpha(S(y )+\lambda c)=\alpha T(z),\]
which shows that \(T\) is positively homogeneous on \(Z\).
In what follows we show that \(T(z)\leq p(z)\) for all \(z=u+\lambda x\in Z\) with \(\lambda<0\).
By the definition of \(c\), we get \(S(u)-p(u-x)\leq c\leq p(x)\), for all \(u\in Y\). For \(\lambda<0\), replacing in the left-hand side \(u\) by \(-\lambda^{-1}u\), we get
\[c\geq S(-\lambda^{-1}u)-p(-\lambda^{-1}(u+\lambda x))=-\lambda^{-1}(S(u)-p(u+ \lambda x)),\]
which immediately implies that \(T(z)\leq p(z)\).
Finally, a standard application of Zorn's lemma as in the classical linear case, guarantees the existence of an extension of \(S\) to \(X\) with the desired properties.
Indeed, denote by \(\mathcal{F}\) the set of all \(T:Z\to F\); \(Y\) subspace of \(Z\), \(T\) is sublinear on \(Z\), \(T(z)=S(z),z\in Y\), \(T(z)\leq p(z),z\in Z\).
If we consider on \(\mathcal{F}\) the order defined, for \(T_{1},T_{2}\in\mathcal{F}\), by \(T_{1}\leq T_{2}\) if and only if \(T_{2}\) is an extension of \(T_{1}\), then \((\mathcal{F},\leq)\) is partially ordered and by Zorn's lemma it has a maximal element, which we denote by \(T^{*}:Y^{*}\to F\). We have here \(Y^{*}=X\), because if we suppose that there exists \(x\in X\setminus Y^{*}\), by the previous reasoning there exists \(T^{**}:Z^{*}\to F\), with \(Z^{*}=Y^{*}+\mathbb{R}_{-}x\), such that \(T^{**}(z)=T^{*}(z)\) for all \(z\in Y^{*}\). But since \(T^{**}\in\mathcal{F}\), it follows that \(T^{*}\leq T^{**}\), which by the maximality of \(T^{*}\) implies in fact that \(T^{*}=T^{**}\) and therefore \(Y^{*}=Z^{*}\), i.e. \(x\in Y^{*}\), which is a contradiction. \(\square\)
**Remark 3.3.** If \(S\) is linear, this results was obtained by Theorem 1.25 in [1].
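As a simple concrete instance of Theorem 3.3 (our illustration): let \(X=\mathbb{R}^{2}\), \(F=\mathbb{R}\), \(Y=\{(t,0):t\in\mathbb{R}\}\), \(S(t,0)=t\) and \(p(x_{1},x_{2})=|x_{1}|+|x_{2}|\). Then \(S\) is sublinear on \(Y\) and \(S(t,0)=t\leq|t|=p(t,0)\), and one sublinear extension dominated by \(p\) is
\[T(x_{1},x_{2})=x_{1}+|x_{2}|,\]
since \(T\) is sublinear, \(T=S\) on \(Y\) and \(T(x_{1},x_{2})=x_{1}+|x_{2}|\leq|x_{1}|+|x_{2}|=p(x_{1},x_{2})\) for all \((x_{1},x_{2})\in X\).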
As a consequence of Theorem 3.3, we get the following Hahn-Banach type theorem for monotone/positive sublinear operators.
**Theorem 3.4.**_Let \(T:E\to F\) be a positive sublinear operator between two Riesz spaces with \(F\) Dedekind complete. Assume also that \(G\) is a Riesz subspace of \(E\) and that \(S:G\to F\) is a monotone sublinear operator satisfying \(0\leq S(x)\leq T(x)\), for all \(x\) in \(G^{+}\). Then \(S\) can be extended to a positive sublinear operator \(Q:E\to F\) such that \(0\leq Q(x)\leq T(x)\) holds for all \(x\in E^{+}\)._
**Proof.** As in the linear case, define \(p:E\to F\) by \(p(x)=T(x^{+})\) and note firstly that \(p\) is sublinear. Indeed, this is immediate from \((x+y)^{+}\leq x^{+}+y^{+}\) and \((\lambda x)^{+}=\lambda x^{+}\), for all \(x,y\in E\) and \(\lambda\geq 0\). Also, since \(x\leq x^{+}\), for all \(x\in G\) we have
\[S(x)=S(x^{+}-x^{-})\leq S(x^{+})\leq T(x^{+})=p(x).\]
Therefore, by Theorem 3.3, there exists a sublinear extension of \(S\) to all of \(E\) (which we denote by \(Q\) ) satisfying \(Q(x)\leq p(x)\), for all \(x\in E\).
By \(0=Q(x-x)\leq Q(x)+Q(-x)\), it follows that \(-Q(x)\leq Q(-x)\) and for \(x\in E^{+}\) we get
\[-Q(x)\leq Q(-x)\leq p(-x)=T((-x)^{+})=T(0)=0\]
and so \(0\leq Q(x)\leq p(x)=T(x)\) holds for all \(x\in E^{+}\), as desired. \(\square\)
**Remark 3.4.** Theorem 3.4 is a generalization of Theorem 1.26 in [1], p. 24.
Recall now that a subset \(A\) of a Riesz space \(E\) is said to be order bounded if there exists \(m,M\in E\) such that \(m\leq x\leq M\) for all \(x\in A\). Also, an operator \(T:E\to F\) between two Riesz spaces is said to be order bounded if it maps order bounded subsets of \(E\) to order bounded subsets of \(F\).
**Theorem 3.5.**_Let \(E\) and \(F\) be Riesz spaces with \(F\) Dedekind complete, \(G\) a Riesz subspace of \(E\) and \(T:G\to F\) be a monotone sublinear operator. Then for the following statements_
_(1) \(T\) extends to a monotone sublinear operator from \(E\) to \(F\)._
_(2) \(T\) extends to an order bounded sublinear monotone operator \(S\) from \(E\) to \(F\)._
_(3) There exists a monotone sublinear mapping \(p:E\to F\) satisfying \(T(x)\leq p(x)\) for all \(x\in G\)._
_(4) \(T\) extends to a positive sublinear operator from \(E\) to \(F\),_
_the following implications hold :_
\((1)\Longrightarrow(2)\Longrightarrow(3)\Longrightarrow(4)\)_._
**Proof.** \((1)\Longrightarrow(2)\). Is obvious.
\((2)\Longrightarrow(3)\). Let \(S\) be an order bounded sublinear monotone operator satisfying \(S(x)=T(x)\) for all \(x\in G\). Then the mapping \(p:E\to F\) defined by \(p(x)=|S(x^{+})|\)
is monotone, sublinear and satisfies
\[T(x)\leq T(x^{+})=S(x^{+})\leq|S(x^{+})|=p(x),\]
for all \(x\in G\).
(3) \(\Longrightarrow\) (4). Let \(p:E\to F\) be a monotone sublinear mapping satisfying \(T(x)\leq p(x)\) for all \(x\in G\). Then the formula \(q(x)=p(x^{+})\) defines a sublinear mapping from \(E\) to \(F\) such that we have
\[T(x)\leq T(x^{+})\leq p(x^{+})=q(x),\]
for all \(x\in G\). Thus, by the Hahn-Banach extension Theorem 3.3, there exists a sublinear extension \(R\) of \(T\) on \(E\), satisfying \(R(x)\leq q(x)\), for all \(x\in E\). In particular, if \(x\in E^{+}\), then by the relation \(0=R(x-x)\leq R(x)+R(-x)\), we get
\[-R(x)\leq R(-x)\leq q(-x)=p((-x)^{+})=p(0)=0,\]
which implies \(R(x)\geq 0\). That is, \(R\) is a positive sublinear extension of \(T\) on \(E\), and the proof is finished. \(\square\)
**Remark 3.5.** If in the above theorem \(T\) is a positive linear operators, then we recapture Theorem 1.27 in Aliprantis-Burkinshaw [1].
A subset \(A\) of a Riesz space is called solid whenever \(|x|\leq|y|\) and \(y\in A\) imply \(x\in A\). A solid vector subspace of a Riesz space is called an ideal. From the lattice identity \(x\lor y=\frac{1}{2}(x+y+|x-y|)\), it follows immediately that every ideal is a Riesz subspace.
The next result deals with restrictions of monotone sublinear operators to ideals.
**Theorem 3.6.**_If \(T:E\to F\) is a monotone sublinear operator between two Riesz spaces with \(F\) Dedekind complete, then for every ideal \(A\) of \(E\) the formula_
\[T_{A}(x)=\sup\{T(y):y\in A\text{ and }0\leq y\leq x\},x\in E^{+},\]
_defines (by its extension on \(E\) denoted also by \(T_{A}\) and proved to exist by Theorem 3.1) a monotone sublinear operator from \(E\) to \(F\). Moreover, we have :_
_(a) \(0\leq T_{A}(y)\leq T(y^{+})\), for all \(y\in E\)._
_(b) \(T_{A}=T\) on \(A\) and \(T_{A}=0\) on \(A^{d}:=\{x\in E;x\bot y\text{ for all }y\in A\}\)._
_(c) If \(B\) is another ideal with \(A\subset B\), then \(T_{A}\leq T_{B}\) holds._
**Proof.** According to Theorem 3.1, we need to show that \(T_{A}\) is monotone and sublinear on \(E^{+}\). The monotonicity immediately follows from the definition of \(T_{A}\).
It remains to prove the sublinearity on \(E^{+}\). For this purpose, firstly it is easy to see that
\[T_{A}(x)=\sup\{T(x\wedge y);y\in A^{+}\}\]
holds for all \(x\in E^{+}\).
Let \(x,y\in E^{+}\). If \(z\in A^{+}\), then the inequality
\[(x+y)\wedge z\leq x\wedge z+y\wedge z\]
implies that
\[T((x+y)\wedge z)\leq T(x\wedge z)+T(y\wedge z)\leq T_{A}(x)+T_{A}(y),\]
and hence
\[T_{A}(x+y)\leq T_{A}(x)+T_{A}(y),\]
that is the subadditivity.
Now, since for \(\lambda>0\) and \(y\in A^{+}\) we have \(\frac{y}{\lambda}\in A^{+}\), we immediately get
\[T_{A}(\lambda x)=\sup\{T(\lambda x\wedge y);y\in A^{+}\}=\lambda\sup\{T(x \wedge\frac{y}{\lambda});y\in A^{+}\}\]
\[=\lambda\sup\{T(x\wedge y);y\in A^{+}\}=\lambda T_{A}(x),\]
which shows that \(T_{A}\) is positive homogeneous too.
Now, the properties \((a)-(b)\) are easy consequences of Theorem 3.2, while the property (c) is an easy consequence of the formula defining \(T_{A}\).
**Remark 3.6.** Theorem 3.6 generalizes Theorem 1.28, pp. 25-26 in [1].
In what follows, one says that a vector subspace \(G\) of an ordered vector space \(E\) is majorizing \(E\), if for each \(x\in E\), there exists some \(y\in G\) with \(x\leq y\) (or, equivalently, if for each \(x\in E\), there exists some \(y\in G\) with \(y\leq x\)).
**Theorem 3.7.**_Let \(E\) and \(F\) be two ordered vector spaces with \(F\) a Dedekind complete Riesz space. If \(G\) is a majorizing vector subspace of \(E\) and \(T:G\to F\) is a monotone sublinear operator, then \(T\) has a positive sublinear extension to all of \(E\)._
**Proof.** Fix \(x\in E\) and let \(y\in G\) satisfy \(x\leq y\). Since \(G\) is majorizing, there exists a vector \(u\in G\) with \(u\leq x\). Hence, \(u\leq y\) and the monotonicity of \(T\) implies \(T(u)\leq T(y)\) for all \(y\in G\) with \(x\leq y\). In particular, it follows that \(\inf\{T(y):y\in G\text{ and }x\leq y\}\) exists in \(F\) for each \(x\in E\) and we can define the mapping \(p:E\to F\) by the formula
\[p(x)=\inf\{T(y):y\in G\text{ and }x\leq y\},x\in E.\]
For each \(x\in G\) we have \(p(x)=T(x)\). Then, for \(\lambda>0\), we get
\[p(\lambda x)=\inf\{T(y);y\in G\text{ and }\lambda x\leq y\}=\inf\{T(\lambda \cdot y/\lambda);y\in G\text{ and }x\leq y/\lambda\}\]
\[=\lambda\inf\{T(y/\lambda);y\in G\text{ and }x\leq y/\lambda\}=\lambda p(x).\]
Also, let \(x_{1}\) and \(x_{2}\) be arbitrary in \(E\). Then for any \(y_{1},y_{2}\in G\) with \(x_{1}\leq y_{1}\) and \(x_{2}\leq y_{2}\), we have \(x_{1}+x_{2}\leq y_{1}+y_{2}\) and therefore
\[p\left(x_{1}+x_{2}\right)\leq T(y_{1}+y_{2})\leq T(y_{1})+T(y_{2}).\]
Passing to infimum successively with \(y_{1}\) and \(y_{2}\), it follows that \(p\left(x_{1}+x_{2}\right)\leq p(x_{1})+p(x_{2})\), i.e., \(p\) is sublinear.
Now, by Theorem 3.3, the operator \(T\) has a sublinear extension \(S\) on \(E\), satisfying \(S(z)\leq p(z)\) for all \(z\in E\). If \(z\in E^{+}\), then \(-z\leq 0\), and so from \(0=S(z-z)\leq S(z)+S(-z)\) it follows
\[-S(z)\leq S(-z)\leq p(-z)\leq T(0)=0,\]
and we see that \(S(z)\geq 0\). This shows that \(S\) is a positive sublinear extension of \(T\) on \(E\).
**Remark 3.7.** Theorem 3.7 generalizes Theorem 1.32 in [1] (see also Kantorovich [8]).
Now, for a monotone sublinear operator \(T:G\to F\), where \(G\) is a vector subspace of an ordered vector space \(E\) and \(F\) is a Dedekind complete Riesz space, let us denote by \(\mathcal{E}_{sub}(T)\), the collection of all monotone and sublinear extensions of \(T\) on \(E\). Namely,
\[\mathcal{E}_{sub}(T):=\{S\text{ monotone, sublinear and }S=T\text{ on }G\}.\]
Clearly the set \(\mathcal{E}_{sub}(T)\) is a convex subset, that is \(\lambda S+(1-\lambda)R\in\mathcal{E}_{sub}(T)\) for all \(S,R\in\mathcal{E}_{sub}(T)\) and all \(\lambda\in[0,1]\).
In what follows we prove that any extendable monotone sublinear operator whose domain is an ideal has a smallest extension.
**Theorem 3.8.**_Let \(E\) and \(F\) be two Riesz spaces with \(F\) Dedekind complete, let \(A\) be an ideal of \(E\), and let \(T:A\to F\) be a monotone sublinear operator. If \(\mathcal{E}_{sub}(T)\neq\emptyset\), then \(T\) has a smallest extension. Moreover, if in this case \(S=\min\mathcal{E}_{sub}(T)\), then_
\[S(x)=\sup\{T(y):y\in A\text{ and }0\leq y\leq x\},\text{ for all }x\in E^{+}.\]
**Proof.** Since \(\mathcal{E}_{sub}(T)\neq\emptyset\), we may fix a monotone and sublinear extension \(S_{0}\in\mathcal{E}_{sub}(T)\). Applying Theorem 3.6 to \(S_{0}\), and noting that \(S_{0}=T\) on \(A\), the formula
\[T_{A}(x)=\sup\{T(y):y\in A\text{ and }0\leq y\leq x\},\]
\(x\in E^{+}\), extends to a monotone sublinear operator from \(E\) to \(F\) (denoted here also by \(T_{A}\)), satisfying \(T_{A}=T\) on \(A\), and so \(T_{A}\in\mathcal{E}_{sub}(T)\). Now if \(S\in\mathcal{E}_{sub}(T)\), then \(S=T\) holds on \(A\), and hence \(T_{A}=S_{A}\leq S\). Therefore, \(T_{A}=\min\mathcal{E}_{sub}(T)\) holds, as desired. \(\square\)
**Remark 3.8.** Theorem 3.8 generalizes Theorem 1.30, pp. 27 in [1].
In what follows, recall that a vector \(e\) of a convex set \(C\) is called an extreme point of \(C\) if \(e=\lambda x+(1-\lambda)y\) with \(x,y\in C\) and \(0<\lambda<1\) implies \(x=y=e\).
**Theorem 3.9.**_Let \(E\) and \(F\) be two Riesz spaces with \(F\) Dedekind complete, \(G\) a vector subspace of \(E\) and \(T:G\to F\) be a monotone sublinear operator._
_If \(S\in\mathcal{E}_{sub}(T)\), satisfies the condition that for all \(x\in E\) we have \(\inf\{S(|x-y|);y\in G\}=0\), then \(S\) is an extreme point of \(\mathcal{E}_{sub}(T)\)._
**Proof.** Let \(S\in\mathcal{E}_{sub}(T)\) satisfying the hypothesis and assume that \(S=\lambda Q+(1-\lambda)R\) with \(Q,R\in\mathcal{E}_{sub}(T)\) and \(0<\lambda<1\). Then for each \(x,y\in E\) we have
\[|Q(x)-Q(y)|\leq Q(|x-y|)=(\frac{1}{\lambda}S-\frac{1-\lambda}{\lambda}R)(|x-y| )\leq\frac{1}{\lambda}S(|x-y|).\]
Here the inequality \(|Q(x)-Q(y)|\leq Q(|x-y|)\) follows immediately applying the monotonicity of \(Q\) to both inequalities \(x-y\leq|x-y|\) and \(y-x\leq|x-y|\).
In particular, if \(x\in E\) and \(y\in G\), then from \(S(y)=Q(y)=T(y)\) it follows that
\[|S(x)-Q(x)|\leq|S(x)-S(y)|+|Q(y)-Q(x)|\leq(1+1/\lambda)S(|x-y|).\]
Taking into account our hypothesis, the last inequality yields \(S(x)=Q(x)\) for each \(x\in E\), and this shows that \(S\) is an extreme point of \(\mathcal{E}_{sub}(T)\). \(\square\)
**Remark 3.9.** For positive linear operators, the converse of the statement holds too, see Lipecki-Plachky-Thomsen [9] (see also Theorem 1.31, p. 27 in [1]).
**Acknowledgement.** The author thanks Constantin P. Niculescu for a useful discussion on the proof of Theorem 3.7.
|
2303.11873 | A Tale of Two Circuits: Grokking as Competition of Sparse and Dense
Subnetworks | Grokking is a phenomenon where a model trained on an algorithmic task first
overfits but, then, after a large amount of additional training, undergoes a
phase transition to generalize perfectly. We empirically study the internal
structure of networks undergoing grokking on the sparse parity task, and find
that the grokking phase transition corresponds to the emergence of a sparse
subnetwork that dominates model predictions. On an optimization level, we find
that this subnetwork arises when a small subset of neurons undergoes rapid norm
growth, whereas the other neurons in the network decay slowly in norm. Thus, we
suggest that the grokking phase transition can be understood to emerge from
competition of two largely distinct subnetworks: a dense one that dominates
before the transition and generalizes poorly, and a sparse one that dominates
afterwards. | William Merrill, Nikolaos Tsilivis, Aman Shukla | 2023-03-21T14:17:29Z | http://arxiv.org/abs/2303.11873v1 | # A Tale of Two Circuits: Grokking as Competition of Sparse and Dense Subnetworks
###### Abstract
Grokking is a phenomenon where a model trained on an algorithmic task first overfits but, then, after a large amount of additional training, undergoes a phase transition to generalize perfectly. We empirically study the internal structure of networks undergoing grokking on the sparse parity task, and find that the grokking phase transition corresponds to the emergence of a sparse subnetwork that dominates model predictions. On an optimization level, we find that this subnetwork arises when a small subset of neurons undergoes rapid norm growth, whereas the other neurons in the network decay slowly in norm. Thus, we suggest that the grokking phase transition can be understood to emerge from _competition_ of two largely distinct subnetworks: a dense one that dominates before the transition and generalizes poorly, and a sparse one that dominates afterwards.
## 1 Introduction
Grokking (Power et al., 2022; Barak et al., 2022) is a curious generalization trend for neural networks trained on certain algorithmic tasks. Under grokking, the network's accuracy (and loss) plot displays two phases. Early in training, the training accuracy goes to \(100\%\), while the generalization accuracy remains near chance. Since the network appears to be simply memorizing the data in this phase, we refer to this as the _memorization phase_. Significantly later in training, the generalization accuracy spikes suddenly to \(100\%\), which we call the _grokking transition_.
This mysterious pattern defies conventional machine learning wisdom: after initially overfitting, the model is somehow learning the correct, generalizing behavior without any disambiguating evidence from the data. Accounting for this strange behavior motivates developing a theory of grokking rooted in optimization. Moreover, grokking resembles so-called emergent behavior in large language models (Zoph et al., 2022), where performance on some (often algorithmic) capability remains at chance below a critical scale threshold, but, with enough scale, shows roughly monotonic improvement. We thus might view grokking as a controlled test bed for emergence in large language models, and hope that understanding the dynamics of grokking could lead to hypotheses for analyzing such emergent capabilities. Ideally, an effective theory for such phenomena should be able to understand the causal mechanisms behind the phase transitions, predict on which downstream tasks they could happen, and disentangle the statistical (number of data) from the computational (compute time, size of network) aspects of the problem.
While grokking was originally identified on algorithmic tasks, Liu et al. (2023) show it can be induced on natural tasks from other domains with the right hyperparameters. Additionally, grokking-like phase transitions have long been studied in the statistical physics community (Engel and Van den Broeck, 2001), albeit in a slightly different setting (online gradient descent, large limits of model parameters and amount of data etc.). Past work analyzing grokking has reverse-engineered the network behavior in Fourier space (Nanda et al., 2023) and found measures of progress towards generalization before the grokking transition (Barak et al., 2022). Thilak et al. (2022) observe a "slingshot" pattern during grokking: the final layer weight norm follows a roughly sigmoidal growth trend around the grokking phase transition. This suggests grokking is related to the magnitude of neurons within the network, though without a clear theoretical explanation or account of individual neuron behavior.
In this work, we aim to better understand grokking on sparse parity (Barak et al., 2022) by studying the _sparsity_ and _computational structure_ of the model over time. We empirically demonstrate a connection between grokking, emergent sparsity, and competition between different structures inside the model (Figure 1). We first show that, after grokking, network behavior is controlled by a sparse subnetwork (but by a dense one before the transition). Aiming to better understand this sparse subnetwork, we then demonstrate that the grokking phase transition corresponds to accelerated norm growth in a _specific_ set of neurons, and norm decay elsewhere. After this norm growth, we find that the targeted neurons quickly begin to dominate network predictions, leading to the emergence of the sparse subnetwork. We also find that the size of the sparse subnetwork corresponds to the size of a disjunctive normal form circuit for computing parity, suggesting this may be what the model is doing. Taken together, our results suggest grokking arises from targeted norm growth of specific neurons within the network. This targeted norm growth sparsifies the network, potentially enabling generalizing discrete behavior that is useful for algorithmic tasks.
## 2 Tasks, Models, and Methods
Sparse Parity Function.We focus on analyzing grokking in the problem of learning a sparse \((n,k)\)-parity function (Barak et al., 2022). A \((n,k)\)-parity function takes as input a string \(x\in\{\pm 1\}^{n}\) returns \(\pi(x)=\prod_{i\in S}x_{i}\in\{\pm 1\}\), where \(S\) is a fixed, _hidden_ set of \(k\) indices. The training set consists of \(N\) i.i.d. samples of \(x\) and \(\pi(x)\). We call the \((n,k)\)-parity problem _sparse_ when \(k\ll n\), which is satisfied by our choice of \(n=40\), \(k=3\), and \(N=1000\).
Network Architecture.Following Barak et al. (2022), we use a \(1\)-layer ReLU net:
\[f(x) =u^{\intercal}\sigma(Wx+b)\] \[\tilde{y} =\operatorname{sgn}\left(f(x)\right),\]
where \(\sigma(x)=\max(0,x)\) is ReLU, \(u\in\mathbb{R}^{p},W\in\mathbb{R}^{p\times n}\), and \(b\in\mathbb{R}^{p}\). We minimize the hinge loss \(\ell(x,y)=\max(0,1-f(x)y)\), using stochastic gradient descent (batch size \(B\)) with constant learning rate \(\eta\) and (potential) weight decay of strength \(\lambda\) (that is, we minimize the regularized loss \(\ell(x,y)+\lambda\|\theta\|_{2}\), where \(\theta\) denotes all the parameters of the model). Unless stated otherwise, we use weight decay \(\lambda=0.01\), learning rate \(\eta=0.1\) batch size \(B=32\), and hidden size \(p=1000\). We train each network 5 times, varying the random seed for generating the train set and training the model, but keeping the test set of \(100\) points fixed.
### Active Subnetworks and Effective Sparsity
We use a variant of weight magnitude pruning (Mozer and Smolensky, 1989) to find active subnetworks that control the full network predictions. The method assumes a given support of input data \(\mathcal{X}\). Let \(f\) be the network prediction function and \(f_{k}\) be the prediction where the \(p-k\) neurons with the
Figure 1: An illustration of the structure of a neural network during training in algorithmic tasks. Neural networks often exhibit a memorization phase that corresponds to a dense network, followed by the generalization phase where a sparse, largely disjoint to the prior one, model takes over.
least-magnitude incoming edges have been pruned. We define the _active subnetwork_ of \(f\) as \(f_{k}\) where \(k\) is minimal such that, for all \(x\in\mathcal{X}\), \(f(x)=f_{k}(x)\).
We will use the active subnetwork to identify important structures within a network during grokking. We can also naturally use it to measure sparsity: we define the _effective sparsity_ of \(f\) as the number of neurons in the hidden layer of the active subnetwork of \(f\).
## 3 Results
We see in Figure 2 that our sparse parity task indeed displays grokking, both in accuracy and loss. We now turn to analyzing the internal network structure before, during, and after grokking. We refer to Appendix C for additional configurations (smaller weight decay or larger parity size) that support our findings1.
Footnote 1: Code available on [https://github.com/Tsili42/parity-nn](https://github.com/Tsili42/parity-nn)
### Grokking Corresponds to Sparsification
Figure 2 (right) shows the effective sparsity (number of active neurons; cf. Section 2.1) of the network over time. Noticeably, it becomes orders of magnitude sparser as it starts generalizing to the test set, and crucially, this phase transition happens at the same time as the loss phase transition. This can be directly attributed to the norm regularization being applied in the loss function, as it kicks in right after we reach (almost) zero in the data-fidelity part of the loss. Interestingly, this phase transition can be calculated solely from the training data but correlates with the phase transition in the test accuracy.
Nanda et al. (2023) observe sparsity in the Fourier domain after grokking, whereas we have found it in the conventional network structure as well. Motivated by the discovery of this sparse subnetwork, we now turn our attention to understanding why this subnetwork emerges and its structure.
### Selective Norm Growth Induces Sparsity During Grokking
Having identified a sparse subnetwork of neurons that emerges to control network behavior after the grokking transition, we study the properties of these neurons throughout training (before the formation of the sparse subnetwork). Figure 3 (left) plots the average neuron norm for three sets of neurons: the neurons that end up in the sparse subnetwork, the complement of those neurons, and a set of random neurons with the same size as the sparse subnetwork. We find that the 3 networks have similar average norm up to a point slightly before the grokking phase transition, at which the generalizing subnetwork norm begins to grow rapidly.
In Figure 3 (right), we measure the faithfulness of the neurons in the sparse subnetwork over time: in other words, the ability of these neurons alone to reconstruct the full network predictions on the test set, measured as accuracy. The grokking phase transition corresponds to these networks emerging
Figure 2: Accuracy (left), Average Loss (middle) and Effective Sparsity (right) during training of an FC network on \((40,3)\) parity. Generalization accuracy suddenly jumps from random chance to flawless prediction concurrent with sparsification of the model. Shaded areas show randomness over the training dataset sampling, model initialization, and stochasticity of SGD (5 random seeds).
to fully explain network predictions, and we believe this is likely a causal effect of norm growth.2 The fact that the performance of its complement degrades after grokking supports the conclusion that the sparse network is "competing" with another network to inform model predictions, and that the grokking transition corresponds to a sudden switch where the sparse network dominates model output.
Footnote 2: Conventional machine learning wisdom associates small weight norm with sparsity, so it may appear counterintuitive that _growing_ norm induces sparsity. We note that the growth of selective weights can lead to effective sparsity because the large weights dominate linear layers (Merrill et al., 2021).
The element of competition between the different circuits is further evident when plotting the norm of individual neurons over time. Figure 5 in the Appendix shows that neurons active during the memorization phase slightly grow in norm before grokking but then "die out", while the the neurons of sparse subnetwork are inactive during memorization and then explode in norm. The fact that the model is overparameterized allows this kind of competition to take place.
### Subnetwork Computational Structure
Sparse Subnetwork.Across \(5\) runs, the sparse subnetwork has size \(\{6,6,6,8,8\}\). This suggests that the network may be computing the parity via a representation resembling disjunctive normal form (DNF), via the following argument. A standard DNF construction uses \(2^{k}=8\) neurons to compute the parity of \(k=3\) bits (Proposition 1). We also derive a modified DNF net that uses only \(6\) neurons to compute the parity of \(3\) bits (Proposition 2). Since our sparse subnetwork always contains either \(6\) or \(8\) neuron, we speculate it may always be implementing a variant of these constructions. However, there is an even smaller network computing an \((n,3)\)-parity with only 4 neurons via a threshold-gate construction, but it does not appear to be found by our networks (Proposition 3).
Dense Subnetwork.The network active during the so-called memorization phase is not exactly memorizing. Evidence for this claim comes from observing grokking on the binary operator task of Power et al. (2022). For the originally reported division operator task, the network obtains near zero generalization prior to grokking (Figure 4, right). However, switching the operator to addition, the generalization accuracy is above chance before grokking (Figure 4, left). We hypothesize this is because the network, even pre-grokking, can generalize to unseen data since addition (unlike division) is commutative. In this sense, it is not strictly memorizing the training data.
## 4 Conclusion
We have shown empirically that the grokking phase transition, at least in a specific setting, arises from competition between a sparse subnetwork and the (dense) rest of the network. Moreover, grokking seems to arise from selective norm growth of this subnetwork's neurons. As a result, the sparse subnetwork is largely inactive during the memorization phase, but soon after grokking, fully controls model prediction.
Figure 3: Left: Average norm of different subnetworks during training. Right: Agreement between the predictions of a subnetwork and the full network on the test set. The _generalizing_ subnetwork is the final sparse net, the _complementary_ subnetwork is its complement, and the _control_ subnetwork is a random network with the same size as the generalizing one.
We speculate that norm growth and sparsity may facilitate emergent behavior in large language models similar to their role in grokking. As preliminary evidence, Merrill et al. (2021) observed monotonic norm growth of the parameters in T5, leading to "saturation" of the network in function space.3 More promisingly, Dettmers et al. (2022) observe that a targeted subset of weights in pretrained language models have high magnitude, and that these weights overwhelmingly explain model predictions. It would also be interesting to extend our analysis of grokking to large language models: specifically, does targeted norm growth subnetworks of large language models (Dettmers et al., 2022) facilitate emergent behavior?
Footnote 3: Saturation measures the discreteness of the network function, but may relate to effective sparsity.
## 5 Acknowledgements
This material is based upon work supported by the National Science Foundation under NSF Award 1922658.
|
2307.12655 | Domino Snake Problems on Groups | In this article we study domino snake problems on finitely generated groups.
We provide general properties of these problems and introduce new tools for
their study. The first is the use of symbolic dynamics to understand the set of
all possible snakes. Using this approach we solve many variations of the
infinite snake problem including the geodesic snake problem for certain classes
of groups. Next, we introduce a notion of embedding that allows us to reduce
the decidability of snake problems from one group to another. This notion
enable us to establish the undecidability of the infinite snake and ouroboros
problems on nilpotent groups for any generating set, given that we add a
well-chosen element. Finally, we make use of monadic second order logic to
prove that domino snake problems are decidable on virtually free groups for all
generating sets. | Nathalie Aubrun, Nicolas Bitar | 2023-07-24T09:54:25Z | http://arxiv.org/abs/2307.12655v1 | # Domino Snake Problems on Groups
###### Abstract
In this article we study domino snake problems on finitely generated groups. We provide general properties of these problems and introduce new tools for their study. The first is the use of symbolic dynamics to understand the set of all possible snakes. Using this approach we solve many variations of the infinite snake problem including the geodesic snake problem for certain classes of groups. Next, we introduce a notion of embedding that allows us to reduce the decidability of snake problems from one group to another. This notion enable us to establish the undecidability of the infinite snake and ouroboros problems on nilpotent groups for any generating set, given that we add a well-chosen element. Finally, we make use of monadic second order logic to prove that domino snake problems are decidable on virtually free groups for all generating sets.
Keywords:Domino Snake Problems Computability Theory Symbolic Dynamics Combinatorial Group Theory MSO logic.
## 1 Introduction
Since their introduction more than 60 years ago [26], domino problems have had a long history of providing complexity lower bounds and undecidability of numerous decision problems [5, 15, 10, 14]. The input to these problems is a set of _Wang tiles_: unit square tiles with colored edges and fixed orientation. The decision problems follow the same global structure; given a finite set of Wang tiles, is there an algorithm to determine if they tile a particular shape or subset of the infinite grid such that adjacent tiles share the same color along their common border? An interesting variant of this general formula are domino snake problems. First introduced by Myers in 1979 [23], snake problems ask for threads -or snakes- of tiles that satisfy certain constraints. In particular, three of them stand-out. The infinite snake problem asks is there exists a tiling of a self-avoiding bi-infinite path on the grid, the ouroboros problem asks if there exists a non-trivial cycle on the grid, and the snake reachability problem asks if there exists a tiling of a self-avoiding path between two prescribed points. Adjacency rules are only required to be respected along the path. These problems have had their computability completely classified [12, 13, 8, 9, 1, 18] (see Theorem 1). In this article, we expand the scope of domino snake problems to finitely generated groups, as has been done for other domino problems [3], to understand how the underlying structure affects computability.
We present three novel ways in which to approach these problems. The first is the use of symbolic dynamics to understand the set of all possible snakes. Theorem 1.1 states that when this set is defined through a regular language of forbidden patterns, the infinite snake problem becomes decidable. Using this approach we solve many variations of the infinite snake problem including the geodesic snake problem for some classes of groups. Next, we introduce a notion of embedding that allows us to reduce the decidability of snake problems from one group to another. This notion enable us to establish the undecidability of the infinite snake and ouroboros problems on a large class of groups -that most notably include nilpotent groups- for any generating set, provided that we add a central torsion-free element. Finally, to tackle virtually free groups, we express the three snake problems in the language of Monadic Second Order logic. Because for this class of groups this fraction of logic is decidable, we show that our three decision problems are decidable independently of the generating set.
## 2 Preliminaries
Given a finite alphabet \(A\), we denote by \(A^{n}\) the set of words on \(A\) of length \(n\), and \(A^{*}\) the set of all finite length words including the empty word \(\epsilon\). Furthermore, we denote by \(A^{+}=A^{*}\setminus\{\epsilon\}\) the set of non-empty finite words over \(A\). A factor \(v\) of a word \(w\) is a contiguous subword; we denote this by \(v\sqsubseteq w\). We denote discrete intervals by \(\llbracket n,m\rrbracket=\{n,n+1,...,m-1,m\}\). We also denote the free group defined from the free generating set \(S\) by \(\mathbb{F}_{S}\). The proofs missing in the main text are contained in the Appendix.
### Symbolic Dynamics
Given a finite alphabet \(A\), we define the _full-shift_ over \(A\) as the set of configurations \(A^{\mathbb{Z}}=\{x:\mathbb{Z}\to A\}\). There is a natural \(\mathbb{Z}\)-action on the full-shift called the _shift_, \(\sigma:A^{\mathbb{Z}}\to A^{\mathbb{Z}}\), given by \(\sigma(x)_{i}=x_{i+1}\). The full-shift is also endowed with the prodiscrete topology, making it a compact space.
Let \(F\subseteq\mathbb{Z}\) be a finite subset. A _pattern_ of support \(F\) is an element \(p\in A^{F}\). We say a pattern \(p\in A^{F}\) appears in a configuration \(x\in A^{\mathbb{Z}}\), denoted \(p\sqsubseteq x\), if there exists \(k\in\mathbb{Z}\) such that \(x_{k+i}=p_{i}\) for all \(i\in F\). Given a set of patterns \(\mathcal{F}\), we can define the set of configurations where no pattern from \(\mathcal{F}\) appears,
\[X_{\mathcal{F}}\coloneqq\{x\in A^{\mathbb{Z}}\mid\forall p\in\mathcal{F},\ p\text{ does not appear in }x\}.\]
A _subshift_ is a subset of the full-shift \(X\subseteq A^{\mathbb{Z}}\) such that there exists a set of patterns \(\mathcal{F}\) that verifies \(X=X_{\mathcal{F}}\). A classic result shows subshifts can be equivalently defined as closed \(\sigma\)-invariant subsets of the full-shift. We say a subshift \(X_{\mathcal{F}}\) is
* a _subshift of finite type_ (SFT) if \(\mathcal{F}\) is finite,
* _sofic_ if \(\mathcal{F}\) is a regular language,
* _effective_ if \(\mathcal{F}\) is a decidable language.
Each class is strictly contained within the next. Sofic subshifts can be equivalently defined as the set of bi-infinite walks on a labeled finite graph. It is therefore decidable, given the automaton or graph that defines the subshift, to determine if the subshift is empty. Similarly, given a Turing machine for the forbidden language of an effective subshift, we have a semi-algorithm to determine if the subshift is empty.
Lastly, we say a configuration \(x\in X\) is _periodic_ if there exists \(k\in\mathbb{Z}^{*}\) such that \(\sigma^{k}(x)=x\). We say the subshift \(X\) is _aperiodic_ if it contains no periodic configurations. For a comprehensive introduction to one-dimensional symbolic dynamics we refer the reader to [20].
### Combinatorial Group Theory
Let \(G\) be a finitely generated (f.g.) group and \(S\) a finite generating set. Elements in the group are represented as words on the alphabet \(S\cup S^{-1}\) through the evaluation function \(w\mapsto\overline{w}\). Two words \(w\) and \(v\) represent the same element in \(G\) when \(\overline{w}=\overline{v}\), and we denote it by \(w=_{G}v\). We say a word is _reduced_ if it contains no factor of the form \(ss^{-1}\) or \(s^{-1}s\) with \(s\in S\).
Definition 1: Let \(G\) be a group, \(S\) a subset of \(G\) and \(R\) a language on \(S\cup S^{-1}\). We say \((S,R)\) is a _presentation_ of \(G\), denoted \(G=\langle S\mid R\rangle\), if the group is isomorphic to \(\langle S\mid R\rangle=\mathbb{F}_{S}/\langle\langle R\rangle\rangle\), where \(\langle\langle R\rangle\rangle\) is the normal closure of \(R\), i.e. the smallest normal subgroup containing \(R\). We say \(G\) is _recursively presented_ if there exists a presentation \((S,R)\) such that \(S\) is finite and \(R\) is recursively enumerable.
For a group \(G\) and a generating set \(S\), we define:
\[\mathrm{WP}(G,S)\coloneqq\{w\in(S\cup S^{-1})^{*}\ |\ \overline{w}=1_{G}\}.\]
Definition 2: The _word problem_ (WP) of a group \(G\) with respect to a set of generators \(S\) asks to determine, given a word \(w\in(S\cup S^{-1})^{*}\), if \(w\in\mathrm{WP}(G,S)\).
We say a word \(w\in(S\cup S^{-1})^{+}\) is _G-reduced_ if \(w\) contains no factor in \(\mathrm{WP}(G,S)\). We say a word \(w\in(S\cup S^{-1})^{*}\) is a _geodesic_ if for all words \(v\in(S\cup S^{-1})^{*}\) such that \(\overline{w}=\overline{v}\) we have \(|w|\leq|v|\). For a given group \(G\) and generating set \(S\), we denote its language of geodesics by \(\mathrm{Geo}(G,S)\).
We say an element \(g\in G\) has _torsion_ if there exists \(n\geq 1\) such that \(g^{n}=1_{G}\). If there is no such \(n\), we say \(g\) is _torsion-free_. Analogously, we say \(G\) is a torsion group if all of its elements have torsion. Otherwise if the only element of finite order is \(1_{G}\) we say the group is torsion-free.
Finally, let \(\mathcal{P}\) be a class of groups (for example abelian groups, free groups, etc). We say a group \(G\) is _virtually_\(\mathcal{P}\), if there exists a finite index subgroup \(H\leq G\) that is in \(\mathcal{P}\).
## 3 Snake Behaviour
Although the original snake problems were posed for the Wang tile model, we make use of a different, yet equivalent, formalism (see the Appendix for a proof of the equivalence of the two models).
Definition 3: A tileset graph (or tileset) for a f.g. group \((G,S)\) is a finite multigraph \(\Gamma=(A,B)\) such that each edge is uniquely determined by \((a,a^{\prime},s)\) where \(s\in S\cup S^{-1}\) and \(a,a^{\prime}\in A\) are its initial and final vertices respectively, and if \((a,a^{\prime},s)\in B\) then \((a^{\prime},a,s^{-1})\in B\).
In what follows, \(I\) denotes \(\mathbb{Z}\), \(\mathbb{N}\), or a discrete interval \(\llbracket n,m\rrbracket\), depending on the context.
Definition 4: Let \((G,S)\) be a f.g. group, and \(\Gamma=(A,B)\) a tileset for the pair. A _snake_ or \(\Gamma\)-snake is a pair of functions \((\omega,\zeta)\), where \(\omega:I\to G\) is an injective function, referred to as the snake's skeleton and \(\zeta:I\to A\) the snake's _scales_. These pairs must satisfy that \(d\omega_{i}:=\omega(i)^{-1}\omega(i+1)\in S\cup S^{-1}\) and \((\zeta(i),\zeta(i+1))\) must be an edge in \(\Gamma\) labeled by \(d\omega_{i}\).
We say a snake \((\omega,\zeta)\) connects the points \(p,q\in G\) if there exists a \(n\in\mathbb{N}\) such that \((\omega,\zeta)\) is defined over \(\llbracket 0,n\rrbracket\), \(\omega(0)=p\) and \(\omega(n)=q\). We say a snake is bi-infinite if its domain is \(\mathbb{Z}\). A \(\Gamma\)-ouroboros is a \(\Gamma\)-snake defined over \(\llbracket 0,n\rrbracket\), with \(n\geq 2\), that is injective except for \(\omega(0)=\omega(n)\). In other words, a \(\Gamma\)-ouroboros is a well-tiled simple non-trivial cycle. We study the following three decision problems:
Definition 5: Let \((G,S)\) be a f.g. group. Given a tileset \(\Gamma\) for \((G,S)\) and two points \(p,q\in G\),
* the _infinite snake problem_ asks if there exists a bi-infinite \(\Gamma\)-snake,
* the _ouroboros problem_ asks if there exists a \(\Gamma\)-ouroboros,
* the _snake reachability problem_ asks if there exists a \(\Gamma\)-snake connecting \(p\) and \(q\).
Figure 1: Two Wang tiles that do not tile \(\mathbb{Z}^{2}\), the corresponding graph \(\Gamma\) for the generating set \(\{(1,0),(0,1)\}\) (edges labeled with generator inverses are omitted for more readability) and a \(\Gamma\)-snake with \(I=\mathbb{N}\). In the Wang tile model, two tiles can be placed next to each other if they have the same color on the shared side.
We can also talk about the _seeded_ variants of these three problems. In these versions, we add a selected tile \(a_{0}\in A\) to our input and ask for the corresponding snake/ouroboros to satisfy \(\zeta(0)=a_{0}\).
All of these problems have been studied and classified for \(\mathbb{Z}^{2}\) with its standard generating set \(\{(1,0),(0,1)\}\).
Theorem 3.1: _Let \(S\) be the standard generating set for \(\mathbb{Z}^{2}\). Then,_
1. _The snake reachability problem for_ \((\mathbb{Z}^{2},S)\) _is PSPACE-complete_ _[_13_]__,_
2. _The infinite snake problem for_ \((\mathbb{Z}^{2},S)\) _is_ \(\Pi^{0}_{1}\)_-complete_ _[_1_]__,_
3. _The ouroboros problem for_ \((\mathbb{Z}^{2},S)\) _is_ \(\Sigma^{0}_{1}\)_-complete_ _[_8, 18_]__._
_In addition, the seeded variants of these problems are undecidable [8]._
Our aim is to extend these results to larger classes of groups and different generating sets.
### General Properties
Let \((G,S)\) be a f.g. group and \(\Gamma\) a tileset. If there exists a snake \((\omega,\zeta)\), then for every \(g\in G\), \((g\omega,\zeta)\) is a snake. If we define \(\tilde{\omega}(i)=g\omega(i)\), then \(d\tilde{\omega}=d\omega\), as the adjacency of \(\zeta\) in \(\Gamma\) remains unchanged. In particular, there exists a snake \((\omega^{\prime},\zeta)\) such that \(\omega^{\prime}(0)=1_{G}\), i.e. we may assume that a snake starts at the identity \(1_{G}\).
The next result is a generalization of a result due to Kari for \(\mathbb{Z}^{2}\). Although, the proof is essentially the same, we provide it in the Appendix for completion.
Proposition 1: _Let \(\Gamma\) be a tileset for a f.g. group \((G,S)\). Then, the following are equivalent:_
1. \(\Gamma\) _admits a bi-infinite snake,_
2. \(\Gamma\) _admits a one-way infinite snake,_
3. \(\Gamma\) _admits a snake of every length._
This result implies that a tileset that admits no snakes will fail to tile any snake bigger than a certain length. Therefore, if we have a procedure to test snakes of increasing length, we have a semi-algorithm to test if a tileset does not admit an infinite snake.
Corollary 1: _If \(G\) has decidable WP, the infinite snake problem is in \(\Pi^{0}_{1}\)._
A similar process can be done for the ouroboros problem.
Proposition 2: _If \(G\) has decidable WP, the ouroboros problem is in \(\Sigma^{0}_{1}\)._
Proof: Let \(\Gamma\) be a tileset graph for \((G,S)\). For each \(n\geq 1\), we test each word of length \(n\) to see if it defines a simple loop and if it admits a valid tiling. More precisely, for \(w\in(S\cup S^{-1})^{n}\), we use the word problem algorithm to check if \(w\) is \(G\)-reduced and evaluates to \(1_{G}\). If it is reduced, we test all possible tilings by \(\Gamma\) of the path defined by following the generators in \(w\). If we find a valid tiling,
we accept. If not, we keep iterating with the next word length \(n\) and eventually with words of length \(n+1\).
If there is a \(\Gamma\)-ouroboros, this process with halt and accept. Similarly, if the process halts we have found a \(\Gamma\)-ouroboros. Finally, if there is no \(\Gamma\)-ouroboros the process continues indefinitely.
We also state reductions when working with subgroups or adding generators.
Lemma 1: _Let \((G,S)\) be a f.g. group, \((H,T)\) a f.g. subgroup of \(G\) and \(w\in(S\cup S^{-1})^{+}\). Then,_
* _The infinite snake, ouroboros and reachability problems in_ \((H,T)\) _many one-reduce to their respective analogues in_ \((G,S\cup T)\)_._
* _The infinite snake, ouroboros and reachability problems in_ \((G,S)\) _many one-reduce to their respective analogues in_ \((G,S\cup\{w\})\)_._
Proof: Any tileset graph for \((H,T)\) is a tileset graph for \((G,S\cup T)\), and any tileset graph for \((G,S)\) is a tileset graph for \((G,S\cup\{w\})\).
## 4 Ossuary
Much of the complexity of snakes comes from the paths they define on the underlying group. It stands to reason that understanding the structure of all possible injective bi-infinite paths on the group can shed light on the computability of the infinite snake problem. Let \(G\) be a f.g. group with \(S\) a set of generators. The _skeleton_ subshift of the pair \((G,S)\) is defined as
\[X_{G,S}\coloneqq\{x\in(S\cup S^{-1})^{\mathbb{Z}}\mid\forall w\sqsubseteq x, \ w\not\in\mathrm{WP}(G,S)\}.\]
This subshift is the set of all possible skeletons: recall from Definition 4 that for any skeleton \(\omega\), we can define \(d\omega:\mathbb{Z}\to S\cup S^{-1}\) as \(d\omega_{i}=\omega(i)^{-1}\omega(i+1)\). Thus, for any infinite snake \((\omega,\zeta)\colon d\omega\in X_{G,S}\).
This formalism allows us to introduce variations of the infinite snake problem where we ask for additional properties on the skeleton. We say a subset \(Y\subseteq X_{G,S}\) is _skeletal_ if it is shift-invariant. In particular, all subshifts of \(X_{G,S}\) are skeletal.
Definition 6: Let \(Y\) be a skeletal subset. The \(Y\)-snake problem asks, given a tileset \(\Gamma\), does there exist a bi-infinite \(\Gamma\)-snake \((\omega,\zeta)\) such that \(d\omega\in Y\)?
### Skeletons and Decidability
A snake \((\omega,\zeta)\) can be seen as two walks that follow the same labels: \(\omega\) is a self-avoiding walk on the Cayley graph of the group, and \(\zeta\) a walk on the tileset graph. The next result, which is a direct consequence of the definitions, uses this fact to construct a single object that captures both walks.
For \((G,S)\) a f.g. group, \(\Gamma\) a tileset graph, denote \(X_{\Gamma}\subseteq\left(S\cup S^{-1}\right)^{\mathbb{Z}}\) the subshift whose configurations are the labels of bi-infinite paths over \(\Gamma\). This implies \(X_{\Gamma}\) is a sofic subshift.
Proposition 3: _Let \((G,S)\) be a f.g. group, \(\Gamma\) a tileset graph and \(Y\) a non-empty skeletal subset. Then \(X=Y\cap X_{\Gamma}\) is non-empty if and only if there is a bi-infinite \(\Gamma\)-snake \((\omega,\zeta)\) with \(d\omega\in Y\). In addition, if \(Y\) is an effective/sofic subshift, then \(X\) is an effective/sofic subshift._
The previous result reduces the problem of finding an infinite \(Y\)-snake, to the problem of emptiness of the intersection of two one-dimensional subshifts. As previously stated determining if a subshift is empty is co-recursively enumerable for effective subshifts, and decidable for sofics. Therefore, we can provide a semi-algorithm when the skeleton is effective. This is true for the class of recursively presented groups. Because these groups have recursively enumerable word problem, \(\mathrm{WP}(G,S)\) is recursively enumerable for all finite generating sets. This enumeration gives us an enumeration of the forbidden patterns of our subshift.
Proposition 4: _Let \(G\) be a recursively presented group. Then \(X_{G,S}\) is effective for every finite generating set \(S\)._
This allows us to state the following proposition.
Proposition 5: _Let \(Y\) be a skeletal subshift. Then, if \(Y\) is sofic (resp. effective) the \(Y\)-snake problem is decidable (resp. in \(\Pi^{0}_{1}\)). In particular, if \(G\) is recursively presented, the infinite snake problem for any generating set is in \(\Pi^{0}_{1}\)._
We now identify where the undecidability of the infinite snake problem comes from. If we restrict the directions in which the snake moves to 2 or 3, the problem becomes decidable. This means that the ability to make "space-filling"-like curves in \(\mathbb{Z}^{2}\) is, heuristically speaking, required for the proof of the undecidability of the infinite snake problem.
Theorem 4.2: _The infinite snake problem in \(\mathbb{Z}^{2}\) restricted to 2 or 3 directions among \((1,0),(0,1),(-1,0),(0,-1)\) is decidable._
Proof: The set of skeletons of snakes restricted to 3 directions, for instance left, right and up (denoted by \(a^{-1}\), \(a\) and \(b\) respectively), is the subshift \(Y_{3}\subseteq\{a,a^{-1},b\}^{\mathbb{Z}}\) where the only forbidden words are \(aa^{-1}\) and \(a^{-1}a\). As \(Y_{3}\) is a skeletal SFT of \(X_{\mathbb{Z}^{2},\{a,b\}}\), by Proposition 5, the \(Y_{3}\)-snake problem is decidable. The case of two directions is analogous as \(Y_{2}\) is the full shift on the two generators \(a\) and \(b\).
A natural variation of the infinite snake problem, from the point of view of group theory, is asking if there is an infinite snake whose skeleton defines a geodesic. These skeletons are captured by the geodesic skeleton subshift; a subshift of \(X_{G,S}\) comprised exclusively of bi-infinite geodesic rays. Formally,
\[X^{g}_{G,S}=\{x\in X_{G,S}\mid\forall w\sqsubseteq x,w^{\prime}=_{G}w:\ |w|\leq|w^{\prime}|\}.\]
This subshift can be equivalently defined through the set of forbidden patterns given by \(\mathrm{Geo}(G,S)\). Then, by the definition of a sofic shift, we can state the following proposition.
**Proposition 6**: _Let \((G,S)\) be a f.g. group. If \(\mathrm{Geo}(G,S)\) is regular, then \(X^{g}_{G,S}\) is sofic._
\(\mathrm{Geo}(G,S)\) is known to be regular for all generating sets in abelian groups [24] and hyperbolic groups [11], and for some generating sets in many other classes of groups [24, 17, 6, 16, 2]. Theorem 3.1 implies that the geodesic infinite snake problem is decidable for all such \((G,S)\); most notably for \(\mathbb{Z}^{2}\) with its standard generating set.
Theorem 5.1: _The geodesic infinite snake problem is decidable for any f.g. group \((G,S)\) such that \(\mathrm{Geo}(G,S)\) is regular. In particular, it is decidable for abelian and hyperbolic groups for all generating sets._
What happens with skeletal subsets that are not closed and/or not effective? In these cases, Ebbinghaus showed that the problem can be undecidable outside of the arithmetical hierarchy [9]. If we define \(Y\) to be the skeletal subset of \((\mathbb{Z}^{2},\{a,b\})\) of skeletons that are not eventually a straight line, then, \(Y\) is not closed and deciding if there exists a \(Y\)-skeleton snake is \(\Sigma^{1}_{1}\)-complete. Similarly, if we take the \(Y\) to be the set of non-computable skeletons of \(\mathbb{Z}^{2}\), the \(Y\)-skeleton problem is also \(\Sigma^{1}_{1}\)-complete.
## 5 Snake Embeddings
Let us introduce a suitable notion of embedding, that guarantees a reduction of snake problems. To do this, we will make use of a specific class of finite-state transducer called _invertible-reversible transducer_, that will translate generators of one group to another in an automatic manner.
Definition 7: An invertible-reversible transducer is a tuple \(\mathcal{M}=(Q,S,T,q_{0},\delta,\eta)\) where,
* \(Q\) is a finite set of states,
* \(S,T\) are finite alphabets,
* \(q_{0}\in Q\) is an initial state,
* \(\delta:Q\times S\to Q\) is a transition function,
* \(\eta:Q\times S\to T\) is such that \(\eta(q,\cdot)\) is an injective function for all \(q\in Q\),
such that for all \(q\in Q\) and \(s\in S\) there exists a unique \(q^{\prime}\) such that \(\delta(q^{\prime},s)=q\).
We extend both \(\eta\) and \(\delta\) to manage inverses of \(S\) by setting \(\eta(q,s^{-1})=\eta(q^{\prime},s)^{-1}\) and \(\delta(q,s^{-1})=q^{\prime}\), where \(q^{\prime}\) is the unique state satisfying \(\delta(q^{\prime},s)=q\). Furthermore, we denote by \(q_{w}\) the state of \(\mathcal{M}\) reached after reading the word \(w\in(S\cup S^{-1})^{*}\) starting from \(q_{0}\). We introduce the function \(f_{\mathcal{M}}:(S\cup S^{-1})^{*}\to(T\cup T^{-1})^{*}\) recursively defined as \(f_{\mathcal{M}}(\epsilon)=\epsilon\) and \(f_{\mathcal{M}}(ws^{\pm 1})=f_{\mathcal{M}}(w)\eta(q_{w},s^{\pm 1})\).
Definition 8: Let \((G,S)\) and \((H,T)\) be two f.g. groups. A map \(\phi:G\to H\) is called a _snake embedding_ if there exists a transducer \(\mathcal{M}\) such that \(\phi(g)=f_{\mathcal{M}}(w)\) for all \(w\in(S\cup S^{-1})^{*}\) such that \(\bar{w}=g\), and \(f_{\mathcal{M}}(w)=_{H}f_{\mathcal{M}}(w^{\prime})\) if and only if \(w=_{G}w^{\prime}\).
Remark 1: Snake-embeddings are a strictly stronger form of a translation-like action. Such an action is a right group action \(*:G\to H\) that is free, i.e. \(h*g=h\) implies \(g=1_{G}\), and \(\{d(h,h*g)\mid h\in H\}\) is bounded for all \(g\in G\). A straightforward argument shows that if \(\phi:(G,S)\to(H,T)\) is a snake-embedding, then \(h*g=h\phi(g)\) is a translation-like action. The converse is not true: there are translation-like actions that are not defined by snake-embeddings. For instance, from Definition 8 we see that there is a snake embedding from \(\mathbb{Z}\) to a group \(G\) if and only if \(\mathbb{Z}\) is a subgroup of \(G\). Nevertheless, infinite torsion groups admit translation-like actions from \(\mathbb{Z}\), as shown by Seward in [25], but do not contain \(\mathbb{Z}\) as a subgroup.
Proposition 7: _Let \((G,S)\) and \((H,T)\) be two f.g. groups such that there exists a snake-embedding \(\phi:G\to H\). Then, the infinite snake (resp. ouroboros) problem on \((G,S)\) many-one reduces to the infinite snake (resp. ouroboros) problem on \((H,T)\)._
The reduction consists in taking a tileset for \((G,S)\) and using the transducer to create a tileset for \((H,T)\) that is consistent with the structure of \(G\). Because the transducer is invertible-reversible, we have a computable way to transform a bi-infinite snake from one group to the other.
Using snake-embeddings we can prove that non-\(\mathbb{Z}\) f.g. free abelian groups have undecidable snake problems.
Proposition 8: _The infinite snake and ouroboros problems on \(\mathbb{Z}^{d}\) with \(d\geq 2\) are undecidable for all generating sets._
Proof: Let \(S=\{v_{1},...,v_{n}\}\) be a generating set for \(\mathbb{Z}^{d}\). As \(S\) generates the group, there are two generators \(v_{i_{1}}\) and \(v_{i_{2}}\), such that \(v_{i_{1}}\mathbb{Z}\cap v_{i_{2}}\mathbb{Z}=\{1_{\mathbb{Z}^{d}}\}\). Then, \(H=\langle v_{i_{1}},v_{i_{2}}\rangle\simeq\mathbb{Z}^{2}\) and there is clearly a snake-embedding from \(\mathbb{Z}^{2}\) to \(H\). Finally, by Lemma 1, the infinite snake and ouroboros problems are undecidable for \((\mathbb{Z}^{d},S)\).
## 6 Virtually Nilpotent Groups
Through the use of snake-embeddings and skeleton subshifts, we extend undecidability results from abelian groups to the strictly larger class of virtually nilpotent groups. For any group \(G\) we define \(Z_{0}(G)=\{1_{G}\}\) and
\[Z_{i+1}(G)=\{g\in G\mid ghg^{-1}h^{-1}\in Z_{i}(G),\forall h\in G\}.\]
The set \(Z_{1}(G)\) is called the _center_ of \(G\), and by definition is the set of elements that commute with every element in \(G\). We say a group is _nilpotent_ if there exists \(i\geq 0\) such that \(Z_{i}(G)=G\).
The next Lemma is stated for a larger class of groups that contain nilpotent groups, that will allow us to prove the undecidability results on the latter class.
Lemma 2: _Let \((G,S)\) be a f.g. group that contains an infinite order element \(g\) in its center, such that \(G/\langle g\rangle\) is not a torsion group. Then, there is a snake embedding from \((\mathbb{Z}^{2},\{a,b\})\) into \((G,S\cup\{g\})\), where \(\{a,b\}\) is the standard generating set for \(\mathbb{Z}^{2}\)._
The proof consists in finding a distorted copy of \(\mathbb{Z}^{2}\) within \((G,S\cup\{g\})\). One of the copies of \(\mathbb{Z}\) is given by \(\langle g\rangle\simeq\mathbb{Z}\). The other is obtained through the following result.
Proposition 9: _Let \((G,S)\) be a f.g. group. Then, \(G\) is a torsion group if and only if \(X_{G,S}\) is aperiodic._
Using \(g\) and a periodic point from this proposition we construct the snake-embedding.
Proposition 10: _Let \((G,S)\) be a f.g. group that contains a infinite order element \(g\) in its center and \(G/\langle g\rangle\) is not a torsion group. Then, \((G,S\cup\{g\})\) has undecidable infinite snake and ouroboros problems._
Proof: By Lemma 2, there is a snake-embedding from \(\mathbb{Z}^{2}\) to \((G,S\cup\{g\})\). Combining Proposition 7 and Theorem 1, we conclude that both problems are undecidable on \((G,S\cup\{g\})\).
Theorem 6.1: _Let \((G,S)\) be a f.g. infinite, non-virtually \(\mathbb{Z}\), nilpotent group. Then there exists \(g\) such that \((G,S\cup\{g\})\) has undecidable infinite snake and ouroboros problems._
Proof: Let \(G\) be a f.g. infinite nilpotent group that is not virtually cyclic. Because \(G\) is nilpotent, there exists a torsion-free element \(g\in Z_{1}(G)\). Furthermore no quotient of \(G\) is an infinite torsion group [7]. In addition, as \(G\) is not virtually cyclic \(G/\langle g\rangle\) is not finite. Therefore, by Proposition 10, both problems are undecidable on \((G,S\cup\{g\})\).
Through Lemma 1 we obtain undecidability for virtually nilpotent groups.
Corollary 2: _Let \(G\) be a f.g. infinite, non virtually \(\mathbb{Z}\), virtually nilpotent group. Then there exists a finite generating set \(S\) such that \((G,S)\) has undecidable infinite snake and ouroboros problems._
## 7 Snakes and Logic
We want to express snake problems as a formula that can be shown to be satisfied for a large class of Cayley graphs. To do this we use Monadic Second-Order (MSO) logic, as has been previously been done for the domino problem. Our formalism is inspired by [4]. Let \(\Lambda=(V,E)\) be an \(S\)-labeled graph with root \(v_{0}\). This fraction of logic consists of variables \(P,Q,R,...\) that represent subsets of vertices of \(\Lambda\), along with the constant set \(\{v_{0}\}\); as well as an operation for each \(s\in S\), \(P\cdot s\), representing all vertices reached when traversing an edge labeled by
\(s\) from a vertex in \(P\). In addition, we can use the relation \(\subseteq\), Boolean operators \(\wedge,\vee,\neg\) and quantifiers \(\forall,\exists\). For instance, we can express set equality by the formula \((P=Q)\equiv(P\subseteq Q\wedge Q\subseteq P)\) and emptiness by \((P=\varnothing)\equiv\forall Q(P\subseteq Q)\). We can also manipulate individual vertices, as being a singleton is expressed by \((|P|=1)\equiv P\neq\varnothing\wedge\forall Q\subseteq P(Q=\varnothing\lor P =Q)\). For example, \(\forall v\in P\) is shorthand notation for the expression \(\forall Q(Q\subseteq P\wedge|Q|=1)\). Notably, we can express non-connectivity of a subset \(P\subseteq V\) by the formula \(\textsc{nc}(P)\) defined as
\[\exists Q\subseteq P,\exists v,v^{\prime}\in P\left(v\in Q\wedge v^{\prime} \not\in Q\wedge\forall u,w\in P\left(u\in Q\wedge\bigvee_{s\in S}u\cdot s=w \implies w\in Q\right)\right).\]
The set of formulas without free variables obtained with these operations is denoted by \(\textsc{MSO}(\Lambda)\). We say \(\Lambda\) has _decidable_ MSO logic, if the problem of determining if given a formula in \(\textsc{MSO}(\Lambda)\) is satisfied is decidable.
The particular instance we are interested in is when \(\Lambda\) is the Cayley graph of a f.g. group \(G\) labeled by \(S\) a symmetric finite set of generators, that is, \(S=S^{-1}\) (we take such a set to avoid cumbersome notation in this section). In this case, the root of our graph is the identity \(v_{0}=1_{G}\). A landmark result in the connection between MSO logic and tiling problems comes from Muller and Schupp [21, 22], as well as Kuskey and Lohrey [19], who showed that virtually free groups have decidable MSO logic. Because the Domino Problem can be expressed in MSO, it is decidable on virtually free groups. Our goal is to obtain an analogous result for domino snake problems. To express infinite paths and loops, given a tileset graph \(\Gamma=(A,B)\), we will partition a subset \(P\subseteq V\) into subsets indexed by \(S\) and \(A\), such that \(P_{s,a}\) will contain all vertices with the tile \(a\) that point through \(s\) to the continuation of the snake. We denote the disjoint union as \(P=\coprod_{s\in S,a\in A}P_{s,a}\). First, we express the property of always having a successor within \(P\) as
\[N(P,\{P_{s,a}\})\equiv\bigwedge_{s\in S,a\in A}\left(P_{s,a}\cdot s\subseteq P \right).\]
We also want for this path to not contain any loops, by asking for a unique predecessor for each vertex:
\[\begin{split}\mathrm{up}(v)&\equiv\exists!s\in S,a \in A:v\in P_{s,a}\cdot s,\\ &\equiv\left(\bigvee_{\begin{subarray}{c}s\in S\\ a\in A\end{subarray}}v\in P_{s,a}\cdot s\right)\wedge\left(\bigwedge_{ \begin{subarray}{c}a,a^{\prime}\in A\\ a\neq a^{\prime}\end{subarray}}\bigwedge_{\begin{subarray}{c}s,t\in S\\ s\neq t\end{subarray}}\neg((v\in P_{s,a}\cdot s)\wedge(v\in P_{t,a^{\prime}} \cdot t))\right).\end{split}\]
Then, for a one-way infinite path
\[\mathrm{UP}(P,\{P_{s,a}\})\equiv\forall v\in P\left((v=v_{0}\wedge\bigwedge_{ s\in S,a\in A}v\not\in P_{s,a}\cdot s)\vee(v\neq v_{0}\wedge\mathrm{up}(v)) \right),\]
Thus, we state the property of having an infinite path as follows:
\[\infty\textsc{ray}(P,\{P_{s,a}\})\equiv\left(v_{0}\in P\wedge P=\copro_{s\in S,a \in A}P_{s,a}\wedge N(P,\{P_{s,a}\})\wedge UP(P,\{P_{s,a}\})\right).\]
In fact, we can do a similar procedure to express the property of having a simple loop within \(P\) by slightly changing the previous expressions. The only caveat comes when working with \(S\) a symmetric finite generating set of some group, as we must avoid trivial loops such as \(ss^{-1}\).
\[\ell(P,\{P_{s,a}\})\equiv\forall v\in P,\mathrm{up}(v)\wedge\bigwedge_{s\in S, a,a^{\prime}\in A}\left(P_{s,a}\cdot s\cap P_{s^{-1},a^{\prime}}=\varnothing \right).\]
This way, admitting a simple loop is expressed as
\[\textsc{loop}(P,\{P_{s,a}\})\equiv \left(v_{0}\in P\wedge P=\copro_{s\in S,a\in A}P_{s,a}\wedge N(P, \{P_{s,a}\})\wedge\ell(P,\{P_{s,a}\})\right)\] \[\wedge\forall Q\subseteq P,\forall\{Q_{s,a}\}\left(\neg\infty \textsc{ray}(Q,\{Q_{s,a}\})\right).\]
Lemma 3: _Let \(P\subseteq V\). Then,_
1. _If there exists a partition_ \(\{P_{s,a}\}_{s\in S,a\in A}\) _such that_ \(\infty\textsc{ray}(P,\{P_{s,a}\})\) _is satisfied,_ \(P\) _contains an infinite injective path. Conversely, if_ \(P\) _is the support of an injective infinite path rooted at_ \(v_{0}\)_, there exists a partition_ \(\{P_{s,a}\}_{s\in S,a\in A}\) _such that_ \(\infty\textsc{ray}(P,\{P_{s,a}\})\) _is satisfied._
2. _If there exists a partition_ \(\{P_{s,a}\}_{s\in S,a\in A}\) _such that_ \(\textsc{loop}(P,\{P_{s,a}\})\) _is satisfied,_ \(P\) _contains a simple loop. Conversely, if_ \(P\) _is the support of a simple loop based at_ \(v_{0}\)_, there exists a partition_ \(\{P_{s,a}\}_{s\in S,a\in A}\) _such that_ \(\textsc{loop}(P,\{P_{s,a}\})\) _is satisfied._
With these two structure-detecting formulas, we can simply add the additional constraint that \(P\) partitions in a way compatible with the input tileset graph of the problem, in the direction of the snake. This is captured by the formula
\[D_{\Gamma}(\{P_{s,a}\})\equiv\bigwedge_{(a,a^{\prime},s)\not\in B}\bigwedge_{s ^{\prime}\in S}P_{a,s}\cdot s\cap P_{a^{\prime},s^{\prime}}=\varnothing.\]
Theorem 4.1: _Let \(\Lambda\) be a Cayley graph of generating set \(S\). The infinite snake problem, the reachability problem and the ouroboros problem can be expressed in \(\mathrm{MSO}(\Lambda)\)._
Proof: Let \(\Gamma=(A,B)\) be a tileset graph for \(\Lambda\). By Lemma 3, it is clear that
\[\infty\textsc{-snake}(\Gamma)\equiv\exists P\exists\{P_{s,a}\}\left(\infty \textsc{ray}(P,\{P_{s,a}\})\wedge D_{\Gamma}(\{P_{s,a}\})\right),\]
\[\textsc{ouroboros}(\Gamma)\equiv\exists P\exists\{P_{s,a}\}\left(\textsc{ loop}(P,\{P_{s,a}\})\wedge D_{\Gamma}(\{P_{s,a}\})\right),\]
exactly capture the properties of admitting a one-way infinite \(\Gamma\)-snake and \(\Gamma\)-ouroboros respectively. Remember that Proposition 1 tells us that admitting a one-way infinite snake is equivalent to admitting a bi-infinite snake. We finish by noting that for reachability in a Cayley graph, we can take \(p=v_{0}\). Then, verifying the formula \(\textsc{Reach}(\Gamma,q)\) defined as
\[\exists P\exists\{P_{s,a}\}\left(q\in P\wedge\neg\textsc{nc}(P)\wedge P= \coprod_{s\in S,a\in A}P_{s,a}\wedge UP(P,\{P_{s,a}\})\wedge D_{\Gamma}(\{P_{ s,a}\})\right),\]
is equivalent to \(P\) containing the support of a \(\Gamma\)-snake that connects \(p\) to \(q\).
As previously mentioned, virtually free groups have decidable MSO logic for all generating sets. Thus, we can state the following corollary.
Corollary 3: _Both the normal and seeded versions of the infinite snake, reachability and ouroboros problems are decidable on virtually free groups, independently of the generating set._
Proof: Let \(\Gamma=(A,B)\) be a tileset graph with \(a_{0}\in A\) the targeted tile. Then adding the clause \(\bigvee_{s\in S}v_{0}\in P_{s,a_{0}}\) to the formulas of any of the problems in question, we obtain a formula that expresses its corresponding seeded version.
## Acknowledgments
We would like to thank Pierre Guillon and Guillaume Theyssier for helping with Lemma 3. We would also like to thank David Harel and Yael Etzion for providing a copy of [12]. We are grateful to the anonymous referees for their useful remarks.
|
2307.05329 | Decoding the Popularity of TV Series: A Network Analysis Perspective | In this paper, we analyze the character networks extracted from three popular
television series and explore the relationship between a TV show episode's
character network metrics and its review from IMDB. Character networks are
graphs created from the plot of a TV show that represents the interactions of
characters in scenes, indicating the presence of a connection between them. We
calculate various network metrics for each episode, such as node degree and
graph density, and use these metrics to explore the potential relationship
between network metrics and TV series reviews from IMDB. Our results show that
certain network metrics of character interactions in episodes have a strong
correlation with the review score of TV series. Our research aims to provide
more quantitative information that can help TV producers understand how to
adjust the character dynamics of future episodes to appeal to their audience.
By understanding the impact of character interactions on audience engagement
and enjoyment, producers can make informed decisions about the development of
their shows. | Melody Yu | 2023-07-04T18:16:58Z | http://arxiv.org/abs/2307.05329v2 | # Decoding the Popularity of TV Series: A Network Analysis Perspective
###### Abstract
In this paper, we analyze the character networks extracted from three popular television series and explore the relationship between a TV show episode's character network metrics and its review from IMDB. Character networks are graphs created from the plot of a TV show that represents the interactions of characters in scenes, indicating the presence of a connection between them. We calculate various network metrics for each episode, such as node degree and graph density, and use these metrics to explore the potential relationship between network metrics and TV series reviews from IMDB. Our results show that certain network metrics of character interactions in episodes have a strong correlation with the review score of TV series. Our research aims to provide more quantitative information that can help TV producers understand how to adjust the character dynamics of future episodes to appeal to their audience. By understanding the impact of character interactions on audience engagement and enjoyment, producers can make informed decisions about the development of their shows.
computational linguistics, character networks, network analysis, TV series
## I Introduction
People only have so much time. When we come home from work or school, we have our priorities; we do chores, send emails, or catch up on our favorite TV shows. For producers and TV channels, this time is imperative. The number of viewers on a show can vastly differ based on the time a show airs, and there are only so many slots available; ideal "prime times" like Tuesday at 9 pm can only be occupied by so many shows. Additionally, viewers only have so many shows they can be invested in at the same time. For the 122.4 million households that watch TV, this poses an interesting question. What makes a show good; more so, what makes a show continue to be good? Unlike movies, TV shows consistently need to produce episode after episode, season after season. Sometimes viewership drops, and your favorite 9 o'clock show is canceled and taken off-air, or disappears from your favorite streaming service. But why? For the executives producing a show, the answer is simple: ratings. High ratings mean more seasons, and low ratings mean the show ends. But there's more to it than that. Quality fluctuates, and many shows are notorious for their lackluster attempts at good content in later seasons. What causes a great show, loved and revered in seasons one through three, to suddenly become an insult to its fans, its ratings plummeting in the next four seasons? To answer this question, we need to look at what makes a show good in the first place. Ratings are determined by viewer satisfaction, so what did I see in a show that made me want to invest my time in the first place? Is that related to why I don't enjoy it now?
The answer could be something that the viewers themselves might not even know. In this paper, we attempt to understand the psychology behind these ratings through character networks, and the different interactions between characters. More specifically, we try to find a correlation between the reviews of TV show episodes and the character interactions. We use different metrics of network analysis to quantitatively answer the question: do character interactions affect ratings in TV series?
Network analysis is described as the "set of integrated techniques to depict relations among actors and to analyze the social structures that emerge from the recurrence of these relations". We use network analysis to analyze graphs. A graph is a collection of nodes (vertices) along with identified pairs of nodes connected with edges. This paper applies network analysis to specific graphs called character networks, where character networks are defined as graphs that represent the interactions between characters in a fictional setting. Different methodologies to observe the behavior of a network and perform network analysis are known as network metrics. We can thus transform our previous question into: for a given TV show, is there a connection between a network metric and the popularity of an episode?
## II Dataset
Our dataset is composed of character networks created by Bost et al. (2016) from three popular TV series. To capture a dynamic view of the character interactions, every episode's character network is constructed as a sequence of successive static character networks.
A single character network, also described as a segment graph or temporal graph, is defined as the interactions between characters over a period of ten scenes. Each episode contains multiple scenes, which means it can be split into multiple segment graphs. For episode 1 of the TV series Game of Thrones, 31 segment graphs are created from 310 scenes. Each graph of this episode contains between 14 and 17 nodes and 15 to 18 edges.
A node is defined as any character that speaks within this ten-scene window. The edges are thus defined as any conversational interaction between these nodes, where an undirected edge \(E_{i,j}\) is drawn between any two characters \(N_{i}\) and \(N_{j}\) who speak directly to each other. Every edge \(E_{i,j}\) additionally carries a weight \(W_{i,j}\) representing the total conversation time between \(N_{i}\) and \(N_{j}\), expressed in seconds. Each character network may contain several groups of nodes that are connected to each other but not necessarily to the rest of the graph. Each such group is denoted as a cluster.
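For concreteness, the snippet below builds one such weighted segment graph with the networkx library; the character names follow the example discussed next, while the conversation durations are invented purely for illustration.

```python
import networkx as nx

# One segment graph: nodes are characters speaking in a ten-scene window,
# edges are conversations, and weights are total conversation time in seconds.
# The durations below are invented for illustration only.
G = nx.Graph()
G.add_edge("Jaime Lannister", "Tyrion Lannister", weight=45.0)
G.add_edge("Tyrion Lannister", "Ros", weight=40.0)
G.add_edge("Jaime Lannister", "Ros", weight=8.0)      # thinner edge: smaller weight
G.add_edge("Jon Snow", "Theon Greyjoy", weight=38.0)
G.add_edge("Theon Greyjoy", "Robb Stark", weight=20.0)

# Clusters are the connected components of the segment graph.
clusters = list(nx.connected_components(G))
print(len(clusters), "clusters:", clusters)
```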
Figure 1 presents the character network episode graph from the TV show Game of Thrones season 1, episode 1. This character network is an undirected segment graph composed of four clusters. The first cluster, situated in the top left corner, describes a fully connected cluster of three characters. \(N_{1}\) (Jaime Lannister) and \(N_{2}\) (Tyrion Lannister) speak to each other, creating an undirected edge. \(N_{2}\) is also directly connected to \(N_{3}\) (Ros). The edges \(E_{1,2}\) and \(E_{2,3}\) have a similar weight, denoted by a similar thickness on the diagram. However, the edge between \(N_{1}\) and \(N_{3}\) has a smaller weight, shown as a thinner line. Weights are compared against all other clusters, with the average weight in some clusters shown as higher than in other clusters. For example, the average weight between the cluster containing \(N_{1}\), \(N_{2}\), and \(N_{3}\) is higher than the cluster in the lower right corner of the diagram. This cluster includes \(N_{4}\) (Jon Snow) speaking to \(N_{5}\) (Theon Greyjoy). The edge \(E_{4,5}\) is similar in weight to \(E_{2,3}\), showing that the pairs of characters spoke for similar periods of time during these ten scenes. \(N_{5}\) additionally spoke to \(N_{6}\) (Robb Stark), connecting three nodes in a chain. \(N_{6}\) (Robb Stark) and \(N_{4}\) (Jon Snow) do not have a direct edge connecting them, as they did not directly speak to each other.
Our dataset is extracted from three popular TV series: Breaking Bad (seasons 1-2), Game of Thrones (seasons 1-5), and House of Cards (seasons 1-2).
We additionally use the reviews of these shows from IMDb, an online database of information related to films and television series. Each review shows the average rating of an episode, represented by a decimal number between 1 and 10 with higher numbers representing more favorable or more liked episodes. IMDb's reviews are formed by aggregating individual reviews from registered users. Users are allowed to cast one vote per episode.
It is important to note that IMDb ratings for TV show episodes reflect the overall quality of the episode, not just the character interactions and plot. Other factors such as special effects, guest stars, and cinematography may also influence the score given by viewers. It is possible that these additional elements contribute to the overall enjoyment of the episode and are therefore included in the rating.
Figure 2 shows the IMDb reviews of Season 1 of Game of Thrones. The x-axis is the name of the episode, while the y-axis is the IMDb review of the episode.
## III Methods
We seek to establish a relationship between IMDb ratings and network indicators for a given TV series. Each temporal graph corresponds to ten scenes from an episode, whereas each IMDb review corresponds to one episode. Thus, to study the graph metrics for each episode, we first aggregate all segment graphs in one episode into one single weighted episode graph. Then, we study metrics on the aggregated episode graph.
### _Segment Graph Aggregation_
The aggregation process works by iterating through the edges of all segment graphs for a given episode. Edges connecting the same pair of nodes are aggregated by summing their weights across all graphs. That is to say, if our episode contains 10 temporal graphs with \(E_{i,j}\) existing as an edge in graphs 1, 4, 7, and 10, the weight of edge \(E_{i,j}\) in our episode graph will be \(W_{i,j}=W_{i,j}^{(1)}+W_{i,j}^{(4)}+W_{i,j}^{(7)}+W_{i,j}^{(10)}\). This weight reflects the total conversation time in seconds between two characters in the episode. Figure 3 shows the aggregated character network for Game of Thrones season 1, episode 1.
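A minimal sketch of this aggregation step, assuming each segment graph is a networkx graph whose edges carry a `weight` attribute in seconds:

```python
import networkx as nx

def aggregate_segment_graphs(segment_graphs):
    """Merge the segment graphs of one episode into a single weighted episode
    graph by summing the weights of edges that join the same pair of characters."""
    episode_graph = nx.Graph()
    for seg in segment_graphs:
        for u, v, data in seg.edges(data=True):
            w = data.get("weight", 0.0)
            if episode_graph.has_edge(u, v):
                episode_graph[u][v]["weight"] += w
            else:
                episode_graph.add_edge(u, v, weight=w)
    return episode_graph
```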
Fig. 1: Game of Throne Season 1, Episode 1, S120-S130 Character Network.
Fig. 2: Game of Thrones Episode Review from IMDb.
### _Network Metrics_
#### Iii-B1 Active Nodes
Active nodes are the total number of nodes in a graph with at least one edge connecting them to another node. The number of active nodes represents the number of characters that speak during an episode. We use the number of active nodes in an episode graph to describe the complexity of the episode plot in this study.
#### Iii-B2 Network Density
A graph's density is the ratio of the number of actual edges in a graph to the maximum number of edges that the graph can have. It indicates how many of a graph's potential connections are actual connections. We define density as follows, where \(n\) represents the number of nodes and \(m\) represents the number of edges.
\[D(G)=\frac{\textit{TotalEdges}}{\textit{MaxPossibleEdges}}=\frac{2m}{n(n-1)}\]
Figure 4 shows the time series for the number of active nodes and the network density for different segment graphs from Game of Thrones episode 1.
The density of the graph increases as more characters converse with one another. Characters in sparse graphs are often still connected, but conversations tend to take place between pairs of characters rather than among larger groups. We use network density to measure the level of character interaction in each TV episode.
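Both quantities are straightforward to compute on an episode graph \(G\); the sketch below assumes networkx and counts a node as active when it has at least one incident edge.

```python
import networkx as nx

def active_nodes(G):
    # Characters that speak with at least one other character in the episode.
    return sum(1 for _, degree in G.degree() if degree > 0)

def network_density(G):
    # Equivalent to 2m / (n(n - 1)) for an undirected graph.
    return nx.density(G)
```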
#### Iii-B3 Node Strength
A node's strength is defined as the sum of the strengths, or weights, of the node's edges. Since the edge weight in our graph indicates the time spent conversing between two characters, the strength of a character is the total amount of time spent conversing with all other characters.
Figure 5 shows the time series of characters with top 5 highest graph node strength from the Game of Thrones episode 1 graph.
The maximum value of node strengths describes the total conversation time of the character who talked most with other characters during this episode. We use this metric to describe the exposure of the most active character in the episode.
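Since edge weights encode conversation time, node strength is simply the weighted degree; a possible networkx sketch:

```python
def node_strengths(G):
    # Weighted degree: each character's total conversation time in seconds.
    return dict(G.degree(weight="weight"))

def max_strength(G):
    strengths = node_strengths(G)
    return max(strengths.values()) if strengths else 0.0
```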
#### Iii-B4 Network Efficiency
The efficiency of a graph refers to how efficiently nodes can reach each other. It is also called communication efficiency. The efficiency of a pair of nodes in a graph is inversely proportional to their shortest distances: to find the efficiency of nodes \(N_{i}\) and \(N_{j}\), we would define it as \(\frac{1}{D_{i,j}}\) where \(D_{i,j}\) is defined as the shortest distance between nodes \(N_{i}\) and \(N_{j}\). The global efficiency of a graph is then defined as the average over the pairwise efficiencies.
Fig. 4: Time series for the number of active nodes and network density from Game of Thrones Episode 1.
Fig. 5: Time series for top 5 highest graph node strength characters from Game of Thrones Episode 1.
Fig. 3: Game of Throne Season 1, Episode 1, Aggregated Character Network.
\[E(G)=\frac{1}{n(n-1)}\sum_{i\neq j}\frac{1}{D_{i,j}}\]
In this study, the TV episode graphs are usually separated into separate subgraphs, often without any connections between these subgraphs. To more accurately represent these subgraphs, we use local efficiency, calculated from the average value of a subgraph's global efficiency.
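The sketch below, assuming networkx, computes the per-component variant used here (the average of the global efficiency over connected subgraphs); note that this differs from networkx's built-in `local_efficiency`, which averages over node neighborhoods.

```python
import networkx as nx

def per_component_efficiency(G):
    """Average the (unweighted) global efficiency over the connected subgraphs
    of G, so that isolated clusters in an episode graph are each scored on
    their own rather than dragging the whole-graph value toward zero."""
    components = [G.subgraph(nodes) for nodes in nx.connected_components(G)]
    if not components:
        return 0.0
    return sum(nx.global_efficiency(c) for c in components) / len(components)
```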
#### Iii-B5 Network Transitivity
Network transitivity is the overall probability for the network to have interconnected adjacent nodes, revealing the existence of tightly connected communities. It refers to the extent to which the relation connecting two adjacent nodes in the network is transitive. The network transitivity is the fraction of all possible triangles present in \(G\). It is computed by dividing the total number of node triangles in the graph by the total number of "triads" (two edges that share a vertex):
\[T(G)=3\frac{\text{\# of triangles}}{\text{\# of triads}}\]
Network transitivity is widely used to examine the level of clustering in social network analysis. In the context of character networks, high transitivity means that the network contains communities or groups of characters that interact closely with each other.
We use network transitivity to measure the extent of the complex relationship between characters, and their relation to the plot.
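networkx exposes this quantity directly; a short sketch for completeness:

```python
import networkx as nx

def network_transitivity(G):
    # 3 * (# of triangles) / (# of triads), exactly as defined above.
    return nx.transitivity(G)
```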
#### Iii-B6 Degree Centrality
In a connected graph, centrality is used to measure the importance of various nodes in a graph. In this study, we use various centrality metrics to measure the conversational interaction patterns of different characters in each TV episode.
Degree centrality is the simplest measure of centrality. The degree of a node is defined as the number of edges or connections that a node has to other nodes. In the context of character networks in TV shows, the degree of a character represents the number of other characters that they interact with in a given episode. Figure 6 illustrates the time series of characters with the top 5 highest degrees in the character network from the first episode of Game of Thrones.
A node with a higher degree than the rest of the nodes may be considered a central or pivotal character, as they have a large number of interactions with other characters. As our episode graph is aggregated from multiple segment graphs, the large number of interactions can be spread over different scenes, or concentrated in a few scenes involving many characters.
In this research, we will use the distribution of node degrees, including the maximum value and standard deviation, to describe the patterns of conversational interactions among the characters in a given episode.
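A small sketch of the degree statistics used in this study (maximum and standard deviation over all nodes), assuming networkx graphs and Python's statistics module:

```python
import statistics

def degree_stats(G):
    """Maximum and (population) standard deviation of node degrees in an episode graph."""
    degrees = [degree for _, degree in G.degree()]
    if not degrees:
        return {"max": 0, "std": 0.0}
    return {"max": max(degrees), "std": statistics.pstdev(degrees)}
```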
#### Iii-B7 Closeness Centrality and Harmonic Centrality
The closeness centrality of a node in a connected graph, which is determined as the reciprocal of the sum of the shortest distances between the node and all other nodes in the graph, is a measure of centrality in a network. A node with a higher closeness centrality value is closer to all other nodes.
\[C(u)=\frac{n-1}{\Sigma_{v\neq u}D_{u,v}}\]
However, in our study, the TV episode graph is not necessarily a connected graph, as some characters form subgroups that do not interact with characters outside their subgroup. Therefore, we use harmonic centrality instead of closeness centrality, which allows disconnected networks in our research. The harmonic centrality of a node \(u\) is defined as the sum of reciprocals of the shortest path distances from all other nodes to node \(u\). Two nodes that are unreachable from each other have a distance of infinity.
\[H(u)=\Sigma_{v\neq u}\frac{1}{D_{v,u}}\]
In our study, we use the harmonic centrality distribution (maximum value, standard deviation) to measure the extent to which the plot of the TV episode is concentrated on a single character or a group of characters.
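A possible sketch of these harmonic centrality statistics, assuming networkx (whose implementation already treats unreachable pairs as contributing zero):

```python
import statistics
import networkx as nx

def harmonic_stats(G):
    # Harmonic centrality tolerates disconnected graphs: unreachable pairs
    # contribute 1/inf = 0 to the sum.
    values = list(nx.harmonic_centrality(G).values())
    if not values:
        return {"max": 0.0, "std": 0.0}
    return {"max": max(values), "std": statistics.pstdev(values)}
```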
#### Iii-B8 Eigenvector Centrality
While degree centrality measures the number of connections a node has, it does not take into account the influence of a node's neighbors on its own importance. To address this, we can use eigenvector centrality, which considers the importance of a node's neighbors in determining the node's overall importance.
A node with a high eigenvector score is connected to many other nodes with high scores, indicating that it is connected to influential characters. In the context of character networks in TV shows, a character with a high eigenvector centrality is likely to be connected to characters who are also considered influential within the network. This can be useful in understanding the importance of a character within the context of the overall plot and character relationships.
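A possible sketch using networkx's power-iteration implementation; the iteration limit is an assumption, and conversation time is taken as the edge weight:

```python
import statistics
import networkx as nx

def eigenvector_stats(G):
    # Conversation time is used as the edge weight; the iteration limit is
    # raised because sparse, nearly disconnected episode graphs can be slow
    # to converge under power iteration.
    centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
    values = list(centrality.values())
    return {"max": max(values), "std": statistics.pstdev(values)}
```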
## IV Results
In this study, we calculated network metrics for three popular TV series: Game of Thrones, House of Cards, and Breaking Bad.
### _Network Metrics_
We studied 22-26 episodes from the first three seasons of each show and calculated various network metrics for each episode. These metrics include density, efficiency, and transitivity, as well as node-level metrics such as degree, harmonic centrality, and eigenvector centrality.
For each metric, we also calculated the maximum and standard deviation values for each episode's character network. The results of these calculations are shown in Appendix Table V, VI, and VII, with each row representing a single episode and the first two columns indicating the episode number and review. The remaining columns display the various network metrics for that episode.
### _Correlation between network metrics and episode reviews_
To determine whether there is a relationship between the network metrics we calculated and the IMDB reviews, we used correlation analysis. Specifically, we used the Spearman correlation method, as it is well-suited for analyzing ordinal data such as TV episode review scores. While the absolute value of a review score (e.g. 8.7) may not provide much insight on its own, it is generally accepted that an episode with a higher review score is of higher quality than one with a lower score. As such, the review scores can be considered ordinal, with higher values indicating better quality. By using the Spearman correlation method, we were able to examine the relationship between the network metrics and the episode reviews and determine if there is a correlation between them.
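A minimal sketch of this analysis with scipy; the 0.05 significance threshold is an assumption here rather than a value taken from the text:

```python
from scipy.stats import spearmanr

def correlate_metric_with_reviews(metric_values, review_scores, alpha=0.05):
    """Spearman rank correlation between one network metric (one value per
    episode) and the corresponding IMDb review scores."""
    rho, p_value = spearmanr(metric_values, review_scores)
    return {"rho": rho, "p_value": p_value, "significant": p_value < alpha}
```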
Table I shows the results of the Spearman correlation analysis for the TV series Game of Thrones, examining the relationship between the network metrics and episode reviews. The analysis found two significant correlations: a negative correlation between the number of active nodes and episode review scores, and a positive correlation between the standard deviation of eigenvector centrality and episode review scores.
Table II shows the results of the Spearman correlation analysis for the TV series House of Cards, examining the relationship between the network metrics and episode reviews. The table indicates that there are six significant correlations between the two variables. However, we note that two of these correlations may be influenced by influential outliers, as visualized in Figure 6. After accounting for these outliers, the analysis found four significant correlations: a negative correlation between network efficiency and episode review, a negative correlation between maximum node degree and episode review, a negative correlation between node degree standard deviation and episode review, and a negative correlation between maximum harmonic centrality and episode review.
Table III shows the Spearman correlation between episode network metrics and episode reviews for the TV series Breaking Bad. The one significant correlation between the network metrics and episode reviews is a positive correlation between network transitivity and episode review scores.
Fig. 6: Density vs Review for House of Cards
## V Discussion
The results of our research suggest that different character network structures may have different impacts on the quality of TV episodes. For example, the negative correlation between the number of active nodes and episode review scores in Game of Thrones may indicate that having too many characters in an episode can be overwhelming for viewers. Likewise, the positive correlation between the standard deviation of eigenvector centrality and episode review scores for the same series suggests that concentrating on a smaller number of main characters is preferred over a wider focus on many different characters in a single episode.
House of Cards had four significant correlations. The negative correlation between network efficiency and episode review implies that characters are not organized in tight-knit groups: for example, character A might talk to character B, who might talk to character C, but the three might not talk together as a group. The analysis also found a negative correlation between maximum harmonic centrality and episode review, again implying that characters have isolated conversations with one another rather than engaging in group discussions.
Breaking Bad had a single positive correlation between network transitivity and episode review. This implies that episodes with tightly connected groups of characters tend to be preferred by viewers.
It is important to note that a limitation of our study is that we only have the reviews for the episode as a whole, and not just for the character networks/dynamics. This means that many other factors such as cinematography, script writing, and plot, are all factored into a single review. Additionally, the placement and release of the episode are factors. For example, the finale of a series might be highly anticipated and receive a higher rating, or a guest star might appear in a particular episode and contribute to its score. Because the overall score of an episode represents all of these details together, we do not have the most accurate data to find a correlation between the character network and the metrics themselves.
Our study is also limited to the types of metrics we tested. Though we tested many types, there are still many more network metrics that could potentially find greater correlations with the data.
## VI Conclusion
This research aims to investigate the potential relationship between character interactions in an episode of a TV series and the review of that episode. We hypothesize that certain features of the TV series may attract viewers and lead to positive reviews if these features are present in the episodes. To capture some of these attractive features, we use character network analysis to analyze the interactions between characters in three well-known TV series.
To do this, we construct character networks representing the interactions between characters in 74 episode graphs from the three TV shows. We then use network metrics to describe the characteristics of the conversations between characters and apply Spearman's rank correlation test to identify the metrics that are statistically significant.
The results of this study, shown in Table IV, indicate that there is a statistically significant correlation between character network metrics and TV show reviews. However, the specific network metrics that show significant correlation vary between the three series, with no common significant metric found across all three.
In summary, our research suggests that character networks can have an impact on TV show review scores. By analyzing the interactions between characters in TV series episodes using network metrics, we were able to identify statistically significant correlations between these metrics and review scores. These findings may be useful for TV producers and writers as they consider how to structure their shows and maintain audience engagement. While character networks are not the only factor that determines a show's success, they do play a role in audience enjoyment and should be carefully considered in the production of future seasons.
Fig. 7: Maximal Harmonic Centrality vs Review for House of Cards
## References
* [1] X. Bost, V. Labatut, S. Gueye and G. Linares, "Narrative smoothing: Dynamic conversational network for the analysis of TV series plots," in 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 1111-1118.
* [2] "Networks of characters (Version 7)," figshare. [https://doi.org/10.6084/m9.figshare.2199646.v7](https://doi.org/10.6084/m9.figshare.2199646.v7).
* [3] C. -Y. Weng, W. -T. Chu and J. -L. Wu, "Movie Analysis Based on Roles' Social Network," in 2007 IEEE International Conference on Multimedia and Expo, 2007, pp. 1403-1406.
* [4] D. Elson, N. Dames and K. McKeown "Extracting Social Networks from Literary Fiction," in ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden. pages 138-147.
* [5] A. Agarwal, A. Corvalan, J Jensen and O. Rambow "Social network analysis of alice in wonderland," In Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature, pages 88-96.
* [6] R. Alberich, J. Miro-Julia, and F. Rossello, "Marvel Universe looks almost like a real social network," arXiv preprint cond-mat/0202174 (2002).
* [7] X. Bost, V. Labatut and G. Linares, "Serial Speakers: a Dataset of TV Series," in 12th International Conference on Language Resources and Evaluation (LREC 2020),May 2020, Marseille, France p.4256-4264
* [8] NetworkX documentation. [https://networkx.org](https://networkx.org) |
2305.13332 | Conditional Online Learning for Keyword Spotting | Modern approaches for keyword spotting rely on training deep neural networks
on large static datasets with i.i.d. distributions. However, the resulting
models tend to underperform when presented with changing data regimes in
real-life applications. This work investigates a simple but effective online
continual learning method that updates a keyword spotter on-device via SGD as
new data becomes available. Contrary to previous research, this work focuses on
learning the same KWS task, which covers most commercial applications. During
experiments with dynamic audio streams in different scenarios, that method
improves the performance of a pre-trained small-footprint model by 34%.
Moreover, experiments demonstrate that, compared to a naive online learning
implementation, conditional model updates based on its performance in a small
hold-out set drawn from the training distribution mitigate catastrophic
forgetting. | Michel Meneses, Bruno Iwami | 2023-05-19T15:46:31Z | http://arxiv.org/abs/2305.13332v1 | # Conditional Online Learning for Keyword Spotting
###### Abstract
Modern approaches for keyword spotting rely on training deep neural networks on large static datasets with _i.i.d._ distributions. However, the resulting models tend to underperform when presented with changing data regimes in real-life applications. This work investigates a simple but effective online continual learning method that updates a keyword spotter on-device via SGD as new data becomes available. Contrary to previous research, this work focuses on learning the same KWS task, which covers most commercial applications. During experiments with dynamic audio streams in different scenarios, that method improves the performance of a pre-trained small-footprint model by 34%. Moreover, experiments demonstrate that, compared to a naive online learning implementation, conditional model updates based on its performance in a small hold-out set drawn from the training distribution mitigate catastrophic forgetting.
Michel Meneses, Bruno Iwami SiDi, Brazil [email protected], [email protected]
## 1 Introduction
Keyword spotting (KWS) concerns detecting a predefined set of spoken words in an audio stream [1]. Compared to general automatic speech recognition (ASR), KWS deals with a much more restricted predefined vocabulary. That explains why the industry has widely adopted KWS systems as the entry-point for more complex voice applications that target edge devices [2, 3, 4], such as personal voice assistants (_e.g._, Bixby, Siri, Alexa). In those cases, the KWS system runs entirely at the edge in an always-listening mode. The goal is to detect commands that should trigger the rest of the application, which is usually more complex and, therefore, runs on another machine with considerably more computational power.
Modern approaches for KWS currently rely on training deep neural networks (DNN) to classify spectral representations of audio segments extracted from the input stream (_e.g._, log-Mel spectrograms, Mel-frequency cepstral coefficients) [5, 6]. They typically depend on large training datasets of spoken keywords, either labeled or not, which should be as large and diverse as possible, so the training process can better cover the problem space [7]. Therefore, while training, the DNN model is fed with thousands of positive samples representing each target keyword and negative samples representing other spoken words and environmental sounds that should not trigger the model.
Although that offline-training approach sounds reasonable, in real life, engineers must go through the arduous process of preparing the largest and most comprehensive dataset possible to better approximate the training distribution to the one expected in the final application scenario after the deployment of the model. Moreover, once in production, the model remains unchanged for months after its deployment until its vendor schedules a periodic software update. In the meantime, changes in its input distribution that were not anticipated during training (_e.g._, new prosodies, environmental noises, and even microphone settings) may degrade its predictive performance.
This work argues that instead of attempting to anticipate eventual distribution shifts in production by arduously building large training datasets, an alternative is to update the pre-trained model on the fly at inference time as new data from the production environment becomes available. Indeed, the latest releases of popular frameworks for machine learning already support on-device training 1. The latest research on online-continual learning for keyword spotting [8, 9] focuses on adapting the model to new tasks different from the one it was pre-trained to solve. That additional complexity drives those works to elaborate online learning approaches that are arguably difficult to execute on tiny low-resource edge devices, such as online knowledge distillation and online multi-task learning.
Footnote 1: [https://www.tensorflow.org/lite/examples/on_device_training/overview](https://www.tensorflow.org/lite/examples/on_device_training/overview)
Contrary to those references, this work restricts the online adaptation of a pre-trained keyword spotting model to the same KWS task, given a continual stream of speech generated by a distribution not learned by the model before its deployment. By performing that exploration, this work intends to directly contribute to the many keyword spotting systems widely available in the market, which must detect a fixed set of target keywords but underperform due to the non-stationary nature of the production environment. By restricting the scope of the problem, this paper also gains margin to explore a simple online learning algorithm based on conditional model updates via stochastic gradient descent (SGD), which is suitable for execution on low-resource devices. Experiments with audio streams in different scenarios illustrate the increasingly superior performance of the online learner over its frozen pre-trained base keyword spotter while mitigating the effects of catastrophic forgetting compared to its naive implementation.
The remaining of this document has the following structure: section 2 introduces the related literature on continual learning for keyword spotting, section 3 formulates the online learning algorithm for the same KWS task, section 4 details the experimentation methodology, section 5 presents and discusses the results of the experiments. Finally, section 6 states the final remarks about this work.
## 2 Background
In the previous decades, machine learning research has focused on offline methods that iterate throughout all the training samples available in a fixed dataset to induce a decision model.
They typically assume those samples come from a stationary distribution, which would eventually generate the samples observed during the model consumption in production [10]. As a result, the output models tend to underperform when presented with changing data regimes [11]. Continual learning (also referred to as _lifelong_ learning) is a paradigm that attempts to emulate the human ability to incrementally learn as new data becomes available, hence addressing the non-stationary nature of the real world. When implemented in an online mode, that paradigm allows the learner to update itself during inference without interrupting the application [12].
Some recent works have explored continual learning for keyword spotting. [8] pre-trains a TC-ResNet model to learn 15 keywords from Google Speech Commands (GSC) dataset. Then, it explores a knowledge distillation-based strategy to update that model every time new task data becomes available. Those new tasks consist of new keywords from GSC the model has never seen. Its previous states work as teacher references for the current online training step, so the resulting model's performance on the previously learned tasks is not degraded. [9] introduces a method that instantiates a sub-network for each new task incrementally presented to the model. Each sub-network should be capable of learning the individual target keyword defined by each task while sharing some knowledge to accelerate their learning process. That work follows the same experimentation methodology described by [8].
Similarly to what has been observed in recent research on general ASR [13], the related works on continual learning for keyword spotting focus on adapting the model to new tasks different from the one it was pre-trained to solve. More precisely, those new tasks refer to detecting new target keywords never seen by the model before its deployment. Hence, those works explore elaborated techniques for incrementally adapting the model to each new task without reducing its performance on the previously learned tasks. Such a problem is known in continual learning literature as catastrophic forgetting [11].
## 3 Method
### Problem Statement
This work assumes there is a model \(M_{0}\) already trained on a labeled dataset \(D=\{(X_{n},y_{n})\}_{n=1}^{N}\) of size \(N\). Each pair \((X_{n},y_{n})\) contains the feature vector \(X_{n}\) of a spoken word and its label \(y_{n}\mid y_{n}\in\{\textit{target word},\textit{non-target word}\}\). It assumes a binary-classification problem for simplicity and because that covers most of the keyword-spotting applications that serve as the wake-up systems of voice assistants. Nonetheless, one could easily extend it to multiple classes.
Once trained, \(M_{0}\) is deployed to the target device and put into production. After its deployment, a set of new labeled samples \(S=\{(X_{i},y_{i})\}_{i=1}^{I}\) becomes _incrementally_ available to \(M_{0}\), _i.e._, it has access to each pair \((X_{i},y_{i})\) sequentially. Those new samples result from the input audio stream captured by the device's microphone. The goal is to incrementally update \(M_{0}\) as those new samples continuously become available. It is also assumed that it would be impractical to update \(M_{0}\) based on \(D\cup S\) due to the large size of \(N\).
Similar to [13], the problem statement assumes the online data stream is labeled. Naturally, that is not the case for most real applications. However, following that work, this one includes that same assumption to focus further discussions exclusively on the online learner design and to set an upper bound for its performance. In practice, one could implement simple strategies for estimating the labels of the incoming data as isolated modules [14].
### Conditional Online Learning
This work explores a simple but effective approach for online continual-learning KWS via neural networks based on stochastic gradient descent (SGD) conditioned to the performance observed with a static hold-out dataset. Offline training methods for neural networks based on SGD usually iterate throughout a static dataset along multiple epochs via mini-batches. The motivation for processing the same data more than once is the limited number of samples available [10]. In an online continual learning setting, incoming data is abundant. However, it is unreasonable to assume that data is independent and identically distributed (_i.i.d_). As a result, directly updating the model via SGD based on that continuous data leads to poor convergence since the estimated gradients are biased [15].
To mitigate the effect of those biased estimates while taking advantage of that incoming data, the solution explored in this work considers a static hold-out dataset \(D_{v}\) generated by the same _i.i.d._ distribution as the training dataset \(D\). Algorithm 1 describes that method, named COOL (an abbreviation for **Co**nditional **O**nline **L**earning). Once both \(M_{0}\) and \(D_{v}\) have been deployed into the target device, during each step \(i\) it receives the incoming pair \((X_{i},y_{i})\in S\) and updates its target (\(S_{t}\)) and non-target (\(S_{n}\)) sample buffers accordingly. Once \(S_{t}\) and \(S_{n}\) have enough samples to compose a balanced batch \(B\) of size \(b\), the method updates the model's parameters via gradient descent optimization based on its instant loss for \(B\). Before consolidating those new parameters, the method evaluates the model with the hold-out dataset \(D_{v}\) and \(B\). The new parameters become permanent only if there is a decrease in the loss on \(B\) and \(l^{\prime}_{v}\), the loss on \(D_{v}\), does not exceed \(l_{v}\), _i.e._, the loss of \(M_{0}\) on the hold-out dataset.
```
Require: \(M_{0}\): pre-trained model
Require: \(D_{v}\): hold-out dataset
Require: \(S\): set of labeled samples from input stream
Require: \(b\): online batch size
1: \(j \gets 1\)
2: \(S_{t}, S_{n} \leftarrow \emptyset, \emptyset\)
3: \(l_{v} \gets loss(D_{v}, M_{0})\)
4: for all \((X_{i}, y_{i}) \in S\) do
5:   Add \((X_{i}, y_{i})\) into \(S_{t}\) or \(S_{n}\) depending on \(y_{i}\)
6:   if \(S_{t}\) and \(S_{n}\) have at least \(b/2\) samples each then
7:     \(B \leftarrow\) the \(b/2\) latest samples from \(S_{t}\) and \(S_{n}\) each
8:     \(l \gets loss(B, M_{j-1})\)
9:     \(M_{j} \gets update(l, M_{j-1})\)  \(\triangleright\) Via gradient descent
10:    \(l^{\prime} \gets loss(B, M_{j})\)
11:    \(l^{\prime}_{v} \gets loss(D_{v}, M_{j})\)
12:    if \(l^{\prime}_{v} > l_{v}\) or \(l^{\prime} \geq l\) then
13:      \(M_{j} \gets M_{j-1}\)  \(\triangleright\) Update not consolidated
14:    end if
15:    \(j \gets j + 1\)
16:    Discard the samples in \(S_{t}\) and \(S_{n}\)
17:  end if
18: end for
```
**Algorithm 1** Conditional Online Learning (COOL)
The continual online strategy described in Algorithm 1 essentially reproduces the classical use of a hold-out set for regularizing the offline training of machine learning models. Nevertheless, from an engineering perspective, that strategy is simple enough to execute on edge devices with limited resources, given the modern frameworks for machine learning currently available. At the same time, its regularization aspect helps mitigate the undesired effects of distribution shifts compared to the naive SGD implementation (demonstrated in the next sections), while demanding no hyper-parameter tuning besides defining the hold-out set \(D_{v}\).
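The following is a compact PyTorch-style sketch of Algorithm 1; the model, optimizer, and binary cross-entropy loss are placeholders rather than the exact implementation used in this work, and the rollback is realized by snapshotting the parameter state before each conditional update.

```python
import copy
import torch
import torch.nn.functional as F

def loss_on(model, batch):
    """Evaluation-only loss on a (features, float labels) batch."""
    x, y = batch
    with torch.no_grad():
        return F.binary_cross_entropy_with_logits(model(x), y).item()

def cool_step(model, optimizer, B, holdout, l_v):
    """One conditional update of COOL on a balanced online batch B.

    The model is rolled back whenever the hold-out loss exceeds l_v (the loss
    of the pre-trained model on the hold-out set) or the batch loss does not
    decrease, mirroring lines 8-14 of Algorithm 1."""
    snapshot = copy.deepcopy(model.state_dict())   # cheap rollback point
    l_before = loss_on(model, B)

    x, y = B
    optimizer.zero_grad()
    F.binary_cross_entropy_with_logits(model(x), y).backward()
    optimizer.step()

    l_after = loss_on(model, B)
    l_v_after = loss_on(model, holdout)
    if l_v_after > l_v or l_after >= l_before:
        model.load_state_dict(snapshot)            # update not consolidated
```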
## 4 Experiments
### Overview
The experiments to evaluate COOL for keyword spotting have considered the following methodology:
1. A small-footprint CNN model has trained on a KWS dataset of clean spoken words;
2. The resulting pre-trained model has been evaluated on some noisy streaming test datasets;
3. COOL has been evaluated with those same datasets on top of the pre-trained model as \(M_{0}\).
Both approaches had their accuracy on the test datasets registered for future analysis, along with the training and validation metrics of the pre-trained model. Additionally, COOL was compared to its naive online learning counterpart, which performs SGD updates without checking a hold-out set.
### Datasets
The experiments have considered the Google Speech Commands v2 (GSC) [16] and DCASE [17] datasets. GSC contains 105 thousand audio files of 35 different single spoken words. This work has considered a smaller version of it, which consists of 8 different words: "down", "go", "left", "no", "right", "stop", "up" and "yes". It has been used for training and testing the evaluated approaches. On the other hand, the DCASE dataset has noisy audio files split into three categories: baby crying, gunshot, and glass break. It has been used as the noise source when mixing noise into the clean test partition of GSC. Both GSC and DCASE contain audio files sampled at 16 kHz.
Each of those eight words from GSC gives rise to a new binary classification task. Each task contains training, validation, and testing sets obtained by sampling the task's target word from the respective partitions in GSC. Both training and validation sets contain the same words in clean conditions. Hence, the hold-out dataset used in Algorithm 1 for each task originates from that validation set. The non-target words of each task are randomly sampled from GSC. As a result, there are eight tasks, each targeting one of those eight different words.
To simulate a data stream, the final experiment considers a single long audio file per task, which is the concatenation of all the respective testing samples. All testing audio samples have been zero-padded with 500 ms to avoid cases where more than one word simultaneously occupies the model's input window. Each frame of those long files receives a label according to the presence of the target keyword. More precisely, a positive label indicates that at least 80% of the target word occupies that frame (Figure 1). Noise from DCASE has been mixed with those test files at an SNR of 25 dB.
### Settings
The architecture of the base model \(M_{0}\) is the _cnn-one-stride4_ small-footprint convolutional network [6]. For each task, that model has its parameters updated on the training set for 20 epochs via the Adam optimizer with a fixed learning rate of \(1\mathrm{e}{-3}\), a batch size of 64, and a patience of 3 iterations for the early-stopping mechanism. It considers, as input features, 40-dim MFCCs along 32 frames. The STFT frame length corresponds to 1K samples, and the frame step is 477 samples. The online batch size \(b\) has been set to 16. Time-shift augmentation has been applied during training and validation.
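One possible realization of these feature-extraction settings with librosa (the text does not state which library was used); with `center=False`, a 1-second clip at 16 kHz produces exactly 32 frames, matching the 40\(\times\)32 input described above.

```python
import librosa

def extract_mfcc(wav_path):
    """40-dim MFCCs with the STFT settings above: 1K-sample frames and a
    477-sample hop on 16 kHz audio. librosa is only one possible backend."""
    audio, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40,
                                n_fft=1000, hop_length=477, center=False)
```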
## 5 Results
Table 1 presents the base model's final metrics after training for each target keyword task. It reached around \(90\%\) validation accuracy for half of the tasks but only about \(50\%\) for the other half. Nevertheless, the difference between training and validation metrics suggests the base model has not overfitted the training set of any task. Table 2 compares the average accuracy between COOL and its base model across the different test scenarios. For each of those, online learning always starts from the base model \(M_{0}\), _i.e._, no knowledge is transferred between scenarios at this stage. As expected, COOL improves the average performance of the base model in all those scenarios. Moreover, it reduces the performance oscillation across scenarios.
Table 3 shows COOL's average performance gain against its base model throughout a sequential test, in which the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Target word** & **Train loss** & **Val. loss** & **Train acc.** & **Val. acc.** \\ \hline _Down_ & \(0.29\) & \(0.31\) & \(0.88\) & \(0.89\) \\ _Go_ & \(0.69\) & \(0.69\) & \(0.53\) & \(0.50\) \\ _Left_ & \(0.69\) & \(0.69\) & \(0.50\) & \(0.56\) \\ _No_ & \(0.69\) & \(0.69\) & \(0.50\) & \(0.50\) \\ _Right_ & \(0.20\) & \(0.21\) & \(0.91\) & \(0.95\) \\ _Stop_ & \(0.69\) & \(0.69\) & \(0.50\) & \(0.50\) \\ _Up_ & \(0.35\) & \(0.34\) & \(0.85\) & \(0.89\) \\ _Yes_ & \(0.19\) & \(0.24\) & \(0.92\) & \(0.91\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Base model's final metrics after pre-training for each target keyword task.
Figure 1: Segment of a concatenated test audio file labeled according to the occurrence of the target word. Each label marks the first 100ms-duration frame of its respective input window of 1s.
tests for each keyword are concatenated into a single long audio file that encodes sequential environment transitions, hence there is knowledge transferring between scenarios. The first column of that table refers to the isolated performance gain observed for that particular scenario, whereas the second column presents the cumulative gain across all the experiments. Compared to the performance gain presented in Table 2, the cumulative gain in Table 3 demonstrates that the continual learning approach benefits from the knowledge obtained in previous scenarios. Moreover, it also demonstrates that the conditional update mechanism effectively mitigates catastrophic forgetting, since the isolated gain observed when processing the clean scenario twice remains stable.
Finally, Figure 2 complements Table 3 by illustrating the average cumulative accuracy gain observed during that long sequential experiment, including the performance of the naive online SGD implementation (_i.e_., without checking the loss with the hold-out validation set). Since it updates the base model after every inference step, that naive learner initially outperforms COOL in the first scenario. However, its average cumulative accuracy starts decaying at the first change in the incoming data distribution since gradients estimated on that data are biased. On the other hand, by simply checking the loss of the updated model with the hold-out dataset, COOL mitigates the effect of those gradients while still taking advantage of the abundant incoming data to improve the accuracy of the KWS base model.
Overall, the previous results demonstrate that a simple approach such as COOL has the potential to significantly boost the performance of small-footprint keyword spotters, especially when presented with changing data regimes at inference time. Those results also suggest that simple regularization strategies effectively mitigate catastrophic forgetting. Naturally, limitations on this work's methodology affect those insights, such as the length of the testing audio stream and the assumption that the incoming audio samples are labeled. Even so, the results presented so far set an upper-bound performance gain that should encourage researchers and engineers to take a step further and build upon the method described in this work, especially since this is, to the best of our knowledge, the first work to set a practical baseline for online-continual learning in the context of streamed KWS with the same task targeting low-resource devices.
## 6 Conclusion
This work has evaluated a simple but effective online continual learning method for same-task keyword spotting based on SGD updates conditioned on the performance observed on a hold-out dataset. The experimental results demonstrate that it dynamically improves the performance of a small-footprint pre-trained model in contrasting scenarios while receiving an input audio stream. Moreover, conditional updates improve the online learner's robustness against catastrophic forgetting. Due to its simplicity, that method can serve as a baseline for future research on online KWS and as an alternative for reducing false acceptance rates in commercial KWS applications.
For future work, the authors aim to explore strategies for estimating the labels of the incoming data and to evaluate how their uncertainty impacts the final performance of the online learner. The authors also intend to extend this study to the latest and more elaborate online learning algorithms proposed in the related literature. Finally, the authors plan to implement those future steps in a real mobile application using the on-device training API recently available in TensorFlow Lite.
|
2303.14933 | MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos | User-generated content (UGC) live videos are often bothered by various
distortions during capture procedures and thus exhibit diverse visual
qualities. Such source videos are further compressed and transcoded by media
server providers before being distributed to end-users. Because of the
flourishing of UGC live videos, effective video quality assessment (VQA) tools
are needed to monitor and perceptually optimize live streaming videos in the
distributing process. In this paper, we address \textbf{UGC Live VQA} problems
by constructing a first-of-a-kind subjective UGC Live VQA database and
developing an effective evaluation tool. Concretely, 418 source UGC videos are
collected in real live streaming scenarios and 3,762 compressed ones at
different bit rates are generated for the subsequent subjective VQA
experiments. Based on the built database, we develop a
\underline{M}ulti-\underline{D}imensional \underline{VQA} (\textbf{MD-VQA})
evaluator to measure the visual quality of UGC live videos from semantic,
distortion, and motion aspects respectively. Extensive experimental results
show that MD-VQA achieves state-of-the-art performance on both our UGC Live VQA
database and existing compressed UGC VQA databases. | Zicheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min, Ying Chen, Guangtao Zhai | 2023-03-27T06:17:10Z | http://arxiv.org/abs/2303.14933v2 | # MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos
###### Abstract
User-generated content (UGC) live videos are often bothered by various distortions during capture procedures and thus exhibit diverse visual qualities. Such source videos are further compressed and transcoded by media server providers before being distributed to end-users. Because of the flourishing of UGC live videos, effective video quality assessment (VQA) tools are needed to monitor and perceptually optimize live streaming videos in the distributing process. In this paper, we address **UGC Live VQA** problems by constructing a first-of-a-kind subjective UGC Live VQA database and developing an effective evaluation tool. Concretely, 418 source UGC videos are collected in real live streaming scenarios and 3,762 compressed ones at different bit rates are generated for the subsequent subjective VQA experiments. Based on the built database, we develop a Multi-Dimensional VQA (**MD-VQA**) evaluator to measure the visual quality of UGC live videos from semantic, distortion, and motion aspects respectively. Extensive experimental results show that MD-VQA achieves state-of-the-art performance on both our UGC Live VQA database and existing compressed UGC VQA databases.
## 1 Introduction
With the rapid development of social media applications and the advancement of video shooting and processing technologies, more and more ordinary people are willing to tell their stories, share their experiences, and have their voice heard on social media or streaming media platforms such as Twitch, Tiktok, Taobao, etc. However, due to the lack of photography skills and professional equipment, the visual quality of user-generated content (UGC) videos may be degraded by in-the-wild distortions [51]. What's more, in common live platforms, live videos are encoded and distributed to end-users with very low delay, where compression algorithms have a significant influence on the visual quality of live videos because they can greatly reduce transmission bandwidth. As illustrated in Fig. 1, video quality assessment (VQA) tools play an important role in monitoring, optimizing, and further improving the Quality of Experience (QoE) of end-users in UGC live streaming systems.
Currently, many UGC VQA databases have been carried out [14, 51, 36, 45, 21] to address the impact of general in-the-wild distortions on video quality, while some compression VQA databases [40, 22, 34] are proposed to study the influence of compression artifacts. Then some compressed UGC VQA databases [1, 21, 46] are further constructed to solve the problem of assessing the quality of UGC videos with compression distortions. However, they are either small in scale or employ high-quality UGC videos as the sources, and all of the mentioned databases lack videos in live streaming scenes. Therefore, there is a lack of a proper **UGC Live VQA** database to develop and validate the video quality measurement tools for live streaming systems.
To address UGC Live VQA problems, we first
Figure 1: The distributing process of UGC live videos, where the users upload the videos degraded by the UGC distortions to the live platforms and the distorted videos are further compressed before being distributed to the viewers. The VQA models can monitor the quality changes of the compressed UGC live videos and adaptively optimize the transcoding setting.
construct a large-scale database named **TaoLive**, consisting of 418 source UGC videos from the TaoBao [2] live streaming platform and the corresponding 3,762 compressed videos at various bit rates. Then we perform a subjective experiment in a well-controlled environment. Afterward, we propose a no-reference (NR) Multi-Dimensional VQA (**MD-VQA**) model to measure the visual quality of UGC live videos in terms of semantic, distortion, and motion aspects. The semantic features are extracted by pretrained convolutional neural network (CNN) model; the distortion features are extracted by specific handcrafted image distortion descriptors (i.e. blur, noise, block effect, exposure, and colorfulness); and the motion features are extracted from video clips by pretrained 3D-CNN models. Compared with existing UGC VQA algorithms, MD-VQA measures visual quality from multiple dimensions, and these dimensions correspond to key factors affecting live video quality, which thereby has better interpretability and performance. The contributions of this paper are summarized as below:
* **We build a large-scale UGC Live VQA database targeted at the compression artifacts on the UGC live videos.** We collect 418 raw UGC live videos that are diverse in content, distortion, and quality. Then 8 encoding settings are used, which provides 3,762 compressed UGC live videos in total.
* **We carry out a well-controlled in-lab subjective experiment.** 44 participants are invited to participate in the subjective experiment and a total of 165,528 subjective annotations are gathered.
* **A multi-dimensional NR-VQA model is proposed,** using pretrained 2D-CNN, handcrafted distortion descriptors, and pretrained 3D-CNN for the semantic, distortion, and motion features extraction respectively. The extracted features are then spatio-temporally fused to obtain the video-level quality score. The extensive experimental results validate the effectiveness of the proposed method.
## 2 Related Works
### VQA Databases
During the last decade, many VQA databases [12, 14, 31, 42, 46, 51, 36, 41, 53] have been carried out to tackle the challenge of VQA problems and a review of the common VQA database is exhibited in Table 1. The early VQA databases [41, 42] usually collect limited numbers of source videos and manually introduce distortions such as compression and transmission error to generate distorted ones. Such databases are less diverse in content and distortions, which do not fit the scope of UGC videos. Then the CVD2014 [31], LIVE-Qualcomm [12], and LIVE-VQC [36] databases are formed with videos that are captured by real cameras. However, the scale of the mentioned in-capture databases is relatively small and the included distortions are much simpler. Later, UGC VQA databases such as KoNViD-1k [14], YouTube-UGC [45], and LSVQ [51] gather in-the-wild UGC videos from online platforms, which have more diverse content and distortions and have significantly promoted the development of UGC VQA tasks.
UGC live videos can be considered as in-captured UGC videos further degraded by compression, where both in-the-wild distortions and compression distortions have a significant impact on the visual quality.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Database & Year & Duration/s & Ref. Num. & Scale & Scope & Subjective Evaluating Format \\ \hline CVD2014 [31] & 2014 & 10-25 & - & 234 & In-capture & In-lab \\ LIVE-Qualcomm [12] & 2016 & 15 & - & 208 & In-capture & In-lab \\ \hline KoNViD-1k [14] & 2017 & 8 & - & 1,200 & In-the-wild & Crowdsourced \\ LIVE-VQC [36] & 2018 & 10 & - & 585 & In-the-wild & Crowdsourced \\ YouTube-UGC [45] & 2019 & 20 & - & 1,500 & In-the-wild & Crowdsourced \\ LSVQ [51] & 2021 & 5-12 & - & 39,075 & In-the-wild & Crowdsourced \\ \hline UGC-VIDEO [21] & 2019 & \(>\)10 & 50 & 550 & UGC + Compression & In-lab \\ LIVE-WC [53] & 2020 & 10 & 55 & 275 & UGC + Compression & In-lab \\ YT-UGC\({}^{+}\)(Subset) [46] & 2021 & 20 & 189 & 567 & UGC + Compression & In-lab \\ ICME2021 [1] & 2021 & - & 1,000 & 8,000 & UGC + Compression & In-lab \\
**TaoLive(proposed)** & 2022 & 8 & 418 & 3,762 & UGC + Compression & In-lab \\ \hline \hline \end{tabular}
\end{table}
Table 1: Review of common VQA databases, where 'UGC+Compression' refers to manually encoding the UGC videos with different compression settings.
Figure 2: Comparison of the quality distribution of reference videos between the TaoLive and ICME2021 [1] databases. The source UGC videos in the ICME2021 database are centered on high-quality levels while the quality levels of the source UGC videos in the TaoLive database are more diverse.
Although some works such as UGC-VIDEO [21], LIVE-WC [53], and YT-UGC\({}^{+}\)[46] attempt to assess the quality degradation of UGC videos caused by common compression algorithms, the relatively small size of these databases makes it difficult to support mainstream data-driven models, e.g., the deep learning-based models in Section 2.2. The recent ICME2021 [1] database is large in scale. However, as shown in Fig. 2, source UGC videos in ICME2021 exhibit high visual quality, and thus cannot reflect the real source video quality distribution in live streaming systems. What's more, none of the mentioned databases includes source videos collected from practical live-streaming platforms.
### UGC VQA models
**Handcrafted-based:** Handcrafted-based VQA models extract quality-aware features to model spatial and temporal distortions, such as natural scene statistics (NSS) features [23, 29, 33], artifacts [18, 27, 39], motion [10, 18, 39], etc. For example, VIIDEO [29] exploits the intrinsic statistical regularities of natural videos and assesses video quality according to those regularities. V-BLIINDS [33] evaluates the video quality by using a spatio-temporal NSS model and a motion representation model. TLVQM [18] computes low complexity and high complexity quality-aware features in two steps to obtain the final video quality. VIDEVAL [39] carefully chooses representative quality-aware features among the mainstream NR-VQA models and regresses the representative features into quality values.
**Deep learning-based:** Considering the huge parameters of deep neural networks (DNN) and the relatively small scale of VQA databases, some VQA methods use pretrained DNN models for feature extraction. VSFA [20] extracts deep semantic features with a pre-trained DNN model and learns the quality-aware representation with gated recurrent units (GRUs). To enhance the understanding of video motion features, some studies [19, 37, 51] further attempt to extract motion features with 3D-CNN models pre-trained on the video action recognition databases to help detect video motion distortions and have yielded good performance. Later, some Transformer-based VQA methods are carried out. LSCT [52] first extracts frame-wise perceptual quality features and then feeds the features into a long short-term convolutional Transformer to predict the video quality. FAST/FASTER-VQA [48, 49] proposes Grid Minipatch Sampling and forms the video sampling results as fragments, which are put into a fragment-modified video swin transformer [24] for video quality representation.
## 3 UGC Live Dataset and Human Study
### Videos Collection
As illustrated in Fig. 1, users upload their live video streams to platforms, and the platforms compress the streams at different bit rates and distribute one of them to viewers according to the Internet bandwidth or the viewers' choice. To reflect the real quality distribution of in-captured live videos, we first collect large-scale uncompressed raw UGC videos from the TaoLive [2] platform, a very popular live platform in China. Then, we manually select raw UGC videos that contain scenes of technology, fashion, food, daily life, financial sales, etc., to ensure content diversity. In the end, we collect 418 raw UGC videos (110 videos have a resolution of 720P while 318 videos have a resolution of 1080P), and each raw UGC video is cropped to about 8s in duration to serve as a source UGC live video.
We use the open-source compression tool, _FFmpeg_ [3], to compress the source UGC live videos with 8 Constant Rate Factors (CRF) of H.265, namely 16, 20, 24, 28, 32, 36, 40, and 44, to approximate the distribution process of live platforms. Therefore, 3,344=418\(\times\)8 compressed UGC videos are generated and a total of 3,762 = 3,344 + 418 UGC videos are collected for evaluation, samples of which are exhibited in Fig. 3.
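The compression step can be reproduced with a short script; the sketch below is a minimal example assuming FFmpeg is built with libx265 support, and the file names, output naming scheme, and the choice to drop audio are illustrative assumptions rather than the exact pipeline used for TaoLive.

```python
import subprocess
from pathlib import Path

CRF_VALUES = [16, 20, 24, 28, 32, 36, 40, 44]  # the 8 H.265 CRF levels listed above

def compress_source(src: Path, out_dir: Path) -> None:
    """Compress one source UGC live video at every CRF level with libx265."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for crf in CRF_VALUES:
        dst = out_dir / f"{src.stem}_crf{crf}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(src),
             "-c:v", "libx265", "-crf", str(crf),
             "-an",  # drop audio; only visual quality is assessed (assumption)
             str(dst)],
            check=True,
        )

# Example: compress_source(Path("source/clip_0001.mp4"), Path("compressed"))
```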
### Human Study
The human study is carried out in a well-controlled environment. 44 subjects, including 20 males and 24 females, are invited to participate in the subjective experiment. The viewers are seated at about 1.5 times the screen height (45 cm) under normal indoor illumination, and the videos are played on an iMac monitor which supports resolutions up to 4096\(\times\)2304.
Figure 3: Sample frames of the videos in the proposed Taolive Database, where the resolutions are marked on the top right. Additionally, the frame samples are cropped to delete some sensitive content such as human faces and watermarks for exhibition.
Before the viewers start to evaluate the UGC live videos, a short introduction is given to get the viewers familiar with the equipment and quality assessment tasks. We split the experiment into 76 small sessions and each session contains 50 UGC live videos with no content overlap. The viewers participate in all the sessions individually and each session lasts about 30 minutes. There is at least a 1-hour break between sessions and each subject is allowed to attend no more than 2 sessions in a single day. During the sessions, each video is played only once and the viewers can rate the video quality from 1 to 5, with a minimum interval of 0.1. We make sure that each UGC live video is evaluated by the 44 invited viewers, and 165,528=3,762\(\times\)44 subjective ratings are collected in the end.
### Subjective Data Analysis
According to the recommendation of ITU-R BT.500-13 [6], we compute the z-scores as the quality label of UGC live videos:
\[z_{ij}=\frac{r_{ij}-\mu_{i}}{\sigma_{i}}, \tag{1}\]
where \(r_{ij}\) represents the quality rating given by the \(i\)-th subject on the \(j\)-th UGC live video, \(\mu_{i}=\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}r_{ij}\), \(\sigma_{i}=\sqrt{\frac{1}{N_{i}-1}\sum_{j=1}^{N_{i}}\left(r_{ij}-\mu_{i}\right)^{2}}\), and \(N_{i}\) is the number of UGC live videos evaluated by subject \(i\). Then we remove the quality labels from unreliable subjects according to the recommended subject rejection procedure in [6]. Finally, the z-scores are linearly rescaled to \([1,5]\) and the mean opinion score (MOS) of the UGC video \(j\) is obtained by averaging the rescaled z-scores:
\[MOS_{j}=\frac{1}{M}\sum_{i=1}^{M}z_{ij}^{{}^{\prime}}, \tag{2}\]
where \(MOS_{j}\) represents the MOS for the \(j\)-th UGC video, \(M\) is the number of the valid subjects, and \(z_{ij}^{{}^{\prime}}\) are the rescaled z-scores.
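A minimal sketch of the label processing in Eqs. (1)-(2) is given below; the subject rejection procedure of [6] is omitted, and the array layout (subjects \(\times\) videos) as well as the global min-max rescaling to \([1,5]\) are assumptions.

```python
import numpy as np

def compute_mos(ratings: np.ndarray) -> np.ndarray:
    """ratings: (num_subjects, num_videos) raw scores in [1, 5]; NaN marks unrated videos."""
    mu = np.nanmean(ratings, axis=1, keepdims=True)             # per-subject mean
    sigma = np.nanstd(ratings, axis=1, ddof=1, keepdims=True)   # per-subject std (N_i - 1)
    z = (ratings - mu) / sigma                                   # Eq. (1)
    # Subject rejection per ITU-R BT.500 would be applied here; omitted for brevity.
    z_min, z_max = np.nanmin(z), np.nanmax(z)
    z_rescaled = 1 + 4 * (z - z_min) / (z_max - z_min)           # linearly rescale to [1, 5]
    return np.nanmean(z_rescaled, axis=0)                        # Eq. (2): MOS per video

# mos = compute_mos(ratings_matrix)  # e.g. ratings_matrix of shape (44, 3762)
```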
We further plot the MOS distributions from the CRF and resolution perspectives. As shown in Fig. 4(a), conservative CRF parameter selection (16\(\sim\)24) introduces only slight perceptual quality loss to the UGC live videos. When CRF increases from 28 to 44, the downward trend of perceptual quality is more obvious. Moreover, when CRF\(\geq\)40, almost no compressed UGC video obtains a quality score higher than 3, which suggests that selecting a CRF of 40 or above can result in a below-average viewing experience. Such phenomena can provide useful guidelines for the compression strategy of live platforms. From Fig. 4(b), we can find that the general quality of UGC videos with a resolution of 720P is lower than that of UGC videos with a resolution of 1080P, which fits the common sense that lower resolutions lead to poorer visual quality.
## 4 Proposed Method
The framework of the proposed MD-VQA model is illustrated in Fig. 5, which includes the feature extraction module, the feature fusion module, and the feature regression module. Specifically, quality-aware features are extracted from multiple dimensions, including the semantic, distortion, and motion aspects. What's more, the feature error between adjacent frames is employed to reflect the temporal quality fluctuation. Then the obtained multi-dimensional features are fused in a spatio-temporal manner and mapped to quality scores via the quality regression module.
### Feature Extraction
Given a video with \(n\) frames and a frame rate of \(r\), we split the video into \(\frac{n}{r}\) clips for feature extraction, where each clip lasts 1s. For each clip \(C_{i}\) (\(i\) represents the index of the clip), \(2L\) frames are uniformly sampled for semantic and distortion feature extraction while the whole clip is employed for motion feature extraction.
#### 4.1.1 Semantic Feature Extraction
Different semantic contents have diverse impacts on humans' tolerance for different distortions [20]. For example, humans are better able to endure blur distortions on flat and texture-less objects such as clear sky and smooth walls [55, 58]. However, blur distortions can be unacceptable on objects that are rich in texture, such as rough rocks and complex plants. It is also believed that semantic information
Figure 4: Illustration of the proposed TaoLive database’s MOS distributions from different perspectives.
can help identify the existence and extent of the perceived distortions [9]. Moreover, it has been proven that human perception is highly affected by compression [50, 15]. Thus it is quite reasonable to incorporate semantic information into compressed UGC video quality assessment.
Considering that visual perception has a hierarchical structure, that is, input visual information is perceived hierarchically from low-level features to high-level ones [43, 44, 26], we propose to utilize multi-scale features extracted from the last 4 stages of the pretrained EfficientNetV2 [38] as the frame-level semantic information:
\[\begin{split} SF_{l}^{i}&=\alpha_{1}\oplus\alpha_{2 }\oplus\alpha_{3}\oplus\alpha_{4},l\in\{1,\cdots,2L\},\\ \alpha_{j}&=\mathrm{GAP}(L_{j}(F_{l}^{i})),j\in\{1, 2,3,4\},\end{split} \tag{3}\]
where \(SF_{l}^{i}\) indicates the extracted semantic features from the \(l\)-th sampled frame \(F_{l}^{i}\) of clip \(C_{i}\), \(\oplus(\cdot)\) stands for the concatenation operation, \(\mathrm{GAP}(\cdot)\) represents the global average pooling operation, \(L_{j}(F_{l}^{i})\) stands for the feature maps obtained from \(j\)-th last layer of EfficientNetV2, and \(\alpha_{j}\) denotes the average pooled features from \(L_{j}(F_{l}^{i})\).
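A hedged sketch of this multi-scale extraction is shown below, using the torchvision EfficientNetV2-S backbone; which internal blocks correspond to the "last 4 stages" is an assumption of the sketch, not a detail confirmed by the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

backbone = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.IMAGENET1K_V1).features.eval()
STAGE_IDS = [4, 5, 6, 7]  # assumed indices of the last four stages of `backbone`

@torch.no_grad()
def semantic_features(frame: torch.Tensor) -> torch.Tensor:
    """frame: (B, 3, H, W) normalized RGB frame -> (B, sum of selected stage channels)."""
    feats, x = [], frame
    for i, stage in enumerate(backbone):
        x = stage(x)
        if i in STAGE_IDS:
            # Global average pooling of the stage's feature map (GAP in Eq. (3)).
            feats.append(F.adaptive_avg_pool2d(x, 1).flatten(1))
    return torch.cat(feats, dim=1)  # concatenation (⊕) in Eq. (3)
```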
#### 4.1.2 Distortion Feature Extraction
Various types of distortions exist in UGC videos and utilizing only semantic information is insufficient to model the distortion perception of UGC videos. What's more, the original UGC distortions can exhibit dissimilar quality representations under different levels of compression. For example, blur is less sensitive to compression [56, 57] since compression usually wipes out high-frequency information, and noise can be eased or even disappear when higher compression levels are applied [5]. Therefore, to further improve the quality representation of the proposed method, some handcrafted distortion descriptors are additionally employed for quality prediction, which include blur [54], noise [7], block effect [47], exposure [18], and colorfulness [32]. Then the frame-level distortion features can be derived as:
\[DF_{l}^{i}=\Psi(F_{l}^{i}),l\in\{1,\cdots,2L\}, \tag{4}\]
where \(DF_{l}^{i}\) represents the extracted distortion features from the \(l\)-th sampled frame \(F_{l}^{i}\) of clip \(C_{i}\) and \(\Psi(\cdot)\) stands for the distortion feature extraction process.
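The descriptors in \(\Psi(\cdot)\) each have dedicated implementations; the sketch below substitutes two simple stand-ins (Laplacian-variance blur, a Hasler-Süsstrunk-style colorfulness measure, and a crude exposure proxy) only to illustrate the frame-level interface, and is not the exact set of descriptors from [54, 7, 47, 18, 32].

```python
import cv2
import numpy as np

def distortion_features(frame_bgr: np.ndarray) -> np.ndarray:
    """frame_bgr: (H, W, 3) uint8 frame -> small vector of handcrafted distortion descriptors."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.Laplacian(gray, cv2.CV_64F).var()             # stand-in sharpness/blur measure
    b, g, r = [c.astype(np.float64) for c in cv2.split(frame_bgr)]
    rg, yb = r - g, 0.5 * (r + g) - b                        # opponent colour axes
    colorfulness = (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                    + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))
    exposure = gray.mean() / 255.0                           # crude exposure proxy
    return np.array([blur, colorfulness, exposure])
```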
#### 4.1.3 Motion Feature Extraction
UGC live videos often suffer from motion distortions resulting from unstable shooting environments as well as restricted bit rates. However, motion distortions such as video shaking and motion blur are difficult to recognize from spatial features alone. Furthermore, video compression deeply depends on motion estimation [25, 11], which indicates that motion distortions can influence the quality of video compression. Therefore, to help the model better understand the motion information, we propose to use the pretrained 3D-CNN backbone, ResNet3D-18 [13], to capture clip-level motion distortions:
\[MF^{i}=\Gamma(C_{i}), \tag{5}\]
where \(MF^{i}\) denotes the motion features extracted from clip \(C_{i}\) and \(\Gamma(\cdot)\) represents the motion feature extraction operation.
Figure 5: The framework of the proposed method, where the semantic, distortion, and motion features are extracted by the pretrained EfficientNetV2 [38], handcrafted distortion descriptors, and pretrained ResNet3D-18 [13] respectively. The absolute error between the adjacent frames’ semantic and distortion features is used to reflect the temporal quality fluctuations. Finally, the multi-dimensional features are spatio-temporally fused and regressed into quality values.
To sum up, given the \(i\)-th clip \(C_{i}\) of the video, we can obtain the clip-level semantic features \(SF^{i}\in\mathbb{R}^{2L\times N_{S}}\), the distortion features \(DF^{i}\in\mathbb{R}^{2L\times N_{D}}\), and the motion features \(MF^{i}\in\mathbb{R}^{1\times N_{M}}\), where \(N_{S}\), \(N_{D}\), and \(N_{M}\) represent the number of channels for the semantic, distortion, and motion features respectively.
### Feature Fusion
It has been proven in [30] that videos with better quality tend to have smaller quality fluctuations while videos with lower quality tend to have larger quality fluctuations. Therefore, to quantify these fluctuations, which are highly correlated with human perception, we propose to employ the absolute error between adjacent semantic and distortion features to reflect temporal quality fluctuations:
\[\begin{split} SF_{2k}^{i^{\prime}}&=|SF_{2k}^{i}-SF_ {2k-1}^{i}|,k\in\{1,\cdots,L\},\\ DF_{2k}^{i^{\prime}}&=|DF_{2k}^{i}-DF_{2k-1}^{i}|,k \in\{1,\cdots,L\},\end{split} \tag{6}\]
where \(SF_{2k}^{i^{\prime}}\) and \(DF_{2k}^{i^{\prime}}\) represent the absolute error between adjacent semantic and distortion features. Then the spatio-temporal fusion can be derived as:
\[\begin{split} SD_{2k}^{i}&=\omega(SF_{2k}^{i}) \oplus\omega(DF_{2k}^{i})\oplus\omega(SF_{2k}^{i^{\prime}})\oplus\omega(DF_{2k }^{i^{\prime}}),\\ STF^{i}&=W_{L}^{1}(SD^{i^{T}}),\end{split} \tag{7}\]
where \(\oplus(\cdot)\) stands for the concatenation operation, \(\omega(\cdot)\) represents the learnable Multilayer Perceptron (MLP), \(SD_{2k}^{i}\in\mathbb{R}^{1\times N_{SD}}\) indicates the frame-level spatial-fused features obtained from semantic and distortion features, \(SD^{i^{T}}\in\mathbb{R}^{N_{SD}\times L}\) is the transposition result of the clip-level semantic and distortion features \(SD^{i}\in\mathbb{R}^{L\times N_{SD}}\), \(W_{L}^{1}\) is a learnable linear mapping operation to fuse the \(SD^{i^{T}}\) in the temporal domain, and we finally obtain the spatio-temporal fused features \(STF^{i}\in\mathbb{R}^{N_{SD}\times 1}\). To further introduce the quality-aware motion features, we concatenate the spatio-temporal features with the motion features:
\[F^{i}=STF^{i^{T}}\oplus\omega(MF^{i}), \tag{8}\]
where \(STF^{i^{T}}\in\mathbb{R}^{1\times N_{SD}}\), the final clip-level quality-aware representation \(F^{i}\in\mathbb{R}^{1\times(N_{SD}+N_{M}^{\prime})}\) and \(N_{M}^{\prime}\) is adjusted number of channels for the motion features after MLP operation.
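A PyTorch sketch of the fusion described in Eqs. (6)-(8) is given below; the hidden widths, activation choices, and the sharing of the MLP \(\omega(\cdot)\) between the plain and error features are assumptions.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, n_s, n_d, n_m, hidden=128, L=8):
        super().__init__()
        self.mlp_s = nn.Sequential(nn.Linear(n_s, hidden), nn.GELU())
        self.mlp_d = nn.Sequential(nn.Linear(n_d, hidden), nn.GELU())
        self.mlp_m = nn.Sequential(nn.Linear(n_m, hidden), nn.GELU())
        self.temporal = nn.Linear(L, 1)  # W_L^1: learnable temporal fusion over the L frame pairs

    def forward(self, SF, DF, MF):
        # SF: (B, 2L, n_s), DF: (B, 2L, n_d), MF: (B, n_m)
        s_even, s_odd = SF[:, 1::2], SF[:, 0::2]
        d_even, d_odd = DF[:, 1::2], DF[:, 0::2]
        s_err, d_err = (s_even - s_odd).abs(), (d_even - d_odd).abs()       # Eq. (6)
        SD = torch.cat([self.mlp_s(s_even), self.mlp_d(d_even),
                        self.mlp_s(s_err), self.mlp_d(d_err)], dim=-1)      # Eq. (7), frame level
        STF = self.temporal(SD.transpose(1, 2)).squeeze(-1)                 # (B, N_SD)
        return torch.cat([STF, self.mlp_m(MF)], dim=-1)                     # Eq. (8)
```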
### Feature Regression
After the feature extraction process described above, we use the three-stage fully-connected layers to regress the clip-level quality-aware representation \(F^{i}\) into quality values:
\[Q^{i}=\mathbf{FC}(F^{i}), \tag{9}\]
where \(\mathbf{FC}(\cdot)\) indicates the fully-connected layers and \(Q^{i}\) stands for the quality value of clip \(C_{i}\). Consequently, the overall UGC live video quality can be obtained via average pooling:
\[\mathbf{Q}=\frac{r}{n}\sum_{i=1}^{n/r}Q^{i}, \tag{10}\]
where \(\mathbf{Q}\) is the video quality value and \(\frac{n}{r}\) represents the number of clips. We simply use the Mean Squared Error (MSE) as the loss function:
\[Loss=\frac{1}{n}\sum_{m=1}^{n}\left(Q_{m}^{\prime}-Q_{m}\right)^{2} \tag{11}\]
where \(n\) indicates the number of videos in a mini-batch, \(Q_{m}^{\prime}\) and \(Q_{m}\) are the subjective quality labels and predicted quality levels respectively.
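The regression head and training objective of Eqs. (9)-(11) can be sketched as follows; the layer widths and activations are assumptions.

```python
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Three-stage fully-connected head mapping clip-level features to a quality score."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, hidden), nn.GELU(),
                                nn.Linear(hidden, hidden // 2), nn.GELU(),
                                nn.Linear(hidden // 2, 1))

    def forward(self, clip_feats):                 # clip_feats: (num_clips, in_dim)
        q_clip = self.fc(clip_feats).squeeze(-1)   # Eq. (9): per-clip quality values
        return q_clip.mean()                       # Eq. (10): average pooling over clips

criterion = nn.MSELoss()                           # Eq. (11), applied over a mini-batch of videos
```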
## 5 Experiment
In this section, we first give the details of the experimental setup. Then we compare the proposed MD-VQA model with other mainstream VQA models on the proposed TaoLive database as well as two other compressed UGC VQA databases. The ablation study and cross-database validation are conducted to investigate the contributions of different groups of features and the generalization ability of the VQA models. Finally, we test the proposed MD-VQA model on two in-the-wild UGC VQA databases.
### Benchmark Databases
The proposed TaoLive database and two compressed UGC VQA databases including the LIVE-WC [53] and YT-UGC\({}^{+}\)[46] databases are selected as the benchmark databases. For all the databases, we follow the common practice and split the databases with an 80%-20% train-test ratio. Additionally, all the databases are validated separately. To fully evaluate the stabilization and performance of the VQA models, the split is randomly conducted 30 times and the average results are recorded as the final performance.
### Implementation Details
The EfficientNetV2 [38] backbone is fixed with the EfficientNetV2-S weights pretrained on the ImageNet database [8] for semantic feature extraction, while the ResNet3D-18 [13] is fixed with the weights pretrained on the Kinetics-400 [16] database. All the frames are kept at their original resolution for the semantic, distortion, and motion feature extraction. The Adam optimizer [17] is employed with the initial learning rate set to 0.001. If the training loss has not decreased for 5 epochs, the learning rate is reduced by half. The default number of epochs is set to 50. The parameter \(L\) described in Section 4.1 is set to 8, which means 16 frames are uniformly sampled for the semantic and distortion feature extraction of a single clip.
### Competitors & Criteria
To fully evaluate the performance of the proposed method, we select several popular quality assessment models for comparison, including BRISQUE [28], VSFA [20], TLVQM [18], VIDEVAL [39], PVQ [51], BVQA [19] and SimpleVQA [37]. It is worth mentioning that BRISQUE is an NR-IQA model, and we obtain the video quality features by averaging the features extracted from each frame with BRISQUE. The other VQA models are trained with the default parameter settings defined by their authors.
Two criteria are adopted to evaluate the performance of the quality assessment models: the Spearman Rank Order Correlation Coefficient (SRCC) and the Pearson Linear Correlation Coefficient (PLCC). Before calculating the criteria values, a four-parameter logistic regression function [35] is utilized to fit the predicted scores to the scale of the MOSs. SRCC and PLCC take values in \([-1,1]\) and better models should yield higher SRCC and PLCC values.
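The evaluation protocol can be sketched as below: SRCC is computed on the raw predictions, while a four-parameter logistic function is fitted before computing PLCC. The exact parameterization of the logistic function in [35] may differ from the common form assumed here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr, pearsonr

def logistic4(x, beta1, beta2, beta3, beta4):
    # A common 4-parameter logistic form (assumed; the variant in [35] may differ).
    return beta2 + (beta1 - beta2) / (1 + np.exp(-(x - beta3) / np.abs(beta4)))

def evaluate(pred: np.ndarray, mos: np.ndarray):
    srcc = spearmanr(pred, mos).correlation
    p0 = [mos.max(), mos.min(), pred.mean(), pred.std() + 1e-6]   # rough initial guess
    params, _ = curve_fit(logistic4, pred, mos, p0=p0, maxfev=20000)
    plcc = pearsonr(logistic4(pred, *params), mos)[0]
    return srcc, plcc
```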
### Performance Discussion
The experimental performance on the three compressed UGC VQA databases is shown in Table 2, from which we can draw several conclusions. (a) The proposed MD-VQA achieves first place and surpasses the second-place model (SimpleVQA [37]) by about 0.004, 0.033, and 0.010 in terms of SRCC on the LIVE-WC, YT-UGC\({}^{+}\), and TaoLive databases respectively, which demonstrates its effectiveness in predicting the quality levels of compressed UGC videos. (b) The handcrafted-based methods (BRISQUE, TLVQM, and VIDEVAL) are significantly inferior to the deep learning-based methods (VSFA, PVQ, BVQA, SimpleVQA, and MD-VQA). This can be explained by the fact that the handcrafted-based methods rely on NSS priors derived from pristine videos, whereas the characteristics of compressed UGC videos are far more complicated and do not suit this prior knowledge of natural regularities. (c) All the VQA methods experience performance drops on the YT-UGC\({}^{+}\) database compared with the other two databases. The YT-UGC\({}^{+}\) database uses the recommended VP9 settings and target bit rates [4] for compression while the LIVE-WC and TaoLive databases control the compression by varying the CRF parameters of H.264 and H.265 respectively. It might be because the recommended VP9 compression settings do not monotonically reduce the video bit rates, thus making quality prediction more challenging.
### Ablation Study
To investigate the contributions of the different features employed in MD-VQA, we conduct an ablation study in this section. The experimental results for employing different types of features are shown in Table 3. Combining features yields better performance than using a single group of features, and employing all features leads to the best perfor
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{LIVE-WC} & \multicolumn{2}{c|}{YT-UGC\({}^{+}\)} & \multicolumn{2}{c}{TaoLive} \\ \cline{2-7} & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) \\ \hline w/o ABS & 0.912 & **0.916** & **0.814** & 0.814 & **0.925** & 0.920 \\ w/o FFM & **0.913** & 0.915 & 0.811 & **0.817** & 0.921 & **0.928** \\ \hline All & **0.931** & **0.937** & **0.822** & **0.828** & **0.942** & **0.945** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study results for absolute error (ABS) and feature fusion module (FFM), where ABS is replaced with error and FFM is replaced with concatenation.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Hand} & \multirow{2}{*}{Deep} & \multicolumn{2}{c|}{LIVE-WC} & \multicolumn{2}{c|}{YT-UGC\({}^{+}\)} & \multicolumn{2}{c}{TaoLive} \\ \cline{4-9} & & & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) \\ \hline BRISQUE (TIP, 2012) [28] & ✓ & \(\times\) & 0.787 & 0.788 & 0.303 & 0.309 & 0.771 & 0.777 \\ TLVQM (TIP, 2019) [18] & ✓ & \(\times\) & 0.838 & 0.830 & 0.672 & 0.697 & 0.862 & 0.869 \\ VIDEVAL (TIP, 2021) [39] & ✓ & \(\times\) & 0.812 & 0.825 & 0.660 & 0.662 & 0.914 & 0.910 \\ \hline VSFA (ACM MM, 2019) [20] & \(\times\) & ✓ & 0.856 & 0.857 & 0.784 & 0.783 & 0.920 & 0.917 \\ PVQ (CVPR, 2021) [51] & \(\times\) & ✓ & 0.901 & 0.909 & 0.775 & 0.776 & 0.916 & 0.919 \\ BVQA (TCSVT, 2022) [19] & \(\times\) & ✓ & 0.912 & 0.916 & 0.777 & 0.781 & 0.926 & 0.922 \\ SimpleVQA (ACM MM, 2022) [37] & \(\times\) & ✓ & **0.927** & **0.920** & **0.789** & **0.784** & **0.932** & **0.926** \\ \hline
**MD-VQA(Ours)** & ✓ & ✓ & **0.931** & **0.937** & **0.822** & **0.828** & **0.942** & **0.945** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental performance on the compressed UGC VQA databases. ‘Hand’ indicates using handcrafted-based features while ‘Deep’ indicates using deep learning-based features. Best in **red** and second in **blue**.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{Feature} & \multicolumn{2}{c|}{LIVE-WC} & \multicolumn{2}{c|}{YT-UGC\({}^{+}\)} & \multicolumn{2}{c}{TaoLive} \\ \cline{2-7} & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) \\ \hline SF & 0.911 & 0.910 & 0.785 & 0.787 & 0.931 & 0.934 \\ DF & 0.640 & 0.658 & 0.277 & 0.381 & 0.603 & 0.638 \\ MF & 0.841 & 0.858 & 0.537 & 0.561 & 0.909 & 0.912 \\ SF+DF & **0.925** & 0.922 & **0.805** & **0.824** & 0.935 & 0.936 \\ SF+MF & 0.921 & **0.924** & 0.792 & 0.789 & **0.940** & **0.941** \\ DF+MF & 0.857 & 0.861 & 0.573 & 0.631 & 0.925 & 0.926 \\ \hline All & **0.931** & **0.937** & **0.822** & **0.828** & **0.942** & **0.945** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental performance of the ablation study, where SF, DF, and MF indicate the semantic features, distortion features, and motion features respectively.
mance among the combinations of different features, which confirms the contributions of the semantic, distortion, and motion features. Additionally, by comparing the performance of the SF, DF, and MF models, we can see that the SF model achieves first place on all the databases, which indicates that the semantic features contribute the most to the final performance. What's more, the distortion and motion features achieve only 0.277 and 0.537 in terms of SRCC on the YT-UGC\({}^{+}\) database. This is because 1/3 of the YT-UGC\({}^{+}\) database's compressed UGC videos are obtained from game videos, whose distortion and motion characteristics differ greatly from those of natural videos. To illustrate, noise usually does not exist in game videos and the motion blur effect is introduced manually if present at all. Such phenomena result in the relatively low performance of the distortion and motion features. We also conduct an ablation study on the absolute error (ABS) and the feature fusion module (FFM); the results are listed in Table 4, from which we can find that both ABS and FFM contribute to the final results.
### Cross Database Performance
Since UGC videos are diverse in content and distortions, we carry out cross-database validation to test the generalization ability of the VQA models in this section. The VQA models are trained on the TaoLive database and tested on the other two compressed UGC VQA databases. The experimental results are listed in Table 5. The proposed MD-VQA model surpasses all the compared VQA models on both the LIVE-WC and YT-UGC\({}^{+}\) databases, which proves its strong generalization ability. The handcrafted-based VQA models (BRISQUE, TLVQM, and VIDEVAL) perform badly on the YT-UGC\({}^{+}\) database and the deep learning-based methods also undergo significant performance drops from the LIVE-WC database to the YT-UGC\({}^{+}\) database. The reason is that the YT-UGC\({}^{+}\) database employs VP9 [4] while the TaoLive database utilizes H.265 for compression. The different compression standards can bring unignorable gaps between the data distributions. Therefore the quality representation learned from the TaoLive database is less effective on the YT-UGC\({}^{+}\) database but works well on the H.264-compressed LIVE-WC database.
### In-the-wild Performance
Although the proposed MD-VQA model focuses on compressed UGC VQA issues, we also test its performance on two mainstream in-the-wild UGC VQA databases, namely the KoNViD-1k [14] and LIVE-VQC [36] databases. These databases are not focused on compression and contain a wide range of distortions. Validation on these databases can reflect the VQA models' ability to handle general UGC VQA issues. The experimental results are exhibited in Table 6, from which we can find that the proposed MD-VQA outperforms the compared VQA models on the LIVE-VQC database and ranks second on the KoNViD-1k database. This implies that the proposed MD-VQA remains competitive not only for compression-specific VQA issues but can also serve as a strong baseline for in-the-wild UGC VQA tasks.
## 6 Conclusion
In this paper, we focus on compressed UGC VQA issues. To meet the practical needs of live platforms, we construct a large-scale compressed UGC VQA database called **TaoLive**. Unlike common compression VQA databases that employ high-quality videos as source videos, the TaoLive database collects 418 source UGC videos that cover a wide quality range and generates the compressed videos by varying the CRF parameters of H.265. A well-controlled subjective experiment is conducted to gather the quality labels for the compressed UGC videos. Further, we propose a VQA model (MD-VQA) to assess the quality of the compressed UGC videos from the semantic, distortion, and motion dimensions. The extensive experimental results confirm the effectiveness of the proposed method.
## 7 Acknowledgement
This work was supported in part by NSFC (No.62225112, No.61831015), the Fundamental Research Funds for the Central Universities, National Key R&D Program of China 2021YFE0206700, and Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).
\begin{table}
\begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{KonViD-1k} & \multicolumn{2}{c}{LIVE-VQC} \\ \cline{2-5} & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) \\ \hline BRISQUE [28] & 0.657 & 0.658 & 0.593 & 0.638 \\ TLVQM [18] & 0.773 & 0.769 & 0.799 & 0.803 \\ VIDEVAL [39] & 0.783 & 0.780 & 0.752 & 0.751 \\ VSFA [20] & 0.785 & 0.797 & 0.716 & 0.775 \\ SimpleVQA [37] & **0.856** & **0.860** & **0.811** & **0.815** \\ \hline MD-VQA & **0.851** & **0.853** & **0.814** & **0.839** \\ \hline \end{tabular}
\end{table}
Table 6: Experimental performance of in-the-wild UGC databases.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{LIVE-WC} & \multicolumn{2}{c}{YT-UGC\({}^{+}\)} \\ \cline{2-5} & SRCC\(\uparrow\) & PLCC\(\uparrow\) & SRCC\(\uparrow\) & PLCC\(\uparrow\) \\ \hline BRISQUE [28] & 0.708 & 0.709 & 0.026 & 0.059 \\ TLVQM [18] & 0.562 & 0.583 & 0.155 & 0.184 \\ VIDEVAL [39] & 0.557 & 0.583 & 0.077 & 0.132 \\ VSFA [20] & 0.701 & 0.698 & 0.357 & 0.399 \\ SimpleVQA [37] & **0.711** & **0.723** & **0.388** & **0.394** \\ \hline MD-VQA & **0.742** & **0.728** & **0.440** & **0.448** \\ \hline \end{tabular}
\end{table}
Table 5: Experimental performance of cross databases, where the VQA models are all pretrained on the TaoLive database. |
2305.19141 | Taylorformer: Probabilistic Modelling for Random Processes including
Time Series | We propose the Taylorformer for random processes such as time series. Its two
key components are: 1) the LocalTaylor wrapper which adapts Taylor
approximations (used in dynamical systems) for use in neural network-based
probabilistic models, and 2) the MHA-X attention block which makes predictions
in a way inspired by how Gaussian Processes' mean predictions are linear
smoothings of contextual data. Taylorformer outperforms the state-of-the-art in
terms of log-likelihood on 5/6 classic Neural Process tasks such as
meta-learning 1D functions, and has at least a 14\% MSE improvement on
forecasting tasks, including electricity, oil temperatures and exchange rates.
Taylorformer approximates a consistent stochastic process and provides
uncertainty-aware predictions. Our code is provided in the supplementary
material. | Omer Nivron, Raghul Parthipan, Damon J. Wischik | 2023-05-30T15:50:24Z | http://arxiv.org/abs/2305.19141v2 | # Taylorformer: Probabilistic Predictions for
###### Abstract
We propose the Taylorformer for time series and other random processes. Its two key components are: 1) the LocalTaylor wrapper to learn how and when to use Taylor series-based approximations for predictions, and 2) the MHA-X attention block which makes predictions in a way inspired by how Gaussian Processes' mean predictions are linear smoothings of contextual data. Taylorformer outperforms the state-of-the-art on several forecasting datasets, including electricity, oil temperatures and exchange rates with at least 14% improvement in MSE on all tasks, and better likelihood on 5/6 classic Neural Process tasks such as meta-learning 1D functions. Taylorformer combines desirable features from the Neural Process (uncertainty-aware predictions and consistency) and forecasting (predictive accuracy) literature, two previously distinct bodies.
## 1 Introduction
Attention-based models such as GPT [1] have shown impressive results on natural language tasks, where the goal is to predict sequences of discrete values. These models are built on the standard multi-head attention introduced by Vaswani et al. [2]. However, the scientific community is still working out what adaptations are needed to get similarly impressive results for continuous problems (such as time-series or spatial processes), where the goal is to predict real-valued targets on a continuum. For example, the Transformer Neural Process (TNP) [3] uses standard attention mechanisms to model continuous problems in an autoregressive manner, and the Autoformer [4], tailored for time-series forecasting, incorporates a decomposition block which extracts different temporal trends and models them in a batch-generation manner.
Problem setup.Our goal is to model the distribution of a set of unobserved points (target), \(\mathbf{Y}_{T}\), at a specified set of indices, \(\mathbf{X}_{T}\), given a set of observed points and their associated indices (context), \(\{\mathbf{X}_{C},\mathbf{Y}_{C}\}\). Here, \(\mathbf{Y}_{T}\in\mathbb{R}^{n_{T}\times\alpha},\mathbf{X}_{T}\in\mathbb{R}^{ n_{T}\times\beta}\), \(\mathbf{Y}_{C}\in\mathbb{R}^{n_{C}\times\alpha}\) and \(\mathbf{X}_{C}\in\mathbb{R}^{n_{C}\times\beta}\), where \(n_{C}\) and \(n_{T}\) are the numbers of points in the context and target sets respectively. The context or target set may be permuted arbitrarily -- they need not be ordered.
Our Contributions.(1) We propose the Taylorformer, a model for continuous processes. It produces probabilistic predictions for interpolation and extrapolation settings and irregularly sampled data. It approximates a consistent process [5] and builds on insights derived from two well-known approaches for modelling continuous processes: Taylor series and Gaussian Processes (GPs). (2) We introduce the LocalTaylor wrapper. Taylor series can be used for local approximations of functions
but are only useful under certain conditions. LocalTaylor uses a neural network to learn how and when to use the information provided by a Taylor approximation. (3) We create the MHA-X block to incorporate an inductive bias from GPs (specifically, how the mean prediction is a linear smoothing of contextual values). (4) We introduce a masking procedure for attention units in the Taylorformer to allow training the ability to predict points at arbitrary locations.
We demonstrate Taylorformer's performance on several tasks in the Neural Process (NP) and time-series forecasting literature. Together, these evaluate the following i) both mean predictions and probabilistic predictions, ii) in both interpolation and extrapolation settings and iii) where the context/target sets can either be ordered or take arbitrary permutations. Taylorformer has a better log-likelihood/mean-squared-error than state-of-the-art models on 17/18 tasks, including image completion and electricity consumption. The Taylorformer shows 14-18%, 21-34% and 95-99% reductions in MSE compared to the next best model for forecasting transformers' oil temperatures, electricity consumption, and exchange rates, respectively. Figure 1 shows samples from the Taylorformer and the Autoformer for the ETT dataset [4].
This work focuses on building models that can capture the target data distribution better, so we do not focus on computational cost or inference efficiency.
## 2 Related Work
There has been a separation in the literature between those that use neural networks to model time-series and those that model random processes (the Neural Process family). We believe there is a shared project and are motivated by both.
Neural Processes and Consistency.The initial members of the Neural Process family (the Conditional Neural Process, CNP, [6] and the Neural Process, NP, [5]) set out to create a neural network version of Gaussian Processes (GPs), to combine the benefits of both. GPs are probabilistic models which specify distributions over functions. They can be used to perform regression and provide uncertainty estimates. Prior knowledge can be incorporated through the specification of a parametric kernel function. One of their desirable properties is that a GP is a consistent stochastic process (roughly, it does not matter what order you present the data in); however, they can be computationally intensive to use, and selecting appropriate kernels is a challenging task. One consequence of consistency is target equivariance which means that for a given probability model with a likelihood function, \(p_{\theta}\), for a given context set \(\{\mathbf{X}_{C},\mathbf{Y}_{C}\}\) and target set \(\{\mathbf{X}_{T},\mathbf{Y}_{T}\}\)
\[p_{\theta}(\mathbf{Y}_{\pi(T)}\,|\,\mathbf{Y}_{C},\mathbf{X}_{C},\mathbf{X}_{\pi(T)})=p_{\theta}(\mathbf{Y}_{T}\,|\,\mathbf{Y}_{C},\mathbf{X}_{C},\mathbf{X}_{T}) \tag{1}\]
where \(\pi(T)\) is a permutation of the target set.
The TNP [3] is a state-of-the-art autoregressive Neural Process based on the transformer architecture, with adaptations, such as a new mask, for continuous processes. They use a shuffling approach to encourage target equivariance through the training algorithm. Nguyen and Grover [3] define the desired target-equivariant model by means of its likelihood \(\tilde{p}_{\theta}\),
Figure 1: Taylorformer (left) can generate higher quality samples on the ETT [4] dataset than the state-of-the-art Autoformer [4] (right). This is representative of the general difference between these models at generation time. The task is to predict the next 48 hours (192 target points) given a 24-hour window (96 context points).
\(\mathbb{E}_{\pi}[p_{\theta}(\mathbf{Y}_{\pi(T)}\,|\,\mathbf{Y}_{C},\mathbf{X}_{C},\mathbf{X}_{\pi(T)})]\) where \(p_{\theta}\) is the likelihood of the base TNP model. The expectation is approximated using a Monte Carlo average over randomly sampled permutations during training. Inspired by the TNP, we use a similar shuffling procedure for training.
The initial Neural Process work (CNP, NP) enforced consistency through constraints to the architectures, but such constraints resulted in the underfitting of data. Advances included using attention in a way that maintains consistency [7], using convolutional architectures [8] and using hierarchical latent variables to capture global and local latent information separately [9]. The TNP outperformed all these by trading an increase in flexibility for the loss of architecture-enforced consistency.
Forecasting.There have been various improvements to standard Attention layers in the forecasting literature. Two state-of-the-art examples are Informer [10] and Autoformer [4]. Autoformer proposes a replacement for the classic multi-head attention block by using an auto-correlation mechanism, and the Informer offers a more efficient way to choose the most relevant keys for specific queries using ProbSparse attention.
One key difference between Taylorformer and Autoformer/Informer is that we model the targets autoregressively, while the latter uses a batch-generation approach (one-shot generation). Batch-generation approaches model the targets as conditionally independent given the context, analogous to CNPs [6]. The conditional independence assumption is restrictive, and allowing conditional dependencies between targets can improve results, as seen by how the NP [5] improves on the CNP.
Much other work focuses on making attention mechanisms more efficient [11; 12; 13; 14; 15]. And there is another strand combining both accuracy and efficiency [4; 10; 16]. Many are batch-generation models, so they are more efficient at forecast time than autoregressive ones since all the targets can be produced in one shot. As noted earlier, though, the gap that we aim to fill is how to make a better attention model for the data, putting efficiency aside.
## 3 Our approach: Taylorformer
Taylorformer is an autoregressive probabilistic model. The likelihood assigned to \(\mathbf{Y}_{T}\) given \(\mathbf{X}_{T},\mathbf{Y}_{C},\mathbf{X}_{C}\) is decomposed in the typical autoregressive manner
\[p(\mathbf{Y}_{T}|\mathbf{X}_{T},\mathbf{Y}_{C},\mathbf{X}_{C})=p(Y_{T}^{1}|X_ {T}^{1},\mathbf{Y}_{C},\mathbf{X}_{C})\prod_{i=2}^{n_{T}}p(Y_{T}^{i}|X_{T}^{i},\mathbf{Y}_{T}^{1:i},\mathbf{X}_{T}^{1:i},\mathbf{Y}_{C},\mathbf{X}_{C}) \tag{2}\]
where \(\mathbf{Y}_{T}\in\mathbb{R}^{n_{T}\times\alpha},\mathbf{X}_{T}\in\mathbb{R}^{ n_{T}\times\beta},\mathbf{Y}_{C}\in\mathbb{R}^{n_{C}\times\alpha},\mathbf{X}_{C} \in\mathbb{R}^{n_{C}\times\beta}\) and \(Y_{T}^{i}\in\mathbb{R}^{\alpha}\). The superscript notation \(1:i\) means all indices from \(1\) to \(i-1\).
The following equation reflects our full contribution. We model \(Y_{T}^{i}\) given \(X_{T}^{i}\) and a set of \(\{\mathbf{X},\mathbf{Y}\}\) information (which will contain context points and already-predicted target points) as follows, illustrated here using \(\{\mathbf{X}_{C},\mathbf{Y}_{C}\}\)
\[Y_{T}^{i}=\mathrm{LocalTaylor}(\mathrm{MHA-X-Net},X_{T}^{i}, \mathbf{X}_{C},\mathbf{Y}_{C};p,q)+\\ \mathrm{Linear}(\mathrm{MHA-X-Net}(\mathbf{W},X_{T}^{i}, \mathbf{X}_{C},\mathbf{Y}_{C}))\cdot Z_{i} \tag{3}\]
where LocalTaylor is detailed in section 3.1, and \(p\) and \(q\) are its hyperparameters. It is a wrapper around the neural network MHA-X-Net and uses approximations based on the Taylor series to augment predictions. Part of the LocalTaylor is the creation of the features \(\mathbf{W}\) (equation (8)). The MHA-X-Net is a neural network composed of standard multi-head attention units (referred to here as MHA-XY) and our new MHA-X block (section 3.2). \(Z_{i}\sim N(0,1)\). The model architecture is shown in Figure 2a. In section 3.3, we explain how the model is trained.
### LocalTaylor
Taylor polynomials (curtailed Taylor series) are useful for making local predictions for functions, in low-noise settings, based on information at neighbouring points. Such approaches are useful in machine learning contexts too. The usefulness of a Taylor approximation is especially clear in the modelling of dynamical systems [17; 18; 19; 20], where one goal is to emulate the fixed-time-step evolution of a system that evolves based on an ordinary differential equation (ODE) of the form
\(\frac{dy}{dt}=f(y)\) where an approach based on modelling residuals
\[y_{a+\delta t}=y_{a}+\delta t\,\mathrm{NeuralNet}(y_{a}) \tag{4}\]
often works far better than modelling the target outright, as in
\[y_{a+\delta t}=\mathrm{NeuralNet}(y_{a}) \tag{5}\]
where \(\mathrm{NeuralNet}\) is a machine learning function. In this example, equation (4) has a clear parallel with the first-order Taylor approximation of \(y_{a+\delta t}\) about \(y_{a}\), which is \(y_{a+\delta t}=y_{a}+\delta t\left.\frac{dy}{dt}\right|_{t=a}\). The neural network in equation (4) can be interpreted as a term that models all higher-order terms in the Taylor expansion.
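A toy sketch contrasting the two formulations is given below; the MLP architecture and dimensions are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

class ResidualStepper(nn.Module):
    """Emulates y_{a+δt} = y_a + δt · NeuralNet(y_a), cf. equation (4)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, y, dt):
        return y + dt * self.net(y)   # residual (Euler-like) update

class DirectStepper(nn.Module):
    """Emulates y_{a+δt} = NeuralNet(y_a), cf. equation (5)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, y, dt):
        return self.net(y)            # the step size must be learnt implicitly
```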
Taylor approximations are not obviously suited for making long-range predictions (i.e., where \(\delta t\) in equation (4) is large) nor when the data is too noisy. The naive fix for long-range predictions would be to include higher-order terms in the Taylor approximation. However, estimations of higher-order derivative terms will become more inaccurate given the successive estimations needed for each higher-order derivative. Noisy data would compound the difficulty in estimating accurate derivatives. Moreover, some functions will have a Taylor series that only converges within a specific range. Hence,
Figure 2: a) Taylorformer architecture corresponding to equation (3). LocalTaylor is a wrapper around the central neural network, MHA-X-Net. The channels on the right-hand side are MHA-X. The ones on the left are MHA-XY. The noted features are shown in the following equations: XY features with masking, eq. (13), XY features, eq. (14), and X features, eq. (15). b) Example mask for \(n_{C}=3\) and \(n_{T}=3\). Each token can attend to other shaded tokens in its row.
a Taylor approximation's appropriateness will depend on the function being approximated, the point, \(a\), used for the expansion and the distance between \(a\) and the predicted point.
The **LocalTaylor wrapper** models the mean prediction of \(Y_{T}^{i}\) as the sum of i) explicit Taylor expansion terms and ii) a neural network adjustment function that uses Taylor expansion terms as features. This approach confronts the challenge of working out when Taylor approximations are useful (which depends on the noisiness of the data, the number of terms used in the Taylor approximation, and the gap over which predictions are made) by using a neural network to _learn_ how to use this information when making predictions. Because the neural network is also given features based on the Taylor expansion, it can effectively cancel out the hard-coded Taylor expansion terms if they are deemed not useful. Concretely,
\[\mu_{Y_{T}^{i}}=\mathrm{LocalTaylor(NeuralNet,\mathbf{X}_{C},X_{T}^{i}, \mathbf{Y}_{C};p,q)} \tag{6}\]
where \(p\) and \(q\) are hyperparameters referring to the truncation order of the Taylor terms used for the hard-coded and neural network features, respectively. This is illustrated below for \(p=1\) and \(q=1\),
\[\mathrm{LocalTaylor(NeuralNet,\mathbf{X}_{C},X_{T}^{i},\mathbf{Y }_{C};1,1)=Y_{T}^{n(i)}+\Delta X_{T}^{n(i)}D_{T}^{n(i)}+\\ \mathrm{Linear}\left(\,\mathrm{NeuralNet(\mathbf{W},\mathbf{X}_ {C},X_{T}^{i},\mathbf{Y}_{C})}\right) \tag{7}\]
the first two terms are a hard-coded first-order Taylor approximation, and the third is the neural network adjustment function taking in Taylor expansion terms (**W**) as features. Below, we describe the terms \(Y_{T}^{n(i)},\Delta X_{T}^{n(i)},D_{T}^{n(i)},\mathbf{W}\), and how they are created.
We introduce the RepresentationExtractor to create the terms required to compute the terms for a Taylor series approximation. It takes the hyperparameter \(q\), and we show it for \(q=1\):
\[\mathrm{RepresentationExtractor(}X_{T}^{i},\mathbf{X}_{C},\mathbf{Y }_{C};1)=X_{T}^{n(i)},\mathbf{X}_{C}^{n(I)},Y_{T}^{n(i)},\mathbf{Y}_{C}^{n(I)},\\ \Delta X_{T}^{n(i)},\Delta\mathbf{X}_{C}^{n(I)},\Delta\mathbf{Y}_ {C}^{n(I)},\mathbf{D}_{C},\mathbf{D}_{C}^{n(I)},D_{T}^{n(i)} \tag{8}\]
where \(X^{n(i)}\) (the \(i^{th}\) element of \(\mathbf{X}^{n(I)}\)) is the nearest already-seen neighbour of \(X^{i}\), where the neighbour may come from either the context set or previously seen values of the target set. Mathematically, \(n(i)=\arg\min_{i^{\prime}\neq i,i^{\prime}\in C\text{ or }i^{\prime}<i}|X^{i}-X^{i^{\prime}}|\). The remaining terms are defined as \(\Delta\mathbf{X}^{n(I)}=\mathbf{X}-\mathbf{X}^{n(I)}\), \(\Delta\mathbf{Y}^{n(I)}=\mathbf{Y}-\mathbf{Y}^{n(I)}\) and \(\mathbf{D}=\frac{\Delta\mathbf{Y}^{n(I)}}{\Delta\mathbf{X}^{n(I)}}\), a data-based approximation of the derivative at \(\mathbf{X}\); \(\mathbf{D}^{n(I)}\) is an analogous data-based approximation of the derivative at \(\mathbf{X}^{n(I)}\). These definitions hold for \(\{\mathbf{X},\mathbf{Y}\}\) whether they are from the context or target sets. We handle ties in the \(\arg\min\) by randomly choosing one of the tied neighbours. We also positionally encode the \(\mathbf{X}\) variables, similarly to Vaswani et al. [2]. We then define \(\mathbf{W}\) as the concatenation of these features, \(\mathbf{W}=\mathrm{Concat[i\text{ for i in RepresentationExtractor}(X_{T}^{i},\mathbf{X}_{C},\mathbf{Y}_{C};1)]}\).
Inspiration for dealing with noisy data also comes from local smoothing techniques, such as LOESS, which deal with noise by adjusting the size of the data subsets used to fit their local functions. Similarly, we can smooth out noise using averages based on different data subsets. For example, instead of estimating derivatives based only on the nearest-neighbor point, we can take an average of derivatives where each is calculated based on different points near the estimation point.
The estimation of derivatives can be generalised to higher dimensions by using partial derivatives; we do this for our 2D regression experiments on the CelebA [21] and EMNIST [22] datasets.
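A minimal 1D sketch of the nearest-already-seen-neighbour features used by LocalTaylor is shown below; batching, tie handling and positional encoding are omitted, and the exact feature layout is an assumption.

```python
import numpy as np

def local_taylor_features(x_seen, y_seen, x_query):
    """x_seen, y_seen: already-observed points (context plus previously generated targets).
    Returns the neighbour value Y^{n(i)}, the offset ΔX^{n(i)}, and a data-based derivative
    estimate D^{n(i)} for a single query location x_query (assumes len(x_seen) >= 2)."""
    x_seen, y_seen = np.asarray(x_seen, float), np.asarray(y_seen, float)
    j = int(np.argmin(np.abs(x_seen - x_query)))                  # nearest seen neighbour n(i)
    dx = x_query - x_seen[j]                                      # ΔX^{n(i)}
    others = np.delete(np.arange(len(x_seen)), j)                 # neighbour's own neighbour
    k = others[np.argmin(np.abs(x_seen[others] - x_seen[j]))]
    deriv = (y_seen[j] - y_seen[k]) / (x_seen[j] - x_seen[k] + 1e-8)   # D^{n(i)}
    return y_seen[j], dx, deriv

# First-order Taylor guess about the neighbour: y_hat = y_n + dx * deriv
```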
### MHA-X block
It is worth drawing inspiration from how GPs model processes given they are well-liked probabilistic tools used to model various continuous problems. In fact, they are even included in scikit-learn [23]. Although they model a restricted class of functions, based on their wide usage, it is evident that the classes of functions which they do model are important to users.
For a GP, the mean of its predictions of \(Y_{T}^{i}\) is simply a linear combination of the contextual information \(Y_{C}\). This is seen from their predictive distribution for \(Y_{T}^{i}\) at \(X_{T}^{i}\) conditioned on \(\mathbf{Y}_{C}\) and \(\mathbf{X}_{C}\), \(Y_{T}^{i}\sim N(A,B)\), where
\[A=K(X_{T}^{i},\mathbf{X}_{C})K(\mathbf{X}_{C},\mathbf{X}_{C})^{-1}\mathbf{Y}_{C} \tag{9}\]
where \(K\) is a kernel specifying the covariance and is a function only of \(\mathbf{X}\). Therefore, equation (9) is a linear weighting of \(\mathbf{Y}_{C}\), with the weighting done by the kernel, which is a non-linear function of the \(\mathbf{X}\) values.
GPs weigh contextual information in a contrasting manner to that in a standard GPT-style attention model, where non-linear combinations of \(\{\mathbf{X}_{C},\mathbf{Y}_{C}\}\) (referred to as values, \(V\)) are weighted based on non-linear functions of _both_ \(\mathbf{X}_{C}\) and \(\mathbf{Y}_{C}\) (referred to as the queries \(Q\) and keys \(K\)) as follows
\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{10}\]
where \(V\in\mathbb{R}^{n_{C}\times d_{e}}\), \(K\in\mathbb{R}^{n_{C}\times d_{k}}\), \(Q\in\mathbb{R}^{n_{C}\times d_{k}}\).
This standard attention's (MHA-XY) approach to weighting is important for language tasks, as considerable weight should still be put on distant words if they have similar semantic meaning to closer-by words. But it is not an obvious requirement for regression tasks, as seen by how useful GPs are despite this mechanism being absent.
We propose the **MHA-X block**, which allows the mean prediction of \(Y_{T}^{i}\) to be modelled as a linear weighting of \(\mathbf{Y}_{C}\), with the weighting being a non-linear function, learnt using a neural network, of just \(\mathbf{X}_{C}\) and \(X_{T}^{i}\).
An MHA-X block with \(N\) layers is composed of \(N-1\) multi-head attention units
\[\mathrm{MultiHead}(f(\mathbf{X}_{C}),f(\mathbf{X}_{C}),f(\mathbf{X}_{C}))= \mathrm{Concat}(\text{head}_{1},...,\text{head}_{h})R^{O} \tag{11}\]
with \(\text{head}_{i}=\mathrm{Attention}(QR_{i}^{Q},KR_{i}^{K},VR_{i}^{V})\), where \(f(\mathbf{X}_{C})\) are non-linear functions only of features derived from \(\mathbf{X}_{C}\), and \(R_{i}^{Q},R_{i}^{K}\in\mathbb{R}^{d_{\text{multi}}\times d_{k}}\), \(R_{i}^{V}\in\mathbb{R}^{d_{\text{multi}}\times d_{\text{multi}}}\) and \(R^{O}\in\mathbb{R}^{hd_{e}\times d_{\text{multi}}}\). The last layer of the MHA-X block is composed of the multi-head attention unit:
\[\mathrm{MultiHead}(g(\mathbf{X}_{C}),g(\mathbf{X}_{C}),\mathbf{Y}_{C}) \tag{12}\]
where \(g(\mathbf{X}_{C})\) is the output of the previous layer in the MHA-X block.
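A hedged PyTorch sketch of the MHA-X block is given below: queries and keys are built only from X-derived features, and the final layer outputs a learned softmax smoothing of the Y values (cf. Eqs. (11)-(12)). The dimensions, residual connections and the use of a boolean attention mask are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHAXBlock(nn.Module):
    """Attention over X-features only; the last layer linearly smooths Y (Eq. (12))."""
    def __init__(self, d_x, d_model=64, n_heads=4, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(d_x, d_model)
        self.x_layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_layers - 1))
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)

    def forward(self, x_feats, y_values, blocked=None):
        # x_feats: (B, n, d_x); y_values: (B, n, 1); blocked: (n, n) bool, True = may not attend.
        h = self.embed(x_feats)
        for layer in self.x_layers:                               # Eq. (11): self-attention on X
            h = h + layer(h, h, h, attn_mask=blocked)[0]
        scores = self.q_proj(h) @ self.k_proj(h).transpose(1, 2) / h.shape[-1] ** 0.5
        if blocked is not None:
            scores = scores.masked_fill(blocked, float("-inf"))
        weights = F.softmax(scores, dim=-1)
        return weights @ y_values                                 # Eq. (12): linear smoothing of Y
```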
### Training
Our model is trained by maximising the likelihood, \(\Pr(\mathbf{Y}_{T}|\mathbf{X}_{T},\mathbf{X}_{C},\mathbf{Y}_{C})\). Two key differences exist between our training and a standard transformer-decoder model like GPT. The first is using a shuffling method to encourage consistency through training. For tasks where we wish to prioritise target-equivariance, we use a shuffling approach similar to Nguyen and Grover [3]: during training, we maximise the likelihood of \(p(\mathbf{Y}_{\pi(T)}|\mathbf{X}_{\pi(T)},\mathbf{Y}_{C},\mathbf{X}_{C})\) where \(\pi\) are permutations of the target set which are chosen randomly for each training batch.
The second difference is an approach to prevent data leakage when training to predict in arbitrary orders. To predict points in arbitrary orders, we need a mechanism to query points at arbitrary \(\mathbf{X}\) locations without revealing the corresponding \(\mathbf{Y}\) value during training. The classic GPT-style mask is not capable of this. Nguyen and Grover [3] introduced a mask and target-padding approach that deals with this. Our process is an alternative approach. The main difference is that the number of multiplications required for the scaled dot-product in equation (10) is \(O((n_{C}+2n_{T})^{3})\) for Nguyen and Grover [3], whereas ours is only \(O((n_{C}+n_{T})^{3})\) operations, though we reiterate that efficiency is not our work's focus. Our mechanism combines separate keys and queries for the attention mechanism with specific masking. The process is different for MHA-X and MHA-XY and is detailed below.
**MHA-XY.** For the first layer in the MHA-XY block, there are \(n_{C}+n_{T}\) queries, keys and values. The queries are
\[Q_{i}=\begin{cases}(X^{\text{fe},i},Y^{\text{fe},i},Y^{\text{seen},i},1)&\text {if }i\in C\\ (X^{\text{fe},i},0,Y^{\text{seen},i},0)&\text{if }i\in T\end{cases} \tag{13}\]
where \(X^{\text{fe},i}\) are features which are only functions of \(X^{i}\) and \(X^{j}\) for \(j\in C\cup\{j:j<i\}\), \(Y^{\text{fe},i}\) are features which are functions of \(Y^{i}\), \(X^{i}\), \(Y^{j}\) and \(X^{j}\) for \(j\in C\cup\{j:j<i\}\), and \(Y^{\text{seen},i}\) are features which are functions of \(X^{i}\), \(Y^{j}\) and \(X^{j}\) for \(j\in C\cup\{j:j<i\}\). For this work, \(X^{\text{fe},i}=[X^{i},X^{n(i)},\Delta X^{n(i)}]\), \(Y^{\text{fe},i}=[Y^{i},\Delta Y^{n(i)},D^{i}]\) and \(Y^{\text{seen},i}=[Y^{n(i)},D^{n(i)}]\), where these terms are defined in section 3.1. We mask \(Y^{\text{fe},i}\) so that when making predictions for the target set, we can query by \(Q_{i}\) (as we need to know the \(X\) information of the point we are looking to predict) without revealing the target value during training. The final item is a label indicating whether the variables \(Y^{\text{fe},i}\) which contain \(Y^{i}\) are masked (set to zero) or not. The keys and values are
\[K_{i}=V_{i}=(X^{\text{fe},i},Y^{\text{fe},i},Y^{\text{seen},i},1) \tag{14}\]
The masking mechanism is designed so that: (1) the context points only attend to themselves and (2) target points only attend to the previous target points and the context points. Figure 2(b) shows an example mask for \(n_{C}=3\) and \(n_{T}=3\).
For subsequent layers in MHA-XY, the queries, keys and values are the outputs of the previous MHA-XY layers.
**MHA-X.** For the first \(N-1\) layers, the inputs are the outputs of the previous layer, where the inputs to the first layer are
\[Q_{i}=K_{i}=V_{i}=X^{\text{fe},i} \tag{15}\]
For the final layer, the inputs are \(Q_{i}=K_{i}=O_{i}\), where \(O_{i}\) is the output of the previous MHA-X layer, and \(V_{i}=Y_{i}\). The same mask as for MHA-XY is used for MHA-X.
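A sketch of this mask is given below (cf. Fig. 2b): context tokens attend to the context set, and target token \(i\) attends to the context and to the targets strictly before it. The boolean convention (True = position may not be attended to) follows PyTorch's `attn_mask` and is an assumption; reading "context points only attend to themselves" as attending to the whole context set is also our interpretation.

```python
import torch

def taylorformer_mask(n_c: int, n_t: int) -> torch.Tensor:
    """Boolean mask of shape (n_c + n_t, n_c + n_t); True marks blocked key positions."""
    n = n_c + n_t
    blocked = torch.ones(n, n, dtype=torch.bool)
    blocked[:n_c, :n_c] = False            # context rows: attend to the context set
    for i in range(n_t):                   # target i: attend to context + targets before i
        blocked[n_c + i, : n_c + i] = False
    return blocked

# Example mask for Fig. 2b: taylorformer_mask(3, 3)
# For an additive float mask: torch.zeros(n_c + n_t, n_c + n_t).masked_fill(blocked, float("-inf"))
```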
## 4 Experiments
To assess the Taylorformer as a model for continuous processes, we evaluate on key tasks in the NP literature: 1D regression and 2D regression (tested on image completion). To assess it on time-series, we evaluate it on three key forecasting tasks: electricity consumption, the load of electricity transformers and exchange rate.
For these experiments, it was empirically better to truncate the explicit Taylor expansion terms at the zeroth order (setting \(p=0\)) and keep the first-order terms in \(\mathbf{W}\) (setting \(q=1\)), using the approximate derivative of the nearest neighbour. Therefore, we model the mean of \(Y_{T}^{i}\) as \(\mu_{Y_{T}^{i}}=\mathrm{LocalTaylor(NeuralNet,\mathbf{X}_{C},X_{T}^{i},\mathbf{ Y}_{C};0,1)}\).
The Taylorformer was trained on a 32GB NVIDIA V100S GPU. Further implementation details are below.
### Neural Process tasks
**Datasets.** For 1D regression, we generate three datasets corresponding to a meta-learning setup -- each dataset contains 100K sequences with \(\mathbf{X}\sim U[-2,2]\). We query Gaussian Processes (GPs) at those \(\mathbf{X}\) to give \(\mathbf{Y}\), with a random selection of kernel hyper-parameters as specified in Appendix A. Each dataset uses either the RBF, Matern or Periodic kernel for GP generation. The hold-out dataset includes sequences generated in the same manner as the training data.
For 2D regression, we use two datasets. The first comes from the balanced EMNIST [22] dataset, which contains monochrome images. Each pixel \(y_{i}\in\mathbf{R}^{1}\) is scaled to be in [-0.5, 0.5] and \(x_{i}\in\mathbf{R}^{2}\) with each dimension scaled to be in [-1, 1]. We train on only a subset of available classes (0-9, corresponding to handwritten numbers). There are two hold-out sets, the first includes unseen images from the same classes seen during training. The second includes images from unseen classes (10-46, corresponding to handwritten letters). The second dataset is CelebA [21]: coloured images of celebrity faces with each pixel \(y_{i}\in\mathbf{R}^{3}\) with each dimension scaled to be in [-0.5, 0.5] and \(x_{i}\in\mathbf{R}^{2}\) with each dimension scaled to be in [-1, 1].
**Implementation details.** Models were trained for 250K, 71K and 125K iterations with 32, 64, and 64 batch sizes for 1D regression, EMNIST and CelebA, respectively. Optimization was done using Adam with a \(10^{-4}\) learning rate. For 1D regression, \(n_{C}\sim U[3,97]\) and \(n_{T}=100-n_{C}\). For EMNIST and CelebA, \(n_{C}\sim U(6,200)\) and \(n_{T}\sim U(3,197)\). Full implementation details are found in the Appendix.
**Results.** The meta-learning 1D-regression and image completion experiments have been used extensively in the NP literature [5; 8; 7; 24; 3; 9], and we compare to NP [5], ANP [7] and TNP [3]. Table 1 shows that Taylorformer improves on the other methods for 5 out of 6 tasks. Figure 3 in Appendix B shows that for all 1D regression tasks, the validation loss is lowest for the Taylorformer. For the 2D regression, we perform best on both EMNIST tasks, but there is no improvement over the TNP for CelebA. We hypothesise that the noise in the CelebA dataset means it may benefit from taking averages of derivatives.
### Forecasting
**Datasets.** We use three datasets: 1) hourly electricity consumption data from 2013 to 2014.1 We pick client number 322 out of the 370 available clients. 2) 15-minute electricity transformers load (ETT) [10] and 3) daily exchange rate [25] from 1990-2016 (we pick the country denoted \(OT\)). For each dataset, we run four experiments in which each sequence has a fixed-length context set, \(n_{C}=96\) (with a randomly selected starting position), and a prediction length \(n_{T}\in\{96,192,336,720\}\). We standardise \(y_{i}\in\mathbf{R}\) for all experiments. \(x_{i}\in\mathbf{R}\) is scaled to be in [-1, 1] in each sequence -- only relative time matters to us.
Footnote 1: [https://drive.google.com/file/d/ljinfTAApPyuywW1P1hUpI3r1oJq@in1/view?usp=share_link](https://drive.google.com/file/d/ljinfTAApPyuywW1P1hUpI3r1oJq@in1/view?usp=share_link)
**Implementation details.** Models were trained for 40K iterations with batch sizes of 32. Optimization was done using Adam with a \(10^{-4}\) learning rate. All datasets are split chronologically into training, validation and test sets by the ratios 72:8:20, 69:11:20, and 69:11:20, respectively. Full implementation details are in Appendix A.
**Results.** The electricity, ETT and exchange rate datasets are used extensively in the literature, and we compare to Autoformer [4], Informer [10] and TNP [3]. For Autoformer and Informer, we keep the exact setup used in their papers concerning network and training parameters, so each model is trained on mean-squared error. We train TNP to maximise log-likelihood, and we set the number of network parameters to match ours since their paper does not implement these tasks. Further, we run a hyper-parameter search for TNP. We do not compare to the classic ARIMA model as it is outperformed by Autoformer and Informer on these datasets.
Table 2 shows that Taylorformer is notably better in terms of MSE for all forecasting tasks. We outperform the autoregressive TNP and the batch-generating Autoformer and Informer with 21-34%, 95-99% and 14-18% reductions in MSE compared to the closest model for the ETT, exchange and electricity tasks, respectively. Examples of our generated sequences can be seen in Figures 1 and 6 (Appendix). The likelihood results show a similar trend and are provided in Appendix B.
### Ablation study
We study what contributions are key to the final model performance by performing a 1D regression task on four equally sized models: a base transformer-decoder (just MHA-XY units), a model with only MHA-X, a model with the LocalTaylor wrapped around the base model, and a model with both contributions. Results in Figure 4 indicate that the combination of both contributions yields the best outcome.
\begin{table}
\begin{tabular}{|c|l c c c c|} \hline \hline
 & & \multicolumn{4}{c|}{Log-Likelihood} \\
 & & **Taylorformer** & TNP & ANP & NP \\ \hline
\multirow{3}{*}{1D} & RBF & \(\mathbf{4.13\pm 0.03}\) & \(3.50\pm 0.05\) & \(0.96\pm 0.02\) & \(0.24\pm 0.01\) \\
 & Matern & \(\mathbf{3.87\pm 0.05}\) & \(3.22\pm 0.05\) & \(1.07\pm 0.01\) & \(0.40\pm 0.02\) \\
 & Periodic & \(\mathbf{0.31\pm 0.02}\) & \(0.13\pm 0.04\) & \(-0.85\pm 0.05\) & \(-1.56\pm 0.00\) \\ \hline
\multirow{3}{*}{2D} & EMNIST-seen & \(\mathbf{2.18\pm 0.01}\) & \(1.59\pm 0.00\) & \(0.82\pm 0.04\) & \(0.62\pm 0.01\) \\
 & EMNIST-unseen & \(\mathbf{1.90\pm 0.03}\) & \(1.47\pm 0.01\) & \(0.68\pm 0.08\) & \(0.44\pm 0.00\) \\
 & CelebA & \(4.00\pm 0.02\) & \(\mathbf{4.13\pm 0.02}\) & \(2.78\pm 0.03\) & \(2.30\pm 0.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Log-likelihood on a test set. Higher is better. Taylorformer outperforms the SOTA TNP in 5/6 1D and 2D regression tasks. Each model is run five times to report log-likelihood with standard deviation results.
### Evaluating consistency
We evaluate target equivariance (equation (1)) of Taylorformer by computing the log-likelihood for sequences with permuted target points and taking the standard deviation. If consistent, all the likelihoods would be the same and the standard deviation would be zero. For our model the mean of the standard deviations is \(0.038\). Full results and figures are in Appendix B.
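A minimal sketch of this check is shown below; `log_likelihood` is only a stand-in for the trained model (a factorised Gaussian fitted to the context, so that the snippet runs on its own), and the number of permutations is an arbitrary choice.

```python
import numpy as np

def log_likelihood(ctx_y, tgt_y):
    """Stand-in for the trained model: factorised Gaussian fitted to the context."""
    mu, sigma = ctx_y.mean(), ctx_y.std() + 1e-6
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - 0.5 * ((tgt_y - mu) / sigma) ** 2))

rng = np.random.default_rng(0)
ctx_y, tgt_y = rng.normal(size=96), rng.normal(size=192)
lls = [log_likelihood(ctx_y, tgt_y[rng.permutation(len(tgt_y))]) for _ in range(20)]
print(np.std(lls))   # ~0 for this permutation-invariant stand-in; a nonzero spread signals inconsistency
```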
## 5 Discussion
**Limitations.** Two limitations of the LocalTaylor approach are as follows. First, when dealing with noisy data there is, for example, no clear recommendation on how many samples of derivative estimates to use when computing the average derivative; this is just a hyperparameter for now. Second, it would be better if we could also _learn_ how many of the hard-coded Taylor approximation terms to use instead of it just being a hyperparameter. An extension to augment the LocalTaylor approach would be to use a weighting function to more strongly weight data points closer to the point of estimation.
Taylorformer is an autoregressive model, so it is slower to generate predictions than batch-generation models. Moreover, the extra steps in the RepresentationExtractor (such as the nearest-neighbour search) have an associated computational cost affecting training and inference -- one which is absent from the other NP models. Taylorformer is therefore not well suited to tasks where fast sampling and generation of sequences are required; a one-shot model such as the Autoformer [4] may be more suitable.
Another limitation is that autoregressive models like ours are prone to error accumulation, unlike batch-generation models. It is well-known in the dynamical modeling literature [26] that predictions from an autoregressive model slowly deviate from target values. A fully consistent process would address this, but as noted in section 4.4, Taylorformer does not have strict target equivariance. Nevertheless, the shuffling approach to training encourages consistency and is one part of what we expect to be the solution to the error accumulation issue.
**Conclusions.** The Taylorformer is built on two key components: the MHA-X block and the LocalTaylor wrapper. Although the MHA-X block relies on using attention-based models, the LocalTaylor approach could be easily deployed for use in other classes of models too, such as RNNs, if preferred by the modeller.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline \hline
 & & \multicolumn{4}{c|}{MSE} \\
 & & **Taylorformer** & TNP & Autoformer & Informer \\ \hline
\multirow{4}{*}{ETT} & 96 & \(\mathbf{0.00030\pm 0.00001}\) & \(0.00041\pm 0.00004\) & \(0.43\pm 0.21\) & \(0.40\pm 0.20\) \\
 & 192 & \(\mathbf{0.00029\pm 0.00000}\) & \(0.00044\pm 0.00008\) & \(0.57\pm 0.26\) & \(0.65\pm 0.30\) \\
 & 336 & \(\mathbf{0.00030\pm 0.00001}\) & \(0.00038\pm 0.00001\) & \(0.67\pm 0.32\) & \(0.72\pm 0.34\) \\
 & 720 & \(\mathbf{0.00029\pm 0.00000}\) & \(0.00039\pm 0.00004\) & \(0.87\pm 0.42\) & \(0.85\pm 0.40\) \\ \hline
\multirow{4}{*}{Exchange} & 96 & \(\mathbf{0.002\pm 0.000}\) & \(0.040\pm 0.030\) & \(0.41\pm 0.20\) & \(0.57\pm 0.26\) \\
 & 192 & \(\mathbf{0.002\pm 0.000}\) & \(0.079\pm 0.035\) & \(0.59\pm 0.28\) & \(1.53\pm 0.80\) \\
 & 336 & \(\mathbf{0.002\pm 0.000}\) & \(0.067\pm 0.025\) & \(0.96\pm 0.46\) & \(3.28\pm 1.4\) \\
 & 720 & \(\mathbf{0.001\pm 0.000}\) & \(0.152\pm 0.091\) & \(1.08\pm 0.51\) & \(1.47\pm 0.76\) \\ \hline
\multirow{4}{*}{Electricity} & 96 & \(\mathbf{0.036\pm 0.000}\) & \(0.042\pm 0.002\) & \(0.130\pm 0.039\) & \(0.133\pm 0.005\) \\
 & 192 & \(\mathbf{0.037\pm 0.000}\) & \(0.045\pm 0.001\) & \(0.117\pm 0.001\) & \(0.148\pm 0.011\) \\
 & 336 & \(\mathbf{0.040\pm 0.001}\) & \(0.048\pm 0.002\) & \(0.143\pm 0.011\) & \(0.150\pm 0.013\) \\
 & 720 & \(\mathbf{0.039\pm 0.001}\) & \(0.048\pm 0.003\) & \(0.216\pm 0.040\) & \(0.129\pm 0.012\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: MSE on a test set. Lower is better. Taylorformer outperforms three state-of-the-art models on the ETT, exchange and electricity forecasting tasks (details in text). Each task is trained and evaluated with different prediction lengths \(n_{T}\in\{96,192,336,720\}\) given a fixed 96-context length. Each model is run five times to report MSE with standard deviation results.
**Broader Impacts.** The main negative impact of the work would be the \(CO_{2}\) emissions from training. We foresee no other negative societal impacts.
Our code is linked in the supplementary material.
|
2308.07383 | Density-functional description of materials for topological qubits and
superconducting spintronics | Interfacing superconductors with magnetic or topological materials offers a
playground where novel phenomena like topological superconductivity, Majorana
zero modes, or superconducting spintronics are emerging. In this work, we
discuss recent developments in the Kohn-Sham Bogoliubov-de Gennes method, which
allows to perform material-specific simulations of complex superconducting
heterostructures on the basis of density functional theory. As a model system
we study magnetically-doped Pb. In our analysis we focus on the interplay of
magnetism and superconductivity. This combination leads to Yu-Shiba-Rusinov
(YSR) in-gap bound states at magnetic defects and the breakdown of
superconductivity at larger impurity concentrations. Moreover, the influence of
spin-orbit coupling and on orbital splitting of YSR states as well as the
appearance of a triplet component in the order parameter is discussed. These
effects can be exploited in S/F/S-type devices (S=superconductor,
F=ferromagnet) in the field of superconducting spintronics. | Philipp Rüßmann, David Antognini Silva, Mohammad Hemmati, Ilias Klepetsanis, Björn Trauzettel, Phivos Mavropoulos, Stefan Blügel | 2023-08-14T18:08:27Z | http://arxiv.org/abs/2308.07383v1 | # Density-functional description of materials for topological qubits and superconducting spintronics
###### Abstract
Interfacing superconductors with magnetic or topological materials offers a playground where novel phenomena like topological superconductivity, Majorana zero modes, or superconducting spintronics are emerging. In this work, we discuss recent developments in the Kohn-Sham Bogoliubov-de Gennes method, which allows to perform material-specific simulations of complex superconducting heterostructures on the basis of density functional theory. As a model system we study magnetically-doped Pb. In our analysis we focus on the interplay of magnetism and superconductivity. This combination leads to Yu-Shiba-Rusinov (YSR) in-gap bound states at magnetic defects and the breakdown of superconductivity at larger impurity concentrations. Moreover, the influence of spin-orbit coupling and on orbital splitting of YSR states as well as the appearance of a triplet component in the order parameter is discussed. These effects can be exploited in S/F/S-type devices (S=superconductor, F=ferromagnet) in the field of superconducting spintronics.
## I Introduction
Systems where superconductivity meets magnetism or topological materials are a rapidly emerging field. This is not only due to the advent of topological superconductivity, where Majorana zero modes can form building blocks for topological quantum computing [1; 2], but it also has become of interest in the field of superconducting spintronics [3; 4; 5]. Novel states of matter generally can exist in complex heterostructures of, for instance, superconductors and magnetic materials, chains of magnetic atoms on superconductors, or at interfaces between superconductors and topological insulators. While a plethora of different experimental techniques allows gaining invaluable insights and simple models can explain general trends, a material-specific theory allows for predictive modeling of complex material combinations on the atomic scale. The Kohn-Sham Bogoliubov-de Gennes (KS-BdG) method provides this capability, which offers new ways to understand the physics of complex superconducting heterostructures and enables pathways toward computational material optimization. Next to experiments and theory, computational physics forms the third pillar of modern physics that is indispensable in overcoming the immense material challenges in the quickly developing field around inhomogeneous superconducting materials.
Density-functional theory (DFT) for superconductors has been formulated since the works of Oliveira, Gross and Kohn [6]. In the spirit of the Bogoliubov-de Gennes approach, the theory extends the single-particle picture of DFT to an electron-hole space with a coupling that, for conventional \(s\)-wave superconductors, depends on the electron-phonon interaction. This method is typically referred to as the Kohn-Sham Bogoliubov-de Gennes method (KS-BdG). There are two different approaches in the DFT of superconductivity. The first approach is to calculate the electron-phonon coupling entirely from first principles, leading to a fully ab-initio prediction of superconductors [7; 8; 9]. The second approach, analogous to the LDA+\(U\) approach to strongly correlated systems in DFT, is to relate the coupling strength to an empirical parameter \(\lambda\), which is set by fitting the results to some experimentally accessible physical quantity, e.g. the superconducting gap [10; 11; 12]. While the first approach is realistic in systems with only few atoms in the unit cell, it becomes numerically very expensive in systems that require large computational unit cells. This is particularly the case for disordered systems where the second approach is favorable as a compromise between numerical demand and flexibility. It was successfully used to study conventional \(s\)-wave superconductors [10; 11; 12; 13], heterostructures of \(s\)-wave superconductors and non-superconductors [14; 15; 16; 17; 18], or impurities embedded into superconductors [19; 20; 21; 22].
In this work we focus on recent developments in the KS-BdG method using the parameter-based approach. This was combined with the Korringa-Kohn-Rostoker Green function formalism (KKR) based on multiple-scattering theory [23], founded on the seminal work of Suvasini _et al._[24] Recently, the KS-BdG approach was also implemented into the open-source JuKKR code [25], that allows a full-potential relativistic description of complex materials [12]. As a prime example for the capabilities of the KS-BdG method we choose to focus our attention on superconducting Pb doped with magnetic transition-metal defects. Lead is a well studied elemental superconductor that has low enough
electron density so that the \(3d\) elements Ti, V, Cr, Mn, Fe, and Co are magnetic when diluted in it. We demonstrate two approaches to describe magnetic impurities in superconducting Pb. On the one hand we introduce the ab-initio impurity embedding scheme that allows to describe impurities in the dilute limit as isolated defects in an otherwise perfectly ordered host crystal. On the other hand, we employ the Coherent Potential Approximation (CPA) to deal with finite concentrations of disorder. Both approaches are unique features of a Green's function method [26].
Using these two methods we are able to study the appearance of Yu-Shiba-Rusinov (YSR) states [27; 28; 29] that appear as bound states inside the superconducting gap of Pb [30]. With increasing concentration we find a broadening of these states with the formation of impurity bands up to a critical concentration where superconductivity breaks down. This critical concentration depends on the properties of the respective transition metal impurity. Furthermore, we comment on the appearance of a triplet order parameter that can survive over long distances in superconductor/ferromagnet (SF) junctions. Triplet Cooper pairs can potentially be exploited in SFS-type devices in the field of superconducting spintronics. Overall these examples demonstrate the merit of KS-BdG calculations for superconducting spintronics and quantum information technology.
This work is structured as follows. First, in Sec. II the KS-BdG method within the framework of KKR is reviewed, and the ab-initio impurity embedding and CPA approaches to disorder embedded in superconductors is introduced. Then, the capabilities of KS-BdG simulations are demonstrated at the example of magnetically-doped Pb in Sec. III. Finally, in Sec. IV the merit of KS-BdG simulations in the context of materials science involving superconductors, especially in the fields of topological qubits and superconducting spintronics, is discussed.
## II Methods
### The Kohn-Sham Bogoliubov-de Gennes method within KKR
The Bogoliubov-de Gennes (BdG) method is a powerful theoretical tool used for a microscopic description of superconductors. The advantages of the BdG method are the possibility to describe the excitation spectrum of superconductors with the ability of dealing with inhomogeneous systems, defects in superconductors and complex heterostructures [31]. In the context of DFT, the KS-BdG Hamiltonian can be written as [10; 12; 24; 32]
\[H_{\rm BdG}^{\rm KS}({\bf x})=\left(\begin{array}{cc}H_{0}^{ \rm KS}({\bf x})-E_{\rm F}&\Delta_{\rm eff}({\bf x})\\ \Delta_{\rm eff}^{*}({\bf x})&E_{\rm F}-\left(H_{0}^{\rm KS}({\bf x})\right)^{ \dagger}\end{array}\right), \tag{1}\]
where \(E_{\rm F}\) is the Fermi energy. Here, we have introduced the normal state Hamiltonian
\[H_{0}^{\rm KS}({\bf x})=-\nabla^{2}+V_{{\rm eff},0}({\bf x})+{\bf B}_{\rm eff} ({\bf x})\cdot\mathbf{\sigma}, \tag{2}\]
which is the conventional Kohn-Sham Hamiltonian describing the normal state (Rydberg atomic units are used where \(\hbar=1\)), as well as the effective superconducting pairing potential \(\Delta_{\rm eff}\). Note that the potential term in Eq. (2) consists of a non-magnetic (\(V_{{\rm eff},0}\)) and a magnetic part of the potential (last term on r.h.s.), where \(\mathbf{\sigma}\) denotes the vector of Pauli matrices. The effective single-particle potentials in Eq. (1) are functionals of the charge density \(\rho({\bf x})\), the magnetization density, \(\mathbf{\mu}({\bf x})\), and the anomalous density \(\chi({\bf x})\) (which is the superconducting order parameter) [24; 32],
\[V_{{\rm eff},0}({\bf x}) = V_{\rm ext}({\bf x})+2\int\frac{\rho({\bf x}^{\prime})}{|{\bf x}-{\bf x}^{\prime}|}d{\bf x}^{\prime}+\frac{\delta E_{\rm xc}[\rho,\mathbf{\mu},\chi]}{\delta\rho({\bf x})}, \tag{3}\] \[{\bf B}_{\rm eff}({\bf x}) = {\bf B}_{\rm ext}({\bf x})+\frac{\delta E_{\rm xc}[\rho,\mathbf{\mu},\chi]}{\delta\mathbf{\mu}({\bf x})}, \tag{4}\] \[\Delta_{\rm eff}({\bf x}) = \lambda_{i}\chi({\bf x}). \tag{5}\]
For the functional derivatives of the exchange correlation functional \(E_{\rm xc}\), the approximations used routinely in DFT can be used. This simplifies the last term in Eq. (3) to the standard formulation of the local density or generalized gradient approximations (LDA, GGA). The form of the pairing potential is an approximation introduced by Suvasini _et al._[24], who introduced the set of effective coupling constants \(\lambda_{i}\). The different \(\lambda_{i}\) are allowed to vary in the different sites \(i\) of a computational unit cell which enables the description of inhomogeneous systems consisting of superconductors and non-superconductors such as magnets [19; 22], normal metals [14; 15; 33; 18], or topological insulators [16; 17].
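To make the block structure of Eq. (1) concrete, the following toy sketch assembles a KS-BdG matrix from a one-dimensional tight-binding stand-in for \(H_{0}^{\rm KS}\) and a constant \(s\)-wave pairing; all numbers are illustrative assumptions and do not correspond to the self-consistent potentials of Eqs. (3)-(5).

```python
import numpy as np

n = 6                                             # orbitals in the toy normal-state block
t, E_F, delta0 = 1.0, 0.3, 0.05                   # hopping, Fermi level, s-wave gap (made up)
H0 = -t * (np.eye(n, k=1) + np.eye(n, k=-1))      # 1D tight-binding chain as H_0^KS
Delta = delta0 * np.eye(n)                        # constant (orbital-diagonal) pairing

H_bdg = np.block([
    [H0 - E_F * np.eye(n),  Delta],
    [Delta.conj(),          E_F * np.eye(n) - H0.conj().T],
])

E = np.linalg.eigvalsh(H_bdg)
print(np.allclose(np.sort(E), np.sort(-E)))       # particle-hole symmetric spectrum
print(np.min(np.abs(E)) >= delta0 - 1e-12)        # excitation gap of at least |Delta|
```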
In the KKR formalism it is straight-forward to include the BdG formalism where using multiple-scattering theory the Green's function for the KS-BdG Hamiltonian, Eq. (1), is found [10; 24]. This results in a generalized structure
of the Green's function (\(\mathbf{x}\) is the position vector and \(E\) the energy)
\[\underline{\underline{G}}(\mathbf{x},\mathbf{x}^{\prime};E)=\left(\begin{array}{ cc}G^{e,e}(\mathbf{x},\mathbf{x}^{\prime};E)&G^{e,h}(\mathbf{x},\mathbf{x}^{\prime};E) \\ G^{h,e}(\mathbf{x},\mathbf{x}^{\prime};E)&G^{h,h}(\mathbf{x},\mathbf{x}^{\prime };E)\end{array}\right), \tag{6}\]
that consists of four particle-hole blocks that enter in the KS-BdG Hamiltonian of Eq. (1). Analogous to the conventional KKR formalism describing the electronic structure in the normal state, the charge \(\rho\), magnetization \(\mathbf{\mu}\), and anomalous density \(\chi\) can be computed from the (off-)diagonal blocks of the Green's function [10; 24]
\[\rho = -\frac{1}{\pi}\int_{-\infty}^{\infty}\mathrm{d}E\int\mathrm{d}r\,f(E)\Im\operatorname{Tr}[G^{e,e}(\mathbf{x},\mathbf{x};E)]+[1-f(E)]\Im\operatorname{Tr}[G^{h,h}(\mathbf{x},\mathbf{x};E)], \tag{7}\] \[\mathbf{\mu} = -\frac{1}{\pi}\int_{-\infty}^{\infty}\mathrm{d}E\int\mathrm{d}r\,f(E)\Im\operatorname{Tr}[\mathbf{\sigma}G^{e,e}(\mathbf{x},\mathbf{x};E)]+[1-f(E)]\Im\operatorname{Tr}[\mathbf{\sigma}G^{h,h}(\mathbf{x},\mathbf{x};E)], \tag{8}\] \[\chi = -\frac{1}{4\pi}\int_{-\infty}^{\infty}\mathrm{d}E\int\mathrm{d}r\,[1-2f(E)]\left(\Im\operatorname{Tr}[G^{e,h}(\mathbf{x},\mathbf{x};E)]+\Im\operatorname{Tr}[G^{h,e}(\mathbf{x},\mathbf{x};E)]\right). \tag{9}\]
Here, \(f(E)\) refers to the Fermi-Dirac distribution and the trace is taken over the combined angular-momentum and spin index \(L=(\ell,m,s)\), used in the expansion of the KKR Green's function around scattering sites [23]. Note that this form of the anomalous density assumes \(s\)-wave pairing [10; 24]. It is however possible to generalize the form of \(\lambda\) and \(\chi\) from simple real-valued scalars to more complex matrix forms in \(L\). This approach can then be used to include a triplet order parameter in the self-consistent solution of the KS-BdG equation [34]. During this self-consistency cycle, for a given fixed set of \(\lambda_{i}\), \(V_{\mathrm{eff}}\) and \(\chi\) are converged, which yields observables such as the superconducting gap in the band structure.
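Continuing the toy model from the previous sketch, the snippet below obtains the BdG Green's function by direct matrix inversion and splits it into the four particle-hole blocks of Eq. (6); in the KKR method these blocks are instead constructed from multiple-scattering theory, and the densities of Eqs. (7)-(9) follow from energy integrals over them.

```python
import numpy as np

n = 6
t, E_F, delta0 = 1.0, 0.3, 0.05
H0 = -t * (np.eye(n, k=1) + np.eye(n, k=-1))
Delta = delta0 * np.eye(n)
H_bdg = np.block([[H0 - E_F * np.eye(n), Delta],
                  [Delta.conj(), E_F * np.eye(n) - H0.conj().T]])

z = 0.02 + 1e-3j                                  # complex energy inside the gap
G = np.linalg.inv(z * np.eye(2 * n) - H_bdg)      # BdG Green function by direct inversion
G_ee, G_eh = G[:n, :n], G[:n, n:]                 # electron-electron / electron-hole blocks
G_he, G_hh = G[n:, :n], G[n:, n:]                 # hole-electron / hole-hole blocks
print(np.abs(G_eh).max() > 0.0)                   # pairing couples the e and h sectors
```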
### Impurities and disorder in superconductors
The Green's function formulation of KKR has the distinct advantage that impurities and disorder are easy to describe. This is due to the use of the Dyson equation that allows to connect the Green's function for a host crystal \(G_{\mathrm{host}}\), which is routinely calculated for periodic systems with the KKR method, to the Green's function \(G_{\mathrm{imp}}\) that describes an impurity embedded into the perfectly ordered host crystal. With \(\Delta V=V_{\mathrm{imp}}-V_{\mathrm{host}}\), the impurity Dyson equation reads
\[G_{\mathrm{imp}}(\mathbf{x},\mathbf{x}^{\prime};E)=G_{\mathrm{host}}( \mathbf{x},\mathbf{x}^{\prime};E)+\int\mathrm{d}^{3}x\,G_{\mathrm{host}}( \mathbf{x},\mathbf{x}^{\prime\prime};E)\Delta V(\mathbf{x}^{\prime\prime})G_{ \mathrm{imp}}(\mathbf{x}^{\prime\prime},\mathbf{x}^{\prime};E). \tag{10}\]
As in regular DFT for periodic systems, the impurity's charge density is related to the impurity potential via the Poisson equation. Thus, the impurity embedding requires a self-consistent solution of the impurity Dyson equation with updating \(\Delta V\). However, since impurities are typically screened by the host's electrons, there is only a small region around the impurity where \(\Delta V\neq 0\). Thus the impurity Dyson equation only needs to be solved in a small _impurity cluster_ around the defect atom, as illustrated in Fig. 1(a). This makes the Green's-function-based ab-initio impurity embedding very efficient, especially for the description of the dilute limit of defects. In contrast, wave-function-based DFT requires periodically repeated supercells, that typically have to be large in order to avoid spurious interaction between periodic images of the impurities.
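A matrix cartoon of the impurity Dyson equation, Eq. (10), is given below; the host Hamiltonian, the cluster size and the perturbation strength are made-up numbers, and in the KKR code the corresponding quantities additionally carry site, orbital and particle-hole (BdG) indices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                             # size of the impurity-cluster basis (toy)
H_host = rng.normal(size=(n, n)); H_host = (H_host + H_host.T) / 2.0
z = 0.1 + 1e-2j                                   # complex energy
G_host = np.linalg.inv(z * np.eye(n) - H_host)    # host Green function on the cluster

dV = np.zeros((n, n)); dV[0, 0] = 0.5             # screened perturbation, localised on the defect

# G_imp = G_host + G_host dV G_imp   =>   G_imp = (1 - G_host dV)^(-1) G_host
G_imp = np.linalg.solve(np.eye(n) - G_host @ dV, G_host)

# cross-check against direct inversion of the perturbed Hamiltonian
G_direct = np.linalg.inv(z * np.eye(n) - (H_host + dV))
print(np.allclose(G_imp, G_direct))               # True
```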
An alternative approach to the dilute limit treated within the ab-initio impurity embedding is to describe random disorder through alloying. Disorder at finite concentrations can be described within the Coherent Potential Approximation (CPA) [35; 36]. In this Green-function based method, an effective medium is defined by a local Green function and an averaged local scattering \(\bar{t}\)-matrix. These are found self-consistently by an implicit equation, where the known \(t\)-matrices of the alloy atoms enter, which are derived directly from the atom's potential. In contrast to the impurity embedding, which includes a cluster of neighboring atoms around the defect, the CPA is a single-site theory that is, however, known to work well in metallic alloys in three dimensions, as far as average quantities are concerned [23].
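The single-site self-consistency underlying the CPA can be illustrated with a scalar toy model. The sketch below (a box-shaped host band with assumed on-site potentials and concentration) iterates the standard condition that the configuration-averaged site Green's function reproduces the effective medium; it is meant only to convey the idea and is not the KKR-CPA implementation.

```python
import numpy as np

D = 1.0                       # half-bandwidth of the host band
eta = 1e-3                    # small imaginary part (retarded Green function)
x = 0.1                       # impurity concentration (assumed value)
v_A, v_B = 0.0, 0.8           # on-site potentials of host and impurity (assumed values)

def g_host(z):
    """Local Green function of a clean band with a box-shaped DOS of width 2D."""
    return (np.log(z + D) - np.log(z - D)) / (2.0 * D)

energies = np.linspace(-1.5, 1.5, 301)
dos_avg = np.empty_like(energies)
for i, E in enumerate(energies):
    z = E + 1j * eta
    sigma = (1 - x) * v_A + x * v_B                    # virtual-crystal starting guess
    for _ in range(500):
        g_eff = g_host(z - sigma)                      # effective-medium local Green function
        g_A = g_eff / (1.0 - (v_A - sigma) * g_eff)    # host site embedded in the medium
        g_B = g_eff / (1.0 - (v_B - sigma) * g_eff)    # impurity site embedded in the medium
        g_avg = (1 - x) * g_A + x * g_B
        sigma_new = sigma + 1.0 / g_eff - 1.0 / g_avg  # CPA condition: g_avg = g_eff
        sigma = 0.5 * sigma + 0.5 * sigma_new          # damped fixed-point update
        if abs(sigma_new - sigma) < 1e-10:
            break
    dos_avg[i] = -g_host(z - sigma).imag / np.pi       # configuration-averaged DOS

print(f"averaged DOS at E=0: {dos_avg[len(energies)//2]:.3f}")
```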
In this work, we employ both ab-initio impurity embedding _and_ CPA to describe transition metal doping in the \(s\)-wave superconductor Pb. The combination of the two approaches allows to study properties of single defects as well as the formation of impurity bands and the breakdown of superconductivity with increasing magnetic doping. Both approaches are formulated in the KS-BdG method where Green's functions, potentials for host and impurity atoms, as well as \(t\)-matrices entering the self-consistent equations of CPA contain particle and hole components leading to the four-component Green's function given in Eq. (6).
### Computational details
Our calculations are performed with the JuKKR code [25] with the help of the AiiDA-KKR interface [37; 38] to the AiiDA infrastructure [39]. We use the experimental lattice constant of 4.95 Å for bulk Pb, which crystallizes in the face-centered cubic crystal structure [40] shown as an inset in Fig. 2(a). The transition metal impurities are placed in substitutional positions in impurity clusters that include the first shell of neighboring atoms, or within the (single-site) CPA. The superconducting state is calculated based on the solution of the Bogoliubov-de Gennes equation incorporated into the relativistic Korringa-Kohn-Rostoker Green function (KKR) method as described in detail in Ref. [12]. Scalar relativistic corrections are included, and we perform two kinds of calculations, either (i) including or (ii) neglecting the effects of spin-orbit coupling.
We use an angular momentum cutoff of \(\ell_{\mathrm{max}}=2\) and include corrections for the exact shape of the cells around the atoms (i.e. full-potential calculations) [41; 42]. We use the local density approximation (LDA) [43] for the parametrization of the normal state exchange correlation functional and the semi-phenomenological approach of Suvasini _et al._[24] for the superconducting coupling. The coupling constant \(\lambda\) is set such that the superconducting gap of bulk Pb reproduces the experimental value of \(\Delta\approx 1.4\,\mathrm{meV}\)[44; 30]. This value of \(\lambda\) is then kept constant in the subsequent CPA calculations that include transition metal impurities in Pb. Both in the calculations for impurity embedding and within the CPA, we set \(\lambda\) to zero on the impurity site. Since the impurity concentration \(x\) is low (typically \(x<0.1\%\)), a constant value of \(\lambda\) independent of \(x\) is a realistic assumption. Since we wish to describe small concentrations of magnetic impurities below the percolation limit, the paramagnetic state is a more realistic model than a ferromagnetic calculation. We follow the concept of the disordered local moments (DLM) within the CPA in order to describe the magnetically-disordered state [45]. Thus, we define two species of magnetic defects with opposite magnetic moment orientations, e.g., \(\mathrm{Mn}^{\uparrow}\) and \(\mathrm{Mn}^{\downarrow}\). Then, the paramagnetic state \(\mathrm{Pb}_{1-x}\mathrm{Mn}_{x}\) is calculated as an alloy of three components, \(\mathrm{Pb}_{1-x}\mathrm{Mn}_{x/2}^{\uparrow}\mathrm{Mn}_{x/2}^{\downarrow}\).
## III Results
### SOC-induced splitting of Yu-Shiba-Rusinov states
Lead is a heavy metal where spin-orbit coupling (SOC) effects are particularly pronounced. This is visible in the density of states (DOS), which is shown with and without SOC in Fig. 2(a). In the superconducting state, Pb is a two-band superconductor [46; 47] due to different sheets that compose the Fermi surface [9; 11]. This results in a gap anisotropy [11] that is seen as the shoulder at the edge of the coherence peak of the Pb host crystal (grey background spectra in Fig. 3(b)). With SOC, however, the signature of this anisotropy is reduced, which may explain why some experiments are not able to observe the two-band superconductivity of Pb [44], whereas it is visible in other
Figure 1: Impurity embedding and random disorder within the KKR method. **(a)** In the ab-initio impurity embedding, a defect introduces a change in the potential locally around the defect site. This is typically screened by the electrons of the host crystal so that the area where \(\Delta V\neq 0\) is often short-ranged. **(b)** Effective medium of the Coherent Potential Approximation (CPA). The effective medium is constructed from the constituents of the random alloy in such a way that, on average, the individual components do not introduce any additional scattering.
experiments [46; 47].
We embedded the \(3d\) series of transition metal (TM) atoms as substitutional defects in bulk Pb. The inset in Fig. 2(b) shows the resulting impurity spin moment. We find that Ti, V, Cr, Mn, Fe and Co impurities (\(Z=22\) to 27) develop a spontaneous magnetic moment that follows Hund's second rule with a maximum of the magnetic moment for half-filled \(d\)-shells. When the \(d\)-shell is unoccupied (Sc) or almost full (Ni, Cu and Zn), the impurities turn out to be non-magnetic. These trends are well known for transition metal impurities in non-magnetic metals with low electron density, and do not depend on whether we neglect or include SOC, as seen from the negligible differences in the impurity moments. However, SOC effects can be uncovered in the impurity DOS, shown in Fig. 2(b). SOC induces splittings on top of the prevailing crystal field splitting in \(e_{g}\) and \(t_{2g}\) states, typically present in fcc crystal structures [21]. The SOC-induced splitting is particularly pronounced in the majority spin channel of Fe impurities, as highlighted by the black arrows. Moreover, it is well-known that SOC also induces orbital magnetic moments [48]. We find sizable orbital moments of 0.14, 0.02, \(-\)0.04, \(-\)0.13, \(-\)0.44, and \(-\)0.45 for Ti, V, Cr, Mn, Fe and Co impurities, respectively. This trend follows the typical behavior for TM impurities in host materials with low electron density. The sign change from positive to negative values, which means a change from parallel to antiparallel orientation of spin and orbital magnetic moments, is consistent with Hund's third rule. These chemical trends are well known from transition metal impurities in other metallic host crystals [26; 49; 50].
In the superconducting state, the presence of a magnetic exchange field generally leads to breaking of singlet Cooper pairs [51], as illustrated in Fig. 3(a). Cooper pairs in conventional \(s\)-wave superconductors form from a coherent superposition of two electrons of opposite spin. In the presence of a magnetic field, the exchange field of a magnetic impurity introduces different scattering potentials for the two electrons of the Cooper pair which results in a breaking of the coherent superposition. As a consequence, this leads to pairs of in-gap states within the superconducting gap of the host material known as Yu-Shiba-Rusinov (YSR) states [27; 28; 29]. Here we calculate YSR states of magnetic TM impurities embedded in bulk Pb. For an Fe impurity the YSR states are shown in Fig. 3(b,c). Without SOC, two pronounced YSR states appear in the spin-integrated DOS at positive energies. In a simple model that assumes isotropic scattering, the energy of YSR states is given as [21; 52]
\[\varepsilon=\pm\Delta_{0}\frac{1-\alpha^{2}+\beta^{2}}{\sqrt{(1-\alpha^{2}+ \beta^{2})^{2}+4\alpha^{2}}}, \tag{11}\]
where \(\alpha=\pi N_{0}JS\) is the pair-breaking part of the scattering potential (\(N_{0}\) is the superconductor's normal DOS at \(E_{\mathrm{F}}\) and \(JS\) is the exchange energy a classical spin \(S\) experiences) and \(\beta=\pi N_{0}V\) is related to the non-magnetic part of the impurity's scattering potential \(V\). From Eq. (11) it is obvious that YSR states come in pairs (leading \(\pm\) on the r.h.s.),
Figure 2: Normal state electronic structure of bulk Pb and embedded transition metal impurities. **(a)** Normal state density of states (DOS) of bulk Pb calculated with and without SOC. The inset shows the fcc crystal structure of Pb. **(b)** Impurity DOS of the magnetic \(3d\) impurities. Full and broken lines indicate calculation results without and with SOC, respectively. SOC-induced splittings are highlighted by the black arrows for the Fe majority DOS. The inset in (b) shows the magnetic moment of the \(3d\) impurities embedded as substitutional defects into Pb where the full blue (dashed orange) line refers to calculation results neglecting (including) SOC.
which are symmetric with respect to flipping the electron's spin. Their intensities are however generally not identical due to the difference in the impurity DOS between minority and majority spin channels. In our spin-integrated DOS shown in Fig. 3(b), the majority spin-channel produces YSR states that are almost two orders of magnitude smaller than the minority spin channel. The splitting into two peaks can be attributed to the crystal field splitting the Fe impurity experiences in the fcc matrix of the Pb host crystal. This is understood from the different exchange field an electron coupling to the \(e_{g}\) or \(t_{2g}\) manifold of \(d\)-states experiences [21]. The crystal field splitting enters Eq. (11) via different values for \(JS\) and thus different values of \(\alpha\) for the \(e_{g}\) and \(t_{2g}\) states.
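Equation (11) can be evaluated directly; the short sketch below scans the pair-breaking strength \(\alpha\) at fixed \(\beta\) with illustrative values (not fitted to our Pb calculations) and shows how the pair of bound states moves through the gap, crossing zero energy at \(\alpha^{2}=1+\beta^{2}\).

```python
import numpy as np

def ysr_energies(alpha, beta, delta0=1.4e-3):
    """Pair of YSR bound-state energies from Eq. (11); delta0 ~ 1.4 meV as for Pb."""
    num = 1.0 - alpha**2 + beta**2
    eps = delta0 * num / np.sqrt(num**2 + 4.0 * alpha**2)
    return +eps, -eps

for alpha in (0.2, 0.6, 1.0, 1.4):
    e_plus, e_minus = ysr_energies(alpha, beta=0.3)
    print(f"alpha={alpha:.1f}:  {e_plus*1e3:+.3f} / {e_minus*1e3:+.3f} meV")
```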
The SOC-induced splitting of the majority states of Fe also leads to a changing energy difference between \(e_{g}\) and \(t_{2g}\) states. We attribute the appearance of a YSR state closer to \(E=E_{\rm F}\), when SOC is included (_cf._ Fig. 3(c)), to this fact. In addition to this global shift, we also find that the \(e_{g}\) and \(t_{2g}\) manifolds further split, leading to the complex multi-peak structure in the YSR spectrum of Fe. A comparison of the YSR states of different magnetic TM impurities is shown in Fig. 4. The differences in the YSR spectra demonstrate the chemical trends for changing magnetic impurity. From the impurity DOS calculation where SOC is neglected, shown in Fig. 4(a), we deduce that the size of the crystal field splitting depends on the impurity type. We find the largest values for V and Fe impurities and smaller splittings for Co, Ti, Mn and Cr (ordered by the size of the splitting from highest to lowest). The filling of the impurity's \(d\)-shell, that is related to its magnetic moment, also strongly influences where the YSR states reside inside the host's superconducting gap. Whereas we find that the dominant YSR peaks arise at positive energies for Ti, Mn and Fe, we find the opposite behavior for V, Cr and Co. Furthermore, for Cr and Mn the YSR states stick to the edge of the gap but for V, Fe and Co we find them closer to the center of the gap. The SOC-induced splitting on top of the crystal field splitting of YSR states can furthermore lead to a significant broadening of the YSR spectrum as seen in Fig. 4(b). This is particularly pronounced for Co where we see that the YSR spectrum fill almost the entire lower half (i.e. at negative energies) of the superconducting gap.
Previous calculations for bulk Pb, which however neglected the effect of SOC, agree well with our results without SOC [21]. Measurements by Ruby _et al._[30] of Mn impurities on Pb(110) confirm that the main spectroscopic signature is found close to the coherence peak of Pb. This was recently also found theoretically by Wu _et al._[22], where it was pointed out that the exact position of YSR peaks at the surface depends sensitively on relaxations of the distance of the impurity to the surface. However, in the bulk, where we place our impurities, the impurities experience a higher coordination number of neighboring host atoms which will generally reduce the effects of lattice relaxations. The chemical trends that we observe for different TM impurities in Pb are, furthermore, consistent with STM experiments done for different magnetic impurities on the \(s\)-wave superconductor Nb [53]. For magnetic impurities on Nb it was further discussed that the direction of the magnetic moment of Mn impurities on the (110) surface and the size of SOC have a strong influence on the location of YSR peaks within the superconducting gap [54]. We also expect these effects to be reduced for impurities in the highly symmetric bulk fcc lattice.
Figure 3: Yu-Shiba-Rusinov (YSR) states of Fe in Pb. **(a)** Illustration of the pair breaking of singlet Cooper pairs at magnetic defects. **(b,c)** Impurity DOS showing the YSR states of Fe without and with SOC included in the Pb host. The superconducting DOS of the Pb host crystal (scaled by a factor 20) is given as the grey background. The arrow in (b) highlights the gap anisotropy (visible as a shoulder due to a second coherence peak) that is more pronounced in Pb without SOC. The inset in (c) illustrates the crystal field and SOC induced splittings of the five \(d\) orbitals in the fcc matrix of Pb.
### Breakdown of superconductivity at finite impurity concentrations
In the dilute limit, impurity states are localized around isolated impurity atoms and YSR bound states appear as discussed in the previous section. With increasing concentration, however, these YSR states will overlap and impurity bands can gradually form that might have a profound effect on the electronic structure. Analogous to critical magnetic fields where superconductivity breaks down, at a critical concentration of magnetic impurities mixed into an \(s\)-wave superconductor, a breakdown of superconductivity is expected [51]. We study this effect within the CPA, where we simulate the superconducting state of Pb with increasing concentrations of magnetic defects. The results are summarized in Fig. 5. For this study we chose to neglect SOC effects in these calculations because we expect the disorder broadening to be more significant than the SOC-induced splitting seen for the YSR states of some TM impurities. The CPA impurity DOS is shown in Fig. 5(a,b) for impurity concentrations of \(1\,\mathrm{ppm}\) and \(100\,\mathrm{ppm}\) (\(0.01\%\)). Indeed, we find a strong broadening of the YSR states that prevents the clear identification of distinct \(e_{g}\) and \(t_{2g}\) states for concentrations as low as \(100\,\mathrm{ppm}\), which is well below the critical concentration where superconductivity breaks down. Overall we find that the CPA results of the impurity DOS agree well with the calculations based on ab-initio impurity embedding, _cf._ Fig. 4(a). We note that the formation of bonding/anti-bonding pairs when YSR states of close-by atoms overlap [55], can further shift the location of YSR states. This effect was, for example, discussed in pairs of Mn atoms on the Nb(110) surface [20] and it is also included in our results within the CPA. The disorder broadening, however, prevents the observation of distinct shifts.
Figure 5(c) shows the breakdown of superconductivity with increasing impurity concentrations for different magnetic TM impurities. This is shown in terms of the integrated anomalous density \(\chi\). In the weak coupling limit of the BCS theory [56], it is known that \(T_{c}\) is proportional to the superconducting gap \(\Delta\), which in turn is proportional to \(\chi\) in our formalism. At \(x=0\) we obtain a maximum \(\chi=\chi_{0}\), which vanishes for \(x\geq x_{c}\). It is far more practical to monitor the superconducting state through the anomalous density than through the superconducting gap, because the latter is blurred by the presence of in-gap YSR impurity states and because close to \(x_{c}\) we expect gapless superconductivity. In all cases, \(\chi\) decreases as a function of the concentration up to a critical point \(x_{c}\), after which it vanishes. As far as can be inferred from our numerical results, the curve \(\chi(x)\) is continuous. At \(x\lessapprox x_{c}\) it decays linearly and shows a discontinuity of the first derivative at \(x_{c}\). This behaviour is well-known from the Abrikosov-Gor'kov theory [57] and it is, for example, seen in conventional magnetically-doped superconductors [58].
We find large differences in the values of \(x_{c}\) for different magnetic impurities. These values, however, do not seem to correlate clearly with the values of the magnetic moments (_cf._ Fig. 2). In principle, one would expect higher impurity moments to suppress the superconducting state at lower concentrations, since the spin-polarised part of the potential, giving rise to the magnetic moment, is also the cause of Cooper-pair-breaking. In our calculations we find that \(x_{c}\) is low for Ti and V, then grows for Cr and Mn, where the magnetic moments are more sizeable, and then falls off again for Fe and Co. We suspect that this non-linear behaviour is related to the existence of \(d\) level resonances at the Fermi energy, which is captured in our DFT calculation. As shown in Fig. 2(b), Fe and Co have a minority-spin resonance peaked at \(E_{\mathrm{F}}\), while V has a majority-spin resonance peaked at \(E_{\mathrm{F}}\). These impurities exhibit a low critical concentration, between \(0.04\%\) and \(0.05\%\). For Mn and Cr, on the other hand, \(E_{\mathrm{F}}\) lies between the majority-spin and
Figure 4: YSR states of different magnetic \(3d\) transition metal impurities in bulk Pb. Spin-integrated impurity DOS showing YSR states calculated **(a)** without SOC and **(b)** including SOC. The grey-shaded background shows the host’s DOS (scaled up by a factor 20) where the superconducting gap in the host crystal is visible.
minority-spin resonances. Between the two, the Mn minority-spin peak is slightly closer to \(E_{\rm F}\) than the Cr majority-spin peak. This is reflected in the values of \(x_{c}=0.091\%\) for Mn and \(0.115\%\) for Cr. Moreover, we find a correlation between the critical concentration and location of the YSR peaks within the superconducting gap of the host. For Mn and Cr, the YSR states stick to the edge of the superconducting gap whereas we find that they populate the central region for Fe, V and Co, as seen in Figs. 4(a) and 5(a,b).
## IV Discussion and perspective
Interfacing superconducting and magnetic materials forms the basis for the emergent field of superconducting spintronics [59; 3; 4; 5]. While conventional \(s\)-wave superconductivity breaks down at critical magnetic fields, proximity between magnetic and superconducting materials provides a fruitful basis for unconventional superconductivity. In material systems where the macroscopically coherent superconducting states coexist with microscopic exchange interaction at magnetic atoms or layers, for example, triplet superconductivity, odd-frequency pairing, long-range equal-spin supercurrents or Majorana qubits can be engineered [3; 4]. Devices that combine superconducting and magnetic materials allow to realize unconventional transport with superconducting spin valves or unconventional Josephson junctions consisting of superconductor ferromagnet (S/F) interfaces and S/F/S heterostructures [5].
One example is materials that provide large interfacial SOC in superconducting junctions, which allows to realize devices that show a large magnetoresistance effect [60]. Interesting effects arise not only at isolated magnetic defects in superconductors, as discussed in this work in Sec. III, but in a variety of possible S/F geometries, _cf._ Fig. 6. For example, chains of magnetic atoms in proximity to a superconductor can, together with SOC, lead to the formation of Majorana zero modes at the end of the chains [1; 2]. With distributed magnetic doping or entire magnetic layers, S/F interfaces and S/F/S heterostructures, illustrated in Fig. 6(b,c), can be engineered. Finally, non-uniform magnetic order can have a remarkable effect in S/F hybrid structures. For instance, the existence of domain walls in magnetic layers can stabilize superconductivity [61]. Moreover, topological superconductivity together with spin-triplet correlations is proposed to exist in skyrmion lattices in contact with superconductors [62].
In F/S and S/F/S heterostructures, the proximity effect will in general depend on material properties such as the crystal symmetries and the orbital character of the electronic structure of the constituents. This calls for a microscopic modelling of the interfaces that includes all these effects. The KS-BdG method is ideally suited for this task. It naturally accounts for crystal symmetries and has chemical accuracy, taking into account the different orbital character of a material's electrons. Different geometries, from single impurities over nanoclusters to interfaces and heterostructures, including SOC and non-collinear magnetic textures, are routinely described with DFT methods. The example of TM impurities embedded
Figure 5: Random alloying of magnetic impurities into superconducting Pb, computed with the CPA method. **(a,b)** Broadening of the YSR peaks for different magnetic impurities at concentrations of 1 ppm and 100 ppm. **(c)** Breakdown of superconductivity with increasing concentration of magnetic defects embedded into Pb. Shown is the anomalous density, which is the order parameter for superconductivity in the KS-BdG method.
into the \(s\)-wave superconductor Pb, which we discussed in Sec. III, demonstrates the power of the KS-BdG approach. Since this method is based on DFT, multi-band effects are automatically included and material-specific predictions are possible, even at the sub-meV energy scale necessary in the field of superconductivity. This is illustrated by the splitting of the manifold of YSR states that is derived from a combination of crystal field and SOC splittings of \(d\)-states. The chemical trends uncovered in the comparison of different transition metal impurities can have a profound effect on the details of the YSR in-gap states. This results in vastly different YSR peak positions and different critical concentrations, where superconductivity breaks down, for different transition metal impurities.
Applying the KS-BdG method to diverse material classes where superconductivity, SOC, and magnetism can coexist offers enticing research opportunities. For example, layered van der Waals (vdW) materials offer rich physical properties that can be combined easily in heterostructures [63]. A prominent vdW superconductor is NbSe\({}_{2}\)[64], where, for instance, YSR states around magnetic Fe impurities are found to extend over several nm [65]. NbSe\({}_{2}\) also shows signatures of equal-spin triplet pairs under an applied magnetic field, which are of great interest for superconducting spintronics and topological superconductivity [66]. Furthermore, a field-free version of a Josephson diode was recently realized in a heterostructure of NbSe\({}_{2}\) with Nb\({}_{3}\)Br\({}_{8}\)[67]. This accomplishes the superconducting diode effect with a built-in magnetic field, that was originally found in Nb/Ta/V multilayers with an applied _external_ field [68].
With regard to the field of superconducting spintronics harnessing and extending the novel capabilities of the KS-BdG method shows great promise. For instance, it is possible to predict unconventional pairing such as inter-orbital Cooper pairs that have singlet-triplet mixing [69; 18]. Signatures of triplet pairing, which is a hallmark for unconventional superconductivity, are also accessible in the KS-BdG method. The output anomalous density matrix \(\chi\) (introduced in Sec. II) in general has both singlet and triplet components. During the self-consistent solution of the KS-BdG equation, in addition to (or even instead of) the singlet component alone, a generalized set of coupling constants can be included. This was, for example, done to study triplet pairing in the unconventional superconductor LaNiGa\({}_{2}\)[34]. Investigating triplet order in the superconductivity of S/F interfaces and S/F/S junctions from the KS-BdG method will be of interest for the materials optimization challenges in superconducting spintronics. Moreover, being able to predict transport properties from microscopic band structure simulations of the KS-BdG calculations would also be of great interest. This would allow to connect the microscopic KS-BdG picture to the realm of mesoscopic physics where, for instance, transport characteristics of Josephson junctions are studied. A possible avenue towards that goal might be the pursuit of DFT-based simulations of the supercurrent, that was recently done in the ballistic limit of magnetic Josephson junctions consisting of Nb/Ni/Nb [70].
In summary, the KS-BdG method offers unique insights into the electronic structure, proximity effects and superconducting order based on the microscopic atomic structure. Applying this methodology to S/F/S-type heterostructures is expected to give valuable insights in the future, tackling challenging materials problem on the way towards applications in superconducting spintronics and quantum information technology.
Figure 6: Possible geometries where superconductivity and magnetism come together. Layers of atoms are illustrated by filled rectangles where the superconductor is coloured in blue and (ferro)-magnetic layers are coloured in orange. **(a)** Chain of magnetic atoms on a superconducting surface where Majorana zero modes may appear at the ends of the chain. **(b,c)** Interfaces between superconductors (S) and ferromagnets (F) in form of a S/F/S junction or S/F interface. **(c)** Non-collinear magnetic layer in contact to a superconductor. The magnetic ordering of the magnetic layers is (c,d) is illustrated by the direction of the spins in the chain above the surface.
###### Acknowledgements.
We thank the Bavarian Ministry of Economic Affairs, Regional Development and Energy for financial support within the High-Tech Agenda Project "Bausteine fur das Quantencomputing auf Basis topologischer Materialien mit experimentellen und theoretischen Ansatzen". Furthermore, this work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - Cluster of Excellence "Matter and Light for Quantum Computing" (ML4Q) EXC 2004/1 - 390534769. Part of this work has been performed under the Project HPC-EUROPA3 (INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action under the H2020 Programme. We are also grateful for computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer CLAIX at RWTH Aachen University (project "jara0191") and of the supercomputer JURECA [71] at Forschungszentrum Julich (project "superint"), and this work was supported by computational time granted from the Greek Research & Technology Network (GR-NET) in the National HPC facility--ARIS--under projects ID 2DMATFUN and TopMagX3.
|
2301.04381 | Determinate Node Selection for Semi-supervised Classification Oriented
Graph Convolutional Networks | Graph Convolutional Networks (GCNs) have been proved successful in the field
of semi-supervised node classification by extracting structural information
from graph data. However, the random selection of labeled nodes used by GCNs
may lead to unstable generalization performance of GCNs. In this paper, we
propose an efficient method for the deterministic selection of labeled nodes:
the Determinate Node Selection (DNS) algorithm. The DNS algorithm identifies
two categories of representative nodes in the graph: typical nodes and
divergent nodes. These labeled nodes are selected by exploring the structure of
the graph and determining the ability of the nodes to represent the
distribution of data within the graph. The DNS algorithm can be applied quite
simply on a wide range of semi-supervised graph neural network models for node
classification tasks. Through extensive experimentation, we have demonstrated
that the incorporation of the DNS algorithm leads to a remarkable improvement
in the average accuracy of the model and a significant decrease in the standard
deviation, as compared to the original method. | Yao Xiao, Ji Xu, Jing Yang, Shaobo Li | 2023-01-11T10:02:14Z | http://arxiv.org/abs/2301.04381v1 | # Determinate Node Selection for Semi-supervised Classification Oriented Graph Convolutional Networks
###### Abstract
Graph Convolutional Networks (GCNs) have been proved successful in the field of semi-supervised node classification by extracting structural information from graph data. However, the random selection of labeled nodes used by GCNs may lead to unstable generalization performance of GCNs. In this paper, we propose an efficient method for the deterministic selection of labeled nodes: the Determinate Node Selection (DNS) algorithm. The DNS algorithm identifies two categories of representative nodes in the graph: typical nodes and divergent nodes. These labeled nodes are selected by exploring the structure of the graph and determining the ability of the nodes to represent the distribution of data within the graph. The DNS algorithm can be applied quite simply on a wide range of semi-supervised graph neural network models for node classification tasks. Through extensive experimentation, we have demonstrated that the incorporation of the DNS algorithm leads to a remarkable improvement in the average accuracy of the model and a significant decrease in the standard deviation, as compared to the original method.
## 1 Introduction
Since graphs are effective in representing real-world objects with interactive relationships and interdependencies, recently, many studies on extending deep learning approaches for graph data have emerged [23]. Graph neural networks (GNNs), which are capable of processing graph-structured data, have gained significant attention and have been widely applied in the modeling and solution of various problems across multiple fields, including social networks [13, 14], recommender systems [22], knowledge graphs [15], and graph matching [20, 21].
Graph neural networks (GNNs) can be utilized for a variety of graph analytics tasks, including node classification [13], graph classification [20], graph generation [24], and link prediction [20]. In the context of node classification, classical methods such as Graph Convolutional Networks (GCNs) [13], GraphSAGE [14], and Graph attention network (GAT) [21] infer the labels of unlabeled nodes in the graph by using a small number of labeled nodes for semi-supervised learning through the feature and structural information of the graph. GCNs, in particular, have been shown to be effective in extracting structural information from graph through the use of graph convolution operations to learn node embeddings for semi-supervised classification.
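As a reminder of what such a graph convolution does, the sketch below implements the renormalised propagation rule of Kipf and Welling's GCN, \(H^{\prime}=\sigma(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}HW)\), on a toy graph; the adjacency matrix, features and weights are made-up values.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# toy graph: 4 nodes on a path, 3-dimensional features, 2 output channels
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)                          # (4, 2)
```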
However, GCNs still have some limitations. One such limitation is the issue of over-smoothing, which can occur when too many GCN layers are stacked. This can cause the features of nodes to become indistinguishable, which can negatively impact classification accuracy. Additionally, shallow GCNs may struggle to effectively propagate label information throughout the entire graph when there is a limited number of labeled nodes available [11]. Besides, GCNs require additional validation sets for hyperparameter optimization and model selection, which is incompatible with the purpose of semi-supervised learning. To address these limitations, a common approach is to use self-training to expand the label set using pseudo-labels based on confident predictions [11, 20]. This can help to improve generalization performance with limited labeled nodes and eliminate the need for an extra validation set. Nevertheless, these approaches still randomly select the initial label set, which can lead to unstable generalization performance.
It should be noted that each node in a graph has the potential to represent the data distribution differently, due to the node's adjacency to other nodes in the graph. Some nodes may be more typical (i.e., have a high degree) while some nodes may be more divergent (i.e., have a low degree). To the best of our knowledge, current approaches for semi-supervised node classification using graph convolutional networks tend to simply use a random partitioning method to select the label set and the test set, without considering the typicality or divergence of the nodes in the graph. The shallow structure of GCNs leads to inefficient label information propagation, causing a decrease in the generalization ability of the model when trained with non-representative nodes. As
a result, the accuracy of current methods tends to fluctuate during multiple trainings.
In this paper, we propose a graph-based deterministic node selection algorithm: Determinate Node Selection (DNS), which selects representative nodes in the graph as the label set by using the structural information of the graph. The DNS algorithm is inspired by DeLaLA [20], which has been extensively tested and shown to be effective on Euclidean data. Unlike self-training methods that rely on pseudo-labels with high confidence, our DNS algorithm is non-iterative and determines the label set using only the node feature and structural information of the graph, without requiring any model training. The comparison in Figure 1 illustrates the difference between the nodes selected by the DNS algorithm and those selected by high confidence on a subgraph of two classes contained in Zachary's karate club dataset [14]. Through extensive experimentation, we demonstrate that using DNS to select the label set for GCNs leads to significant improvements in generalization performance. The main contributions of this paper are as follows:
* We probe the difference in the ability of the nodes in a graph to represent the data distribution and point out that to maintain good generalization performance, it is necessary for GCNs and other semi-supervised GNNs to be trained on representative nodes.
* We propose a deterministic graph node selection algorithm called DNS to replace the random split method in the original GCNs. This one-shot method does not require iterative training and allows for the predetermined selection of a label set.
* The DNS algorithm has the versatility to be applied as a general method to arbitrary semi-supervised node classification GNNs due to its ability to determine label sets using only the node features and structural information of the graph. Our experiments demonstrate that the use of label set selected by DNS algorithm can improve model performance.
## 2 Related Works
### Graph-Based Semi-Supervised Learning
Graph-Based Semi-Supervised Learning (GSSL) is a subfield of semi-supervised learning in which samples in a dataset are represented as nodes in an affinity graph, and the labeling information of unlabeled samples is inferred based on the structure of the affinity graph. The anchor graph-based GSSL methods [13, 14] use the K-Means algorithm to generate anchor points on the dataset and construct an affinity graph using the similarity between data and anchor points. Optimal leading forest (OLF) [20] is a structure that reveals the evolution of difference along paths within sub-trees, and the DeLaLA algorithm [20] which utilizes OLF for label propagation has achieved state-of-the-art performance on multiple datasets. However, GSSL methods are primarily designed for use with data in the Euclidean space and have not involved the study of graph data in the non-Euclidean domains.
### Convolutional Graph Neural Networks
Convolutional Graph Neural Networks (ConvGNNs) extend the convolutional operation from grid data to graph data, generating node representations by aggregating features from both the nodes themselves and their neighbors, and stack multiple graph convolutional layers to extract high-level node representations. ConvGNNs can be divided into two categories: Spatial-Based ConvGNNs and Spectral-Based ConvGNNs. Spatial-Based ConvGNNs [15, 21] define graph convolution based on the spatial relationship between nodes and propagate information between nodes through their edges. Spectral-based approaches [17, 18] perform graph convolution operations on the spectral domain of the graph.
GCNs [17] have simplified the spectral decomposition and bridged the gap between spectral-based and spatial-based approaches. However, GCNs suffer from the over-smoothing, which makes GCNs unsuitable for stacking over multiple layers. Additionally, GCNs are not able to effectively propagate label information to the entire graph when there are only a few labeled nodes. These limitations may impact the performance of GCNs in certain scenarios.
Figure 1: Nodes selected by different methods on Zachary’s karate club. The DNS algorithm can select the typical nodes and divergent nodes based on the structural information of the graph, while the nodes selected by confidence are hardly representative of the whole subgraph.
### Pseudo-label Methods on GCNs
[11] identified the strengths and weaknesses of GCNs, and proposed two pseudo-label methods: Co-Training and Self-Training, as a means of expanding the label set to improve the generalization performance of GCNs in the scenario of the graph with few labeled nodes. Co-Training utilizes a random walk model to add the neighbors of labeled nodes to the label set for training GCNs. Self-Training, on the other hand, starts with the initial label set to train GCNs and then uses the classification predictions of GCNs to select the nodes with the highest confidence in each class and add them to the label set for further training. [20] proposed an improvement to the Self-Training method outlined in [11] by introducing the multi-stage training framework and the alignment mechanism to enhance the accuracy of pseudo-labels.
While the above pseudo-label methods greatly improve the classification accuracy of GCNs at low label rates, the improvement they provide over the original GCNs tends to shrink as the label rate increases. Pseudo-label methods also risk introducing noisy labels into the label set, which can negatively impact the training of GCNs. In addition, these methods rely on a random split to select the initial label set; for graphs with few labeled nodes, the quality of the obtained pseudo-labels may be compromised if the labeled nodes are not representative, which subsequently affects the performance of the model.
## 3 Determinate Node Selection on GCNs
We found that the accuracy of GCNs can vary significantly depending on the randomly split label set, which motivated us to address this issue. GCNs utilize the features of unlabeled nodes in a graph for learning, but the update of gradients relies on the labeled nodes. If the labeled nodes are not representative, the model is unable to effectively propagate labeling information to the entire graph, leading to a significant decrease in generalization performance. Inspired by DeLaLA [22], we design a new deterministic node selection algorithm called DNS to replace the random split method and improve the stability of GCNs. The DNS algorithm allows the classification accuracy of GCNs to converge across multiple trainings. The process of the DNS algorithm is shown in Figure 2.
### Graph-based Optimal Leading Forest
The concepts of "multimodal" and "mixmodal" were proposed in [23]. In simple terms, multimodal refers to the presence of multiple clusters within a single class, while mixmodal refers to the overlap of samples between different classes within a single cluster. The randomly selected labeled nodes may only belong to one cluster within a class and not represent all nodes within that class. Additionally, due to the mixmodal nature of the data, it may be difficult to distinguish labeled nodes from nodes in other classes.
To mitigate the negative effects of mixmodal and multimodal on model training, we need to design an algorithm for selecting representative nodes from the graph. These nodes can be classified into two categories: typical nodes, which are located at the center of clusters within a class, and divergent nodes, which are found at the edges of clusters within a class. Our goal is to adaptively select a representative set of nodes that includes both typical and divergent nodes.
The Optimal leading forest (OLF) is derived from the Density Peaks Clustering Algorithm [15]. OLF is a non-iterative algorithm and is designed to represent the biased order relationship between samples by their local density, while also capturing the evolution from the central location to the edges of the samples [21]. According to the characteristics of graph data, we developed the Graph-based Optimal Leading Forest (GOLF). GOLF uses the feature of the nodes in the graph and the structure of the graph to construct a leading tree, from which it identifies the centers of clusters and disconnects them from their parent nodes to form a leading forest. GOLF is constructed as follows.
To extract meaningful information from the graph, we apply graph convolution operations to aggregate the features of each node and its neighbors, embedding the structural information of the graph into the resulting node features. These node features serve as the input for GOLF, denoted as F:
\[F=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}X, \tag{1}\]
where \(X\) denotes the node feature matrix, \(\tilde{A}=A+I\), \(\tilde{D}\) is the degree matrix of \(\tilde{A}\), and \(A\) is the adjacency matrix of the graph.
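For concreteness, a minimal NumPy sketch of Eq. (1) is given below; the function name and the toy graph are illustrative only and are not part of our released code.

```python
import numpy as np

def propagate_features(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """Compute F = D^{-1/2} (A + I) D^{-1/2} X, i.e. Eq. (1)."""
    a_tilde = adj + np.eye(adj.shape[0])
    deg = a_tilde.sum(axis=1)                  # degrees of A + I (always >= 1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    # Symmetric normalization embeds one-hop structural information into the features.
    return (d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]) @ feats

# Toy 4-node path graph with 2-dimensional features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
F = propagate_features(A, X)
```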
\(F\) contains information about the density of nodes in the graph: if a node has more neighbors, or if the features of nearby nodes contain more information, its aggregation will be more informative. As a result, the local density of nodes in the graph can be calculated from \(F\), which can be used to measure the typicality of nodes within the graph.
The local density is calculated as follows:
\[\rho_{i}=\exp(\frac{-\|F_{i}\|_{2}^{2}}{\sigma^{2}})\quad(\text{for}\ 1\leq i\leq n), \tag{2}\]
Figure 2: Flowchart of DNS algorithm.
where \(n\) is the number of nodes, \(F_{i}\) is the feature of node \(i\) and \(\sigma\) is a bandwidth parameter.
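A small sketch of Eq. (2) follows, assuming \(F\) has already been aggregated as in Eq. (1); the bandwidth \(\sigma\) is a user-chosen hyperparameter.

```python
import numpy as np

def local_density(F: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """rho_i = exp(-||F_i||_2^2 / sigma^2), i.e. Eq. (2), for every node i."""
    return np.exp(-np.sum(F * F, axis=1) / sigma**2)

# Example with an aggregated feature matrix F from Eq. (1).
F = np.array([[0.5, 0.1], [1.2, -0.3], [0.0, 0.2]])
rho = local_density(F, sigma=1.0)
```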
In the leading tree, each node is connected to a leading node (the parent in the leading tree) with a higher local density, except for the node with the highest local density. We use the graph adjacency and local density to determine the leading node \(Pa_{i}\) for each node \(i\) in the leading tree:
\[Pa_{i}=\begin{cases}-1,&\text{if }i=\operatorname*{arg\,max}(\rho)\\ \operatorname*{arg\,max}(\rho_{j})&(\text{for }j\in\mathcal{N}(i)),&\text{ otherwise},\end{cases} \tag{3}\]
where \(\mathcal{N}(i)\) denotes the neighbors of node \(i\). The leading relationships reflect the biased order relationship between nodes in the graph. Specifically, \(Pa_{i}\), the leading node of node \(i\) is closer to the center of the graph than the node \(i\). Since the input feature F contains the structural information of the graph, it makes the local density a representation of the centrality of nodes in the graph. Therefore, we use the local density to denote the distance of node \(i\) from its leading node \(Pa_{i}\):
\[\delta_{i}=\begin{cases}\min(\rho_{j})&(\text{for }j\in\mathcal{N}(i)),&\text{if }i= \operatorname*{arg\,max}(\rho)\\ \rho_{Pa_{i}},&\text{otherwise}.\end{cases} \tag{4}\]
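The following sketch implements Eqs. (3)-(4) on an adjacency-list representation; it assumes the graph is connected so every node has at least one neighbor, and ties in the arg max are broken by node index.

```python
import numpy as np

def leading_nodes(rho: np.ndarray, neighbors: list[list[int]]):
    """Return (Pa, delta) from Eqs. (3)-(4)."""
    root = int(np.argmax(rho))
    pa = np.empty(len(rho), dtype=int)
    delta = np.empty(len(rho), dtype=float)
    for i, nbrs in enumerate(neighbors):
        if i == root:
            pa[i] = -1                                  # the densest node has no leader
            delta[i] = min(rho[j] for j in nbrs)        # Eq. (4), first case
        else:
            pa[i] = max(nbrs, key=lambda j: rho[j])     # densest neighbor leads i, Eq. (3)
            delta[i] = rho[pa[i]]                       # Eq. (4), second case
    return pa, delta

# Example on a 4-node path graph.
rho = np.array([0.9, 0.7, 0.8, 0.4])
neighbors = [[1], [0, 2], [1, 3], [2]]
Pa, delta = leading_nodes(rho, neighbors)
```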
This process results in a leading tree that includes all the nodes in the graph. To construct a GOLF, it is necessary to identify the centers of clusters in the leading tree. This centrality can be formulated as follows:
\[\gamma_{i}=\rho_{i}*\delta_{i}, \tag{5}\]
where \(\gamma_{i}\) denotes the probability that node \(i\) will be chosen as the center of a class cluster. By disconnecting the node with the highest \(\gamma\) value from its leading node, the GOLF can be constructed. This process also allows us to determine the depth \(layer_{i}\) of each node \(i\) within the GOLF.
The path of each tree in the GOLF follows the leading relationships, representing the evolution of the nodes in the graph from the edge towards the center. It is worth noting that the construction of GOLF does not require iterative optimization, making it an efficient algorithm.
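A minimal sketch of the GOLF construction is given below, assuming \(\rho\), \(Pa\), and \(\delta\) from Eqs. (2)-(4); the number of trees to cut is treated here as a user-chosen parameter, and the leading relations are assumed to form a forest (no cycles), as guaranteed when each leader has a higher local density than the nodes it leads.

```python
import numpy as np

def build_golf(rho, delta, pa, n_trees: int):
    """gamma = rho * delta (Eq. 5); cut the top-gamma nodes from their leaders
    and return (gamma, layer), where layer_i is the depth of node i in its tree."""
    gamma = np.asarray(rho) * np.asarray(delta)
    parent = np.array(pa, dtype=int).copy()
    parent[np.argsort(-gamma)[:n_trees]] = -1           # disconnect cluster centers
    layer = np.zeros(len(gamma), dtype=int)
    for i in range(len(gamma)):
        j, depth = i, 0
        while parent[j] != -1:                           # walk up to the tree root
            j, depth = parent[j], depth + 1
        layer[i] = depth
    return gamma, layer

# Continuing the toy example from Eqs. (3)-(4).
rho = np.array([0.9, 0.7, 0.8, 0.4])
delta = np.array([0.7, 0.9, 0.7, 0.8])
pa = np.array([-1, 0, 1, 2])
gamma, layer = build_golf(rho, delta, pa, n_trees=2)
```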
### Deterministic Labeling Objective Function
[20] proposed an efficient Deterministic Labeling method using OLF to identify the key samples in the data and to depict the underlying structure of the data as completely as possible using a small number of samples. This method utilizes the OLF structure, so that the selected set of labeled samples contains both typical and divergent samples, and has a complexity of \(O(n\log n)\). We believe this method has the potential to be applied to graph data, and in this section, we provide a brief overview of the method.
The Deterministic Labeling method utilizes a discrete optimization objective function to obtain the label set. Let \(\mathcal{L}\) denote the label set, and \(\mathcal{L}_{c}\) denote the labeled samples of class \(c\). \(\mathcal{L}\) can be divided into two parts, \(\mathcal{L}_{Typ}\) and \(\mathcal{L}_{Div}\), which denote the typical and divergent samples, respectively. \(\mathcal{L}\) can be obtained by solving the following objective function:
\[\begin{split}\min J(\mathcal{L})=&\alpha\sum_{x_{i} \in\mathcal{L}_{Typ}}q(\gamma_{i})+(1-\alpha)\sum_{x_{j}\in\mathcal{L}_{Div}} \frac{\rho_{j}}{layer_{j}},\\ s.t.&|\mathcal{L}|=l,\\ &|\mathcal{L}_{c}|\geq k,\end{split} \tag{6}\]
where the weight parameter \(\alpha\) is used to adjust the emphasis on typical or divergent samples, \(|\bullet|\) denotes the number of samples in the set. \(l=|\mathcal{L}|\), \(k\) is the minimum number of samples selected for each class, and \(q(\gamma_{i})=\frac{1}{\log(\gamma_{i})}\).
Since \(q(\gamma_{i})\) is a decreasing function, it is obvious that the objective function tends to select samples with larger \(\gamma\), which usually corresponds to the most typical pattern of a class or a mode within a class. Minimizing the first term in the objective function, therefore, results in the selection of samples with high typicality. Minimizing the second term in the objective function prefers samples with deeper layers and smaller local densities in the leading tree. Since the leading tree represents the biased order relationship between samples, the samples selected in this way are the most divergent samples within a class. The Deterministic Labeling method can be applied to GOLF, as it relies solely on the construction of OLF. Details of the Deterministic Labeling method can be viewed in [20] or in our code.
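As a rough illustration only, the greedy sketch below selects \(\alpha l\) typical nodes (largest \(\gamma\)) and \((1-\alpha)l\) divergent nodes (small \(\rho\), deep \(layer\)); the per-class constraint \(|\mathcal{L}_{c}|\geq k\) of Eq. (6) is omitted because it requires label information, so this is not the full Deterministic Labeling procedure.

```python
import numpy as np

def select_label_set(gamma, rho, layer, l: int, alpha: float = 0.5):
    """Greedy surrogate for Eq. (6): alpha*l typical nodes (largest gamma) plus
    (1 - alpha)*l divergent nodes (small rho at deep layers)."""
    n_typ = int(round(alpha * l))
    typical = list(np.argsort(-gamma)[:n_typ])
    score = rho / np.maximum(layer, 1)                   # prefers deep, low-density nodes
    divergent = [int(i) for i in np.argsort(score) if i not in typical][: l - n_typ]
    return typical + divergent

# Toy usage with the gamma/rho/layer of the previous example, l = 2, alpha = 0.5.
gamma = np.array([0.63, 0.63, 0.56, 0.32])
rho = np.array([0.9, 0.7, 0.8, 0.4])
layer = np.array([0, 0, 1, 2])
labels = select_label_set(gamma, rho, layer, l=2, alpha=0.5)
```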
```
Input: Node feature matrix \(X\), adjacency matrix \(A\), node set \(N\), size of the labeled node set \(l\)
Output: Labeled node set \(L\)
// Step 1: Compute the input feature
1: \(F=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}X\)
// Step 2: Construct GOLF
2: \(\rho_{i}=\exp(-\|F_{i}\|_{2}^{2}/\sigma^{2})\) for each node \(i\)
3: for node \(i\) in \(N\) do
4:     compute \(Pa_{i}\) by Eq. (3)
5:     compute \(\delta_{i}\) by Eq. (4)
6: end for
7: \(\gamma_{i}=\rho_{i}\cdot\delta_{i}\) for each node \(i\)
8: compute \(layer_{i}\) using \(\gamma\) to construct GOLF
// Step 3: Select \(l\) nodes to label
9: \(L=Deterministic\_Labeling(\gamma,\rho,layer)\)
10: return \(L\)
```
**Algorithm 1** Determinate Node Selection Algorithm
### Determinate Node Selection Algorithm
In summary, the DNS algorithm is described in Algorithm 1. The label set obtained using the DNS algorithm can be used as an alternative to randomly selected labels for any semi-supervised node classification GNN, without requiring modifications to the original model. In the Experiments section, we report the improvement of the DNS algorithm on the generalization performance of the original model.
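A minimal usage sketch is given below, assuming the DNS output is simply a list of node indices; `np.arange(140)` stands in for that output, only the construction of the training mask changes, and the downstream GNN is untouched.

```python
import numpy as np

def build_masks(num_nodes: int, train_idx, test_size: int = 1000, seed: int = 0):
    """Boolean train/test masks: training nodes come from DNS, while the test set
    is still drawn at random from the remaining nodes, as in the original setup."""
    rng = np.random.default_rng(seed)
    train_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[np.asarray(train_idx)] = True
    pool = np.flatnonzero(~train_mask)
    test_mask = np.zeros(num_nodes, dtype=bool)
    test_mask[rng.choice(pool, size=test_size, replace=False)] = True
    return train_mask, test_mask

# e.g. a Cora-sized graph (2708 nodes) with 140 DNS-selected labels.
train_mask, test_mask = build_masks(2708, train_idx=np.arange(140))
# The masks are then passed to any semi-supervised GNN in place of a random split.
```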
### Complexity Analysis
The DNS algorithm consists of three main parts: computing the input features \(F\), constructing the GOLF, and selecting the labeled nodes. For a graph with \(e\) edges, \(n\) nodes, and \(d\) dimensional node features, the computational complexity of computing the input features F is \(O(ed)\). The complexity of constructing the GOLF is \(O(ne)\), which depends only on the number of nodes and edges in the graph. The process of selecting the labeled nodes involves sorting based on the \(\gamma\) of the nodes, with a complexity of \(O(n\log n)\). It can be seen that the complexity of the DNS algorithm is primarily determined by the size of the graph, specifically the number of nodes and edges.
## 4 Experiments
To confirm the effectiveness of our theoretical analysis and the DNS algorithm, we conducted experiments on three widely used graph datasets: Cora, Citeseer, and Pubmed [1]. The statistics of these datasets are summarized in Table 1.
As baselines, we compare label propagation using ParWalks (LP) [21]; Graph Convolutional Networks (GCNs) [13]; and two pseudo-label based methods: DI [11] (including Self-Training, Co-Training, Union and Intersection) and M3S [1].
### Experimental Setup
For LP, we employ the same hyper-parameters as in [11]. For GCNs, DI and M3S, we use the same hyper-parameters as in [13] and [1] on Cora, Citeseer, and Pubmed: a learning rate of
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{**Label Rate**} & 0.5\% & 1\% & 2\% & 3\% & 4\% \\ \hline \multirow{7}{*}{**DI**} & **LP** & 58.0 \(\pm\) 5.7 & 61.5 \(\pm\) 4.8 & 63.6 \(\pm\) 2.8 & 64.4 \(\pm\) 2.5 & 66.1 \(\pm\) 2.8 \\ & **GCN** & 50.6 \(\pm\) 7.9 & 58.7 \(\pm\) 7.8 & 70.0 \(\pm\) 6.4 & 75.8 \(\pm\) 3.4 & 76.6 \(\pm\) 2.5 \\ & Co-training & 53.9 \(\pm\) 5.4 & 59.9 \(\pm\) 3.7 & 71.5 \(\pm\) 3.3 & 75.1 \(\pm\) 2.8 & 75.7 \(\pm\) 2.3 \\ & **Self-training** & 57.1 \(\pm\) 7.6 & 63.1 \(\pm\) 8.7 & 71.7 \(\pm\) 4.4 & 76.8 \(\pm\) 1.9 & 77.5 \(\pm\) 1.2 \\ & **Union** & 55.5 \(\pm\) 5.4 & 62.8 \(\pm\) 9.2 & 72.2 \(\pm\) 3.4 & 76.9 \(\pm\) 2.8 & 77.7 \(\pm\) 1.6 \\ & **Intersection** & 50.2 \(\pm\) 8.3 & 63.2 \(\pm\) 6.0 & 70.6 \(\pm\) 4.4 & 74.7 \(\pm\) 2.4 & 76.0 \(\pm\) 2.0 \\ & **M3S** & 61.6 \(\pm\) 7.1 & 67.7 \(\pm\) 5.6 & 75.8 \(\pm\) 3.5 & 77.8 \(\pm\) 1.6 & 78.0 \(\pm\) 2.1 \\ \hline \multirow{7}{*}{**DI+DNS**} & **LP+DNS** & (+5.3) \(\pm\) (-4.5) & (+4.1) \(\pm\) (-3.5) & (+1.1) \(\pm\) (-1.7) & (+0.6) \(\pm\) (-1.0) & (+4.0) \(\pm\) (-1.9) \\ & **GCN+DNS** & (+10.0) \(\pm\) (-2.6) & (+8.3) \(\pm\) (-3.8) & (+6.2) \(\pm\) (-4.3) & (+3.4) \(\pm\) (-2.7) & (+4.5) \(\pm\) (-1.0) \\ & **Co-training** & (+0.8) \(\pm\) (-1.6) & (+5.9) \(\pm\) (-1.8) & (+1.7) \(\pm\) (-3.9) & (+1.7) \(\pm\) (-1.2) & (+4.9) \(\pm\) (-1.7) \\ & **Self-training** & (+0.6) \(\pm\) (-4.1) & (+2.3) \(\pm\) (-6.3) & (+1.7) \(\pm\) (-1.8) & (+0.8) \(\pm\) (-0.6) & (+2.1) \(\pm\) (-0.3) \\ & **Union** & (+2.3) \(\pm\) (-1.4) & (+1.7) \(\pm\) (-6.6) & (+2.4) \(\pm\) (-1.8) & (+1.0) \(\pm\) (-2.0) & (+3.7) \(\pm\) (-0.7) \\ & **Intersection** & (+1.7) \(\pm\) (-4.0) & (+2.6) \(\pm\) (-3.9) & (+1.1) \(\pm\) (-2.3) & (+0.7) \(\pm\) (-0.4) & (+3.2) \(\pm\) (-1.2) \\ & **M3S+DNS** & (+2.7) \(\pm\) (-3.4) & (+1.6) \(\pm\) (-2.5) & (+0.6) \(\pm\) (-0.7) & (+0.4) \(\pm\) (-0.4) & (+0.6) \(\pm\) (-0.8) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean classification accuracy and standard deviation (%) on Cora with a few labeled nodes.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{2}{c}{**Label Rate**} & 0.5\% & 1\% & 2\% & 3\% & 4\% \\ \hline \multirow{7}{*}{**DI**} & **LP** & 37.8 \(\pm\) 7.5 & 41.8 \(\pm\) 4.5 & 42.3 \(\pm\) 4.7 & 44.7 \(\pm\) 2.6 & 44.8 \(\pm\) 2.4 \\ & **GCN** & 45.1 \(\pm\) 8.4 & 54.7 \(\pm\) 4.6 & 61.6 \(\pm\) 4.3 & 67.1 \(\pm\) 2.8 & 69.0 \(\pm\) 2.7 \\ & **Co-training** & 44.1 \(\pm\) 9.8 & 51.1 \(\pm\) 8.8 & 58.9 \(\pm\) 7.0 & 64.7 \(\pm\) 2.7 & 65.6 \(\pm\) 2.0 \\ & **Self-training** & 51.6 \(\pm\) 6.3 & 57.7 \(\pm\) 4.6 & 64.8 \(\pm\) 2.2 & 67.9 \(\pm\) 1.6 & 68.8 \(\pm\) 2.1 \\ & **Union** & 48.7 \(\pm\) 5.8 & 55.1 \(\pm\) 4.1 & 61.8 \(\pm\) 5.3 & 66.6 \(\pm\) 2.2 & 67.1 \(\pm\) 2.2 \\ & **Intersection** & 51.8 \(\pm\) 8.7 & 61.1 \(\pm\) 4.5 & 63.0 \(\pm\) 2.7 & 69.5 \(\pm\) 1.3 & 70.1 \(\pm\) 2.0 \\ & **M3S** & 56.2 \(\pm\) 5.6 & 62.4 \(\pm\) 4.6 & 66.5 \(\pm\) 2.5 & 70.3 \(\pm\) 2.0 & 70.6 \(\pm\) 1.9 \\ \hline \multirow{7}{*}{**DI+DNS**} & **LP+DNS** & (+4.1) \(\pm\) (-5.9) & (+3.0) \(\pm\) (-3.5) & (+1.5) \(\pm\) (-3.4) & (+0.5) \(\pm\) (-1.1) & (+1.0) \(\pm\) (-1.0) \\ & **GCN+DNS** & (+7.0) \(\pm\) (-4.4) & (+7.3) \(\pm\) (-2.3) & (+2.5) \(\pm\) (-2.8) & (+0.9) \(\pm\) (-1.8) & (+1.2) \(\pm\) (-1.2) \\ \cline{1-1} & **Co-training** & (+5.6) \(\pm\) (-7.5) & (+5.4) \(\pm\) (-6.9) & (+0.5) \(\pm\) (-5.5) & (+1.6) \(\pm\) (-1.8) & (+1.5) \(\pm\) (-1.0) \\ \cline{1-1} & **Self-training** & (+4.2) \(\pm\) (-2.4) & (+1.0) \(\pm\) (-1.6) & (+0.6) \(\pm\) (-0.2) & (+0.6) \(\pm\) (-0.2) & (+1.4) \(\pm\) (-1.1) \\ \cline{1-1} & **Union** & (+2.2) \(\pm\) (-2.1)
0.01, 200 training epochs, 0.5 dropout rate, \(5\times 10^{-4}\)\(L_{2}\) regularization weight, and 16 hidden units. The number of layers on Cora and Citeseer is 4, 3, 3, 2, 2 and 3, 3, 3, 3, 2, respectively, for the 0.5%, 1%, 2%, 3%, and 4% label rates, and is fixed at 4 for the 0.03%, 0.05%, and 0.1% label rates on Pubmed. We do not use an additional validation set. The K Stage setting of M3S is the same as in [15] on Cora, Citeseer, and Pubmed.
All results are the mean accuracy and standard deviation of 10 runs. For the original models, we randomly split labels into a small set for training, and a set with 1000 nodes for testing in each run. For models with DNS Algorithm, we use the same label set derived from DNS Algorithm for training and randomly select a set with 1000 nodes for testing in each run. We use the same random seed for original models and models with DNS Algorithm for fair comparison.
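The reporting convention of Tables 2-4, a gain written as \((+\Delta)\pm(\Delta\sigma)\) relative to the baseline, can be reproduced with the small sketch below; the accuracies are hypothetical numbers used only to illustrate the computation over 10 paired runs.

```python
import numpy as np

def summarize(acc_original, acc_dns):
    """Mean and std of the baseline, and the (gain) and (std change) of the DNS
    variant, matching the reporting convention used in the result tables."""
    a, b = np.asarray(acc_original, float), np.asarray(acc_dns, float)
    return (a.mean(), a.std()), (b.mean() - a.mean(), b.std() - a.std())

# Hypothetical accuracies (%) over 10 paired runs (same seeds for both models).
orig = [50.1, 52.3, 48.7, 55.0, 49.9, 51.2, 47.8, 53.4, 50.6, 52.0]
dns = [60.4, 60.9, 59.8, 61.2, 60.1, 60.7, 59.5, 61.0, 60.3, 60.6]
(base_mean, base_std), (gain, std_change) = summarize(orig, dns)
print(f"original: {base_mean:.1f} ± {base_std:.1f}")
print(f"with DNS: ({gain:+.1f}) ± ({std_change:+.1f})")
```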
### Performance of DNS Algorithm
We report the performance improvement relative to the original method after using DNS Algorithm. The experimental results are shown in Tables 2, 3 and 4, with our method displayed in the lower portion of each table.
The results demonstrate that the classification accuracy of all methods improves and the standard deviation converges to a stable interval when using the DNS Algorithm instead of a randomly divided label set. This is because the DNS Algorithm considers the typicality and divergence of nodes in the graph, and selects the most representative nodes as the label set. In addition, the lower the label rate, the more obvious the performance improvement of the DNS Algorithm. This highlights the importance of selecting representative nodes for training and demonstrates that the DNS Algorithm is effective at identifying such nodes through exploration of the graph structure.
For pseudo-label methods DI and M3S, the gap between them and the original GCNs decreases as the labeling rate increases. We conducted experiments on the original GCNs and GCNs with the DNS Algorithm on Cora, Citeseer, and Pubmed at higher label rates, with the number of layers fixed at 2 and other parameters held constant. The results are shown in Figure 3. It can be seen that even at higher label rates, the
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Dataset** & **Nodes** & **Edges** & **Classes** & **Features** \\ \hline Cora & 2708 & 5429 & 7 & 1433 \\ Citeseer & 3327 & 4732 & 6 & 3703 \\ Pubmed & 19717 & 44338 & 3 & 500 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics
Figure 3: Mean classification accuracy on high label rate.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{**Label Rate**} & \multicolumn{1}{c}{0.03\%} & \multicolumn{1}{c}{0.05\%} & \multicolumn{1}{c}{0.1\%} \\ \hline \multirow{7}{*}{**DI**} & **LP** & 58.8 \(\pm\) 9.4 & 61.5 \(\pm\) 8.1 & 64.1 \(\pm\) 5.2 \\ & **GCN** & 51.6 \(\pm\) 7.5 & 58.0 \(\pm\) 5.6 & 67.7 \(\pm\) 6.2 \\ & **Co-training** & 56.2 \(\pm\) 10.9 & 61.6 \(\pm\) 5.7 & 68.0 \(\pm\) 6.4 \\ & **Self-training** & 57.1\(\pm\) 11.8 & 63.9 \(\pm\) 7.3 & 70.1 \(\pm\) 5.3 \\ & **Union** & 57.3 \(\pm\) 9.8 & 64.5 \(\pm\) 6.1 & 70.0 \(\pm\) 5.6 \\ & **Intersection** & 55.1 \(\pm\) 9.4 & 58.9 \(\pm\) 7.3 & 67.9 \(\pm\) 6.5 \\ & **M3S** & 60.0 \(\pm\) 8.3 & 64.4 \(\pm\) 10.9 & 70.6 \(\pm\) 4.1 \\ \hline \multirow{7}{*}{**DI+DNS**} & **LP+DNS** & (+4.9) \(\pm\) (-8.0) & (+5.0) \(\pm\) (-6.7) & (+3.6) \(\pm\) (-3.5) \\ & **GCN+DNS** & (+5.1) \(\pm\) (-5.0) & (+11.5) \(\pm\) (-3.6) & (+6.6) \(\pm\) (-4.0) \\ & **Co-training** & (+5.6) \(\pm\) (-7.5) & (+5.4) \(\pm\) (-3.7) & (+5.8) \(\pm\) (-3.5) \\ & **Self-training** & (+6.6) \(\pm\) (-6.7) & (+6.5) \(\pm\) (-1.7) & (+4.0) \(\pm\) (-3.2) \\ & **Union** & (+10.2) \(\pm\) (-4.5) & (+7.5) \(\pm\) (-1.0) & (+5.7) \(\pm\) (-3.9) \\ & **Intersection** & (+4.0) \(\pm\) (-3.9) & (+1.1) \(\pm\) (-3.3) & (+3.3) \(\pm\) (-2.5) \\
**M3S+DNS** & (+8.4) \(\pm\) (-3.2) & (+4.8) \(\pm\) (-6.8) & (+1.0) \(\pm\) (-2.6) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean classification accuracy and standard deviation (%) on Pubmed with a few labeled nodes.
use of the DNS Algorithm to select the label set still leads to an improvement in the generalization performance of the GCNs.
In Section 3.4, we analyzed the complexity of the DNS Algorithm. The time cost of the DNS Algorithm is less than the GCN training time for all three graph datasets. This demonstrates that our method is efficient.
### Parameter Sensitivity Analysis
We performed sensitivity analysis on four main parameters in the DNS algorithm, including the local density \(\rho\) for constructing GOLF, the number of trees in the forest \(n\), the number of typical nodes \(k\) selected per class, and the parameter \(\alpha\) for balancing the selection of typical and divergent nodes. We conducted experiments on the Cora dataset at a label rate of 4% and the results are shown in Figure 4. All four parameters affect the classification accuracy, with the DNS algorithm being particularly sensitive to \(k\) and \(\alpha\). Using incorrect values for \(k\) and \(\alpha\) can significantly decrease the classification accuracy, while for \(\rho\) and \(n\), the classification accuracy obtained for any value is generally higher than that of the original GCN. The experiments indicate that the DNS algorithm is a parameter-sensitive algorithm, and its parameters must be carefully tuned, especially \(k\) and \(\alpha\), to achieve optimal performance.
## 5 Conclusion
This paper presents the first theoretical analysis of label set selection for semi-supervised node classification GNNs. We identify the limitations of randomly splitting the label set and highlight the impact of node selection on the generalization performance of GCNs. To address these issues, we propose the DNS algorithm, which selects representative nodes in the graph by considering their typicality and divergence. We demonstrate the effectiveness of the DNS algorithm through experiments: it improves the generalization performance of the model at various label rates. As it only uses the graph structure for node selection and is not dependent on the specific model, it can be used as a generalized preprocessing method for other semi-supervised node classification GNNs. The DNS algorithm is efficient, but it exhibits parameter sensitivity and requires careful parameter tuning to achieve optimal performance. In the future, we will investigate the possibility of applying the DNS algorithm to more semi-supervised node classification GNNs, and aim to design new algorithms that automatically adjust the parameters of the DNS algorithm.
## Acknowledgments
This work has been supported by the National Key Research and Development Program of China under grant XXX, the National Natural Science Foundation of China under grants XXX and XXX.
|
2304.01427 | Interrelationships between nematicity, antiferromagnetic spin
fluctuations and superconductivity: Role of hotspots in FeSe$_{1-x}$S$_{x}$
revealed by high pressure $^{77}$Se NMR study | The sulfur-substituted FeSe, FeSe$_{1-x}$S$_{x} $, is one of the unique
systems that provides an independent tunability of nematicity,
antiferromagnetism and superconductivity under pressure ($p$). Recently Rana et
al. [Phys. Rev. B 101, 180503(R) (2020)] reported, from $^{77}$Se nuclear
magnetic resonance (NMR) measurements on FeSe$_{0.91}$S$_{0.09}$ under
pressure, that there exists a clear role of nematicity on the relationship
between antiferromagnetic (AFM) spin fluctuations and superconducting
transition temperature ($T_{\rm c}$) where the AFM spin fluctuations are more
effective in enhancing $T_{\rm c}$ in the absence of nematicity than with
nematicity. Motivated by the work, we carried out $^{77}$Se NMR measurements on
FeSe$_{1-x}$S$_{x}$ with $x$= 0.15 and 0.29 under pressure up to 2.10 GPa to
investigate the relationship in a wide range of $x$ in the FeSe$_{1-x}$S$_x$
system. Based on the new results together with the previously reported data for
$x$=0 [P. Wiecki et al., Phys. Rev. B 96, 180502(R) (2017)] and 0.09 [K. Rana
et al. Phys. Rev. B 101, 180503(R) (2020)], we established a $p$ - $x$ -
temperature ($T$) phase diagram exhibiting the evolution of AFM spin
fluctuations. From the systematic analysis of the NMR data, we found that the
superconducting (SC) state in nematic state arises from a non Fermi liquid
state with strong stripe-type AFM spin fluctuations while the SC state without
nematicity comes from a Fermi liquid state with mild stripe-type AFM spin
fluctuations. Furthermore, we show that the previously reported impact of
nematicity on the relationship between AFM fluctuations and superconductivity
holds throughout the wide range of $x$ from $x$ = 0 to 0.29 in
FeSe$_{1-x}$S$_{x}$ under pressure. We discuss the origin of the role of
nematicity in terms of the different numbers of hotspots on Fermi surfaces with
and without nematicity. | K. Rana, D. V. Ambika, S. L. Bud'ko, A. E. Böhmer, P. C. Canfield, Y. Furukawa | 2023-04-04T00:22:08Z | http://arxiv.org/abs/2304.01427v1 | Interrelationships between nematicity, antiferromagnetic spin fluctuations and superconductivity: Role of hotspots in FeSe\({}_{1-x}\)S\({}_{x}\) revealed by high pressure \({}^{77}\)Se NMR study
###### Abstract
The sulfur-substituted FeSe, FeSe\({}_{1-x}\)S\({}_{x}\), is one of the unique systems that provides an independent tunability of nematicity, antiferromagnetism and superconductivity under pressure (\(p\)). Recently Rana _et al._ [Phys. Rev. B **101**, 180503(R) (2020)] reported, from \({}^{77}\)Se nuclear magnetic resonance (NMR) measurements on FeSe\({}_{0.91}\)S\({}_{0.09}\) under pressure, that there exists a clear role of nematicity on the relationship between antiferromagnetic (AFM) spin fluctuations and superconducting transition temperature (\(T_{\rm c}\)) where the AFM spin fluctuations are more effective in enhancing \(T_{\rm c}\) in the absence of nematicity than with nematicity. Motivated by the work, we carried out \({}^{77}\)Se NMR measurements on FeSe\({}_{1-x}\)S\({}_{x}\) with \(x=0.15\) and 0.29 under pressure up to 2.10 GPa to investigate the relationship in a wide range of \(x\) in the FeSe\({}_{1-x}\)S\({}_{x}\) system. Based on the new results together with the previously reported data for \(x=0\) [P. Wiecki _et al.,_ Phys. Rev. B **96**, 180502(R) (2017)] and 0.09 [K. Rana _et al._ Phys. Rev. B **101**, 180503(R) (2020)], we established a \(p\) - \(x\) - temperature (\(T\)) phase diagram exhibiting the evolution of AFM spin fluctuations. From the systematic analysis of the NMR data, we found that the superconducting (SC) state in nematic state arises from a non Fermi liquid state with strong stripe-type AFM spin fluctuations while the SC state without nematicity comes from a Fermi liquid state with mild stripe-type AFM spin fluctuations. Furthermore, we show that the previously reported impact of nematicity on the relationship between AFM fluctuations and superconductivity holds throughout the wide range of \(x\) from \(x=0\) to 0.29 in FeSe\({}_{1-x}\)S\({}_{x}\) under pressure. We discuss the origin of the role of nematicity in terms of the different numbers of hotspots on Fermi surfaces with and without nematicity.
## I Introduction
The interplay between magnetic fluctuations, electronic nematicity and the unconventional nature of superconductivity has received wide interest after the discovery of high \(T_{\rm c}\) superconductivity in iron pnictides [1]. In most Fe-based superconductors (FeSCs), superconductivity appears close to the quantum phase transitions of two long-range orders: the nematic order, which is an electronically driven structural transition from high-temperature tetragonal (HTT) with C4 symmetry to low-temperature orthorhombic (LTO) having C2 symmetry, and the antiferromagnetic (AFM) order with spontaneously oriented Fe-\(3d\) spins characterized by stripe-type spin structure [2; 3; 4; 5]. In those systems, the nematic transition temperature (\(T_{\rm s}\)) is usually at or just above the transition temperature for the AFM state, (\(T_{\rm N}\)), and superconductivity emerges upon the simultaneous suppression of both the nematic and the AFM transitions by carrier doping and/or the application of pressure (\(p\)). These results clearly indicate a close relationship between AFM and nematic phases, however, the individual contribution to superconductivity from these two phases is difficult to separate due to the close proximity of the two phases.
In this respect, the S-substituted FeSe system, FeSe\({}_{1-x}\)S\({}_{x}\), provides a suitable platform to investigate the individual contribution of nematic and AFM fluctuations to superconductivity. FeSe\({}_{1-x}\)S\({}_{x}\) has the simplest of crystal structures among the Fe-based superconductors, with quasi-two dimensional FeSe(Se,S) layers in the _ab_ plane, stacked along the \(c\) axis. At \(x=0\), FeSe undergoes a nematic phase transition at \(T_{\rm s}\sim 90\) K followed by a superconducting (SC) transition at \(T_{\rm c}\sim 8.5\) K without AFM ordering at ambient pressure [6; 7; 8; 9]. This allows the study of magnetic fluctuations inside the nematic order and its relationship with superconductivity [10]. The nematic phase in FeSe can be suppressed by pressure application, with \(T_{\rm s}\) decreasing down to 32 K at \(p=1.5\) GPa [11; 12]. \(T_{\rm c}\) shows a complex multi-domed structure with \(p\), reaching a maximum \(T_{\rm c}\sim 37\) K at \(p\sim 6\) GPa [13; 14; 15]. At the same time, an AFM ordered state appears above \(p=0.8\) GPa [16; 17], and \(T_{\rm s}\) merges with \(T_{\rm N}\) above \(p=1.7\) GPa [18; 19; 20], limiting the range for studying the effects of nematicity on superconductivity without AFM state.
The S substitution for Se in FeSe\({}_{1-x}\)S\({}_{x}\) is also a well-known way to control the nematic phase, where \(T_{\rm s}\) is suppressed to zero at the critical value \(x_{\rm c}\sim 0.17\) with increasing \(x\). \(T_{\rm c}\) first increases up to 10 K, reaching a maximum around \(x=0.09\), and then decreases gradually at higher \(x\) close to \(x_{c}\) [21; 22; 23]. Although AFM order is not seen at ambient pressure in the FeSe\({}_{1-x}\)S\({}_{x}\) system, an enhancement of AFM fluctuations was found at \(x\sim 0.09\) by nuclear magnetic resonance (NMR) measurements [12]. The NMR study also suggested that AFM fluctuations are important in determining the superconductivity in FeSe\({}_{1-x}\)S\({}_{x}\) even in the vicinity of a nematic quantum phase transition (QPT) at \(x_{c}\sim 0.17\). On the
other hand, recent spectroscopic-imaging scanning tunneling microscopy [24], thermal conductivity and specific heat [25] measurements show an abrupt change in \(T_{\rm c}\) around \(x_{\rm c}\)[26] and the considerable change in the size and anisotropy of the SC gap is observed at the nematic QPT, implying different SC states inside and outside nematic states [27]. These results may indicate that the nematicity also plays an important role in the SC states of FeSe\({}_{1-x}\)S\({}_{x}\).
In fact, nematicity has been reported to affect superconductivity by changing the symmetry [28; 29] and magnitude [12; 30] of AFM spin fluctuations. AFM spin fluctuations in FeSCs are characterized by the in-plane stripe-type wave vectors: \({\bf Q}_{1}=(\pi,{\bf 0})\) and/or \({\bf Q}_{2}=({\bf 0},\pi)\) (using the single-iron Brillouin zone notation) [31; 32]. In the HTT C4 symmetric state where nematicity is absent, the AFM spin fluctuations may originate from both \(\pm{\bf Q}_{1}\) and \(\pm{\bf Q}_{2}\) wave vectors. In contrast, in the C2 symmetric state when nematicity is present, only one of the two wave vectors, either \(\pm{\bf Q}_{1}\) or \(\pm{\bf Q}_{2}\), will contribute to AFM spin fluctuations, due to the breaking of symmetry equivalency [28; 29]. Recent \({}^{77}\)Se NMR studies on FeSe\({}_{1-x}\)S\({}_{x}\) under pressure actually showed the large change in the magnitude of AFM spin fluctuations with and without nematicity [33; 34; 35; 36]. In addition, from \({}^{77}\)Se NMR studies on FeSe\({}_{1-x}\)S\({}_{x}\) with \(x=0.09\) under pressure, the impact of nematicity on the relationship between AFM spin fluctuations and \(T_{\rm c}\) has been proposed, where the AFM spin fluctuations are more effective in enhancing superconductivity in the absence of nematicity as compared to when it is present [33; 37]. On the other hand, as described above, the importance of nematicity for the appearance in superconductivity has been discussed in FeSCs [38; 39; 40; 41; 42; 43]. Therefore it is important to investigate the relationships between AFM spin fluctuations, nematicity, and superconductivity in a wide range of \(x\) in FeSe\({}_{1-x}\)S\({}_{x}\).
In this paper, we carried out \({}^{77}\)Se NMR studies under pressure up to 2.1 GPa for the \(x=0.15\) and 0.29 compounds to investigate whether the role of nematicity in this relationship holds over a wide range of \(x\) in FeSe\({}_{1-x}\)S\({}_{x}\). From the analysis of the NMR data for the two compounds, together with the previous data for the \(x=0\) [44] and \(x=0.09\) [33] systems under \(p\), we report that the SC state in the nematic state arises from a non-Fermi liquid (nFL) with strong AFM spin fluctuations, while the SC state without nematicity arises from a Fermi liquid (FL) with moderate AFM spin fluctuations in these systems. Furthermore, we confirm that nematicity has a clear impact on the relationship between AFM spin fluctuations and superconductivity throughout the FeSe\({}_{1-x}\)S\({}_{x}\) system under \(p\), as previously suggested [33]. The AFM spin fluctuations in the C4 phase were found to be more effective in enhancing \(T_{\rm c}\) than those in the C2 phase by a factor of \(\sim 7\pm 2\). We provide a possible explanation for this observation in terms of the total number of hotspots on the Fermi surface that meet the nesting condition, which is higher in the C4 symmetric state than in the C2 symmetric state by a factor of 8. Finally, it is suggested that the AFM correlation length (\(\xi_{\rm AFM}\)) in the C2 state is generally longer than in the C4 phase, which could be related to the different nature of the SC states with and without nematicity.
## II Experimental details
NMR measurements of \({}^{77}\)Se nuclei (\(I=1/2\), \(\gamma_{\rm N}/2\pi=8.118\) MHz/T) were carried out using a laboratory-built phase-coherent spin-echo pulse spectrometer. The measurements were performed under a fixed external magnetic field of \(H=7.4089\) T applied either along the \(c\) axis or along the tetragonal [110] direction in the \(ab\) plane. The single crystals with sulfur contents \(x=0.15\) and 0.29 in FeSe\({}_{1-x}\)S\({}_{x}\) were grown using chemical vapor transport as outlined in Refs. [12; 45]. Multiple small single crystals were oriented and stacked with spacers inside the NMR coil. A NiCrAl/CuBe piston-cylinder cell was used for the application of \(p\) up to \(\sim 2\) GPa. Daphne 7373 was used as the pressure-transmitting fluid, and the nuclear quadrupole resonance (NQR) of Cu in Cu\({}_{2}\)O at 77 K was used for \(p\) calibration [46; 47]. Spectra were obtained from spin-echo signals by fast Fourier transform. \(T_{1}\) was measured with a saturation-recovery method, and \(1/T_{1}\) at each \(T\) was determined by fitting the nuclear magnetization \(M\) versus time \(t\) with the single-exponential function \(1-M(t)/M(\infty)=e^{-t/T_{1}}\), where \(M(t)\) and \(M(\infty)\) are the nuclear magnetization at time \(t\) after saturation and the equilibrium nuclear magnetization at \(t\rightarrow\infty\), respectively.
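As a small illustration of the \(T_{1}\) extraction described above, the following sketch fits a synthetic saturation-recovery curve with the single-exponential function; the delay times, noise level, and \(T_{1}\) value are hypothetical and not taken from the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, m_inf, t1):
    """Single-exponential recovery: M(t) = M(inf) * (1 - exp(-t/T1))."""
    return m_inf * (1.0 - np.exp(-t / t1))

# Synthetic recovery curve: T1 = 50 ms, M(inf) = 1, with 5% noise.
rng = np.random.default_rng(1)
t = np.linspace(1e-3, 0.3, 40)                       # delay times in seconds
m = recovery(t, 1.0, 0.05) + 0.05 * rng.normal(size=t.size)
popt, _ = curve_fit(recovery, t, m, p0=(1.0, 0.02))
m_inf_fit, t1_fit = popt
print(f"T1 ≈ {t1_fit * 1e3:.1f} ms, 1/T1 ≈ {1.0 / t1_fit:.1f} s^-1")
```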
## III Results
### \({}^{77}\)Se NMR Spectrum and Knight shift
Figures 1(a)-1(d) show the \({}^{77}\)Se NMR spectra measured at \(T=20\) K under various pressures (\(p=0\) - 2.10 GPa) for \(x=0.15\) and 0.29, together with those for \(x=0\) and 0.09 reported previously [33; 44]. Here we applied magnetic field along [110] direction in the HTT phase (\(H\parallel ab\)). As in the case of \(x=0\) (\(T_{\rm s}=90\) K) and 0.09 (\(T_{\rm s}=65\) K) at ambient \(p\), the two split lines of the NMR spectrum (shown in red) are observed in nematic state for \(x=0.15\) (\(T_{\rm s}=35\) K at ambient \(p\)) as shown at the bottom of Fig. 1(c). Those two peak structures are due to the twinned nematic domains [44; 48; 9]. As can be seen in Fig. 1(c) for \(x=0.15\), the splitting in the NMR spectra is not seen at \(p=0.2\) GPa and higher (even down to 1.7 K at 0.2 GPa, as shown in the supplementary [49]). This indicates a very small critical pressure of \(p_{\rm c}\sim\,0.2\) GPa for nematic QPT at \(x=\,0.15\). In the case of \(x=0.29\) (\(>x_{c}\sim 0.17\)), no split of the NMR spectra is observed from ambient pressure up to 1.95 GPa as shown in Fig. 1(d), showing no-nematic state in the compound. NMR spectra for nonzero \(x\) values
are broader than the spectrum for \(x=0\) due to the inhomogeneity introduced by the sulfur substitution for selenium.
Figures 2(a) and 2(b) show the \(T\) dependences of \(K\) for \(H\parallel ab\) (\(K_{\rm ab}\)) and for \(H\parallel c\) (\(K_{\rm c}\)) of \(x=0.15\) at various pressures from ambient up to 2.0 GPa. With decreasing \(T\), both \(K_{\rm ab}\) and \(K_{\rm c}\) decrease and become nearly constant below \(\sim\)50 K. The values of \(K_{\rm ab}\) decrease slightly with increasing \(p\) while \(K_{\rm c}\) shows less \(p\) dependence. In particular, no clear \(p\) dependence is observed in \(K_{\rm c}\) at low \(T\) below \(\sim\) 150 K within our experimental uncertainty. Similar \(T\) and \(p\) dependences of \(K_{\rm ab}\) and \(K_{\rm c}\) are observed for \(x=0.29\) as shown in Figs. 2(c) and 2(d).
To see the \(x\) and \(p\) dependences of \(K\) at high and low temperatures more clearly, we plot the \(K_{\rm ab}\) and \(K_{\rm c}\) measured at 295 K (top) and at 20 K (bottom) in Fig. 3(a) and 3(b), respectively. We also plot the average values of Knight shift (\(K_{\rm avg}\equiv\frac{2K_{\rm ab}+K_{\rm c}}{3}\)) in Fig. 3(c). Here the values of \(K\) for \(x=0\) and 0.09 were taken from Refs. [44] and [33], respectively. The average values of \(K_{\rm ab}\) for the two peaks were used when nematic state was present. In general, \(K\) consists of the \(T\)-independent orbital component (\(K_{0}\)) and the \(T\)-dependent spin component (\(K_{\rm spin}\)) which is proportional to static uniform magnetic susceptibility and thus to the density of states at the Fermi energy \(N(E_{\rm F})\). As we show later in Sec. IV, \(K_{0}\) is nearly independent of \(x\) and \(p\). Therefore the \(x\) and \(p\) dependences of \(K_{\rm ab}\), \(K_{\rm c}\) and \(K_{\rm ave}\) can be attributed to the spin part of \(K\). Given those dependences, the \(N(E_{\rm F})\) in FeSe\({}_{1-x}\)S\({}_{x}\) system has a slight suppression at 295 K with increasing \(x\) and/or \(p\) up to \(\sim\) 2.0 GPa. In contrast, the \(N(E_{\rm F})\) is nearly independent of \(x\) at low temperatures with a tiny decrease with \(p\). We note that the change in \(N(E_{\rm F})\) due to a Lifshitz transition under pressure suggested in the quantum oscillations measurements for \(x=0.11\)[50] and inferred from NMR measurements for \(x=0.12\)[35] was not detected in the \(p\) dependence of \(K\) values within our experimental uncertainty in the measured FeSe\({}_{1-x}\)S\({}_{x}\) systems. It is interesting to point out that \(T_{\rm c}\) varies significantly with \(x\) and \(p\) even though the \(N(E_{\rm F})\) is nearly independent of those. This is in contrast to conventional BCS superconductors, in which \(N(E_{\rm F})\) generally correlates with \(T_{\rm c}\). These results strongly indicate that AFM spin fluctuations play an important role in the appearance of SC in FeSe\({}_{1-x}\)S\({}_{x}\), as will be discussed below.
### \({}^{77}\)Se Spin-lattice Relaxation Rates (\(1/T_{1}\))
Figures 4(a)-4(h) show the \(T\) dependence of \({}^{77}\)Se \(1/T_{1}T\) for \(H\parallel ab\) [\((1/T_{1}T)_{\rm ab}\)] by red circles for the various pressures at \(x=0.15\). In general, \(1/T_{1}T\) is related to the dynamic magnetic susceptibility as \(1/T_{1}T\sim\gamma_{\rm N}^{2}k_{\rm B}\sum_{\bf q}|A({\bf q})|^{2}\chi^{\prime \prime}({\bf q},\omega_{\rm N})/\omega_{\rm N}\), where \(A({\bf q})\) is the wave-vector \({\bf q}\) dependent form factor and \(\chi^{\prime\prime}({\bf q},\omega_{\rm N})\) is the imaginary part of \(\chi({\bf q},\omega_{\rm N})\) at the Larmor frequency \(\omega_{\rm N}\)[53]. Since \(K\) measures the \({\bf q}=0\) uniform magnetic susceptibility, by comparing the \(T\) dependencies between \(1/T_{1}T\) and \(K\), one can obtain information on the \(T\) evolution of \(\sum_{\bf q}\chi^{\prime\prime}({\bf q},\omega_{\rm N})\) with respect to that of \(\chi^{\prime}(0,0)\). Above \(\sim 100\) K, \((1/T_{1}T)_{\rm ab}\) decreases monotonically with decreasing \(T\) similar to \(K_{\rm ab}\) and \(K_{\rm c}\). In contrast, different \(T\) dependences below \(\sim 100\) K were observed: \(K_{\rm ab}\) and \(K_{\rm c}\) continue to decrease on lowering \(T\) and show a nearly constant behavior below 50 K for \(K_{ab}\) and 25 K for \(K_{\rm c}\) respectively, whereas \((1/T_{1}T)_{\rm ab}\) starts increasing around \(\sim 100\) K when decreasing \(T\). This deviation is
Figure 1: Pressure (\(p\)) and \(x\) dependence of \({}^{77}\)Se NMR spectra of FeSe\({}_{1-x}\)S\({}_{x}\) at a temperature of 20 K under the magnetic field of 7.4089 T applied along the tetragonal [110] direction in the \(ab\) plane. The spectra for (a) \(x=0\) was taken from Ref. [44] and (b) \(x=0.09\) from Ref. [33] whereas the spectra for FeSe\({}_{1-x}\)S\({}_{x}\) at ambient pressure were taken from Ref. [12]. Red points represent the spectra in the nematic state which are fitted as the sum (red lines) of two Lorentzian peaks (blue lines). Green line in (a) is the spectrum in antiferromagnetic state. Single peak spectra in the HTT phase are shown with black lines.
attributed to the presence of AFM spin fluctuations as in \(x=0\) and 0.09 in FeSe\({}_{1-x}\)S\({}_{x}\)[12; 33]. The decrease in \(1/T_{1}T\) observed around or below \(T_{\rm c}\) marked by black arrows is due to SC transition. The suppression of \(1/T_{1}T\) above \(T_{\rm c}\) is also observed [see Fig. 4(a)] whose origin has been discussed in terms of a possible SC fluctuation effect [51] or pseudogap behavior [52].
The temperature dependence of \((1/T_{1}T)_{\rm ab}\) at low temperatures changes when \(p\) passes \(p_{\rm c}\sim 0.2\) GPa for \(x=0.15\). As shown in Fig. 4(a), at ambient \(p\), \((1/T_{1}T)_{\rm ab}\) at low \(T\) shows a Curie-Weiss-like enhancement, which is expected for two-dimensional AFM spin fluctuations [53]. On the other hand, for \(p>p_{\rm c}\) shown in Figs. 4(c)-4(h), \((1/T_{1}T)_{\rm ab}\) moderately increases below \(\sim 100\) K before exhibiting a constant behavior below a characteristic temperature \(T_{\rm FL}\) marked by blue arrows. Here we determined \(T_{\rm FL}\) from the \(1/T_{1}T\) data for \(H||ab\), since \(1/T_{1}T\) is more sensitive to AFM fluctuations for \(H||ab\) than for \(H||c\) [12]. A similar temperature dependence of \(1/T_{1}T\) has been reported for \(x=0.09\) [33], and the state below \(T_{\rm FL}\) was regarded as a Fermi liquid (FL) state because it follows the Korringa behavior, where both \(1/T_{1}T\) and \(K^{2}\) are constant, similar to the FL state in exchange-enhanced metals [53; 54]. It is important to note that while the SC state with C4 symmetry arises from a FL state with AFM spin fluctuations, the SC state with C2 symmetry arises from the nematic state with AFM spin fluctuations that show a Curie-Weiss-like behavior.
Similarly, Figs. 5(a)-5(g) show the \(T\) dependence of \((1/T_{1}T)_{\rm ab}\) by red circles for the measured pressures at \(x=0.29\). At high temperatures above \(\sim 100\) K, \((1/T_{1}T)_{\rm ab}\) increases monotonically with increasing \(T\), similar to \(K_{ab}\) and \(K_{c}\) for all the measured \(p\) region up till 1.95 GPa. As in the case of \(x=0.15\), the deviation between \((1/T_{1}T)_{\rm ab}\) and \(K\) is seen below \(\sim 100\) K. The characteristic \(T_{\rm FL}\) is also observed at \(\sim 20\)-30 K for all
Figure 3: Pressure and \(x\) dependences of the Knight shift (\(K\)) under a magnetic field (\(H\)) of 7.4089 T for (a) \(H\parallel ab\), (b) \(H\parallel c\), and (c) the average value of Knight shift (\(K_{\rm avg}\equiv\frac{2K_{\rm ab}+K_{c}}{3}\)) at different temperatures at 295 K (top) and 20 K (bottom). The data for \(x=0\) and 0.09 are from Ref. [44] and Ref. [33] respectively and the \(K_{\rm c}\) values under \(p\) for \(x=0\) are missing due to the unavailability of the data.
Figure 2: Temperature and pressure dependencies of \({}^{77}\)Se NMR Knight shift (\(K\)) at various measured pressures under a magnetic field (\(H\)) of 7.4089 T for (a) \(K_{\rm ab}\) and (b) \(K_{\rm c}\) for \(x=0.15\). (c) \(K_{\rm ab}\) and (d) \(K_{\rm c}\) for \(x=0.29\).
\(p\) values, where a constancy in \(1/T_{1}T\) and \(K\) (both \(K_{\rm ab}\) and \(K_{\rm c}\)) is observed.
Since the \(T\) dependence of \(1/T_{1}T\) at low temperatures appears to be different below and above \(x_{\rm c}\) and/or \(p_{\rm c}\), it is important to examine whether the wave vectors associated with the AFM spin fluctuations change or not. According to previous NMR studies on Fe pnictides [55; 56; 57] and related materials [58; 59; 60], the ratio \(R\equiv\frac{T_{1,c}}{T_{1,ab}}\) provides valuable information on the \({\bf Q}\) dependence of the AFM spin fluctuations, where \(T_{1,c}\) and \(T_{1,ab}\) represent the \(T_{1}\) measured under \(H\parallel c\) and \(H\parallel ab\), respectively. If the AFM spin fluctuations are characterized by the stripe-type wave vectors \({\bf Q}_{1}\) and/or \({\bf Q}_{2}\), \(R=1.5\) is expected for isotropic fluctuations, while in the case of Néel-type AFM spin fluctuations with the wave vector \({\bf Q}=(\pi,\,\pi)\), \(R\) is expected to be 0.5. Therefore, we measured \(1/T_{1}T\) for \(H\parallel c\) for both \(x=0.15\) and 0.29. The results are shown by gray circles in Figs. 4 and 5, respectively, where the \(T\) dependence of \(R\) at different pressures is shown in the corresponding insets. At high temperatures above \(\sim\) 100 K, the \(1/T_{1}T\) values are nearly the same for the \(H\parallel ab\) and \(H\parallel c\) directions and \(R\sim 1\) for both \(x=0.15\) and 0.29. When AFM spin fluctuations start developing as \(T\) is lowered below 100 K, a difference is seen between the two measured directions, and \(R\) becomes \(\sim 1.5\) or more. This is very similar to what was observed for \(x=0.09\) [33] and suggests that the AFM spin fluctuations are characterized by the stripe-type wave vectors \({\bf Q}_{1}\) and/or \({\bf Q}_{2}\) in FeSe\({}_{1-x}\)S\({}_{x}\) throughout the measured \(p\) region up to \(\sim\) 2 GPa.
## IV Discussion
In this section, we show the relationship between AFM spin fluctuations, nematicity and superconductivity in FeSe\({}_{1-x}\)S\({}_{x}\). To discuss the magnitude of AFM spin fluctuations, we estimate the contribution of AFM spin fluctuations from the observed \(1/T_{1}T\), assuming that it can be decomposed into an AFM component [\((1/T_{1}T)_{\rm AFM}\)] and a \(q\)-independent component [\((1/T_{1}T)_{0}\)]: \(1/T_{1}T=(1/T_{1}T)_{\rm AFM}\) + \((1/T_{1}T)_{0}\). The second term \((1/T_{1}T)_{0}\) is expected to be proportional to \(K_{\rm spin}^{2}\). Here \(K_{\rm spin}\) is given by \(K-K_{0}\), where \(K_{0}\) is the \(T\)-independent orbital shift. Therefore, by comparing \(1/T_{1}T\) to \(K_{\rm spin}^{2}\), one can estimate \((1/T_{1}T)_{\rm AFM}\). The green curves in Figs. 4 and 5 are the estimated temperature dependence of \(CK_{\rm spin}^{2}\); the values are shown on the right axes. \(C\) is a proportionality constant introduced to fit the temperature dependence of \((1/T_{1}T)_{\rm ab}\) at high temperatures above \(\sim\)100 K. From the fittings, we found that \(K_{0}=0.175\) % is independent of \(p\) and \(x\), and the \(p\)-independent values of \(C\) are 8.5 and 7.75 for \(x=0.15\)
Figure 4: Temperature dependence of \({}^{77}\)Se \(1/T_{1}T\) (left axes) for \(x=0.15\) at various pressures under a magnetic field (\(H\)) = 7.4089 T, with \(H\parallel ab\) (red circles) and \(H\parallel c\) (gray circles). The green line for each panel (right axes) shows the temperature dependence of \(CK_{\rm spin,c}^{2}\) where \(K_{\rm spin,c}\) is the spin part of \(K\) estimated by subtracting the \(T\) independent \(K_{0}\) and \(C\) is a scaling factor. For all pressures measured, \(K_{0}\) and \(C\) are found to be constant, 0.175 % and 8.5, respectively. Downward black arrows show \(T_{\rm c}\) under \(H\parallel ab=7.4089\) T determined by the _in situ_ ac susceptibility measurements [49] and downward blue arrows show the \(T\) below which \(1/T_{1}T\) = constant (black dashed line) behavior is observed, defined as \(T_{\rm FL}\). The inset in each panel shows the \(T\) dependence of the ratio \(R\equiv(1/T_{1}T)_{ab}/(1/T_{1}T)_{\rm c}\). The black and red horizontal lines represent the expected values for stripe-type (\(R=1.5\)) and Néel-type (\(R=0.5\)) antiferromagnetic fluctuations, respectively.
and 0.29, respectively. Here we used \(K_{c}\) data to compare the temperature dependence of \((1/T_{1}T)_{\rm ab}\) since the enhancements in \(1/T_{1}T\) due to the stripe-type AFM spin fluctuations are prominent along the \(c\) axis [56]. For the cases where \(K_{c}\) values were not available as in Figs. 4(b), 4(e), 4(g) and Figs. 5 (b), 5(e), 5(f), smooth extrapolations were made to estimate \(K_{c}\) based on the data at nearest pressures for each compound. Thus, the differences between \((1/T_{1}T)_{\rm ab}\) (red circles) and green curves are attributed to the contributions of AFM spin fluctuations, \((1/T_{1}T)_{\rm AFM}\).
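The subtraction itself is straightforward; a minimal sketch is given below, where the single data point (the \(1/T_{1}T\) and \(K_{c}\) values) is hypothetical and only the fitted constants \(K_{0}=0.175\) % and \(C=8.5\) are taken from the analysis above.

```python
import numpy as np

def afm_part(invT1T_ab, K_c, K0: float = 0.175, C: float = 8.5):
    """(1/T1T)_AFM = (1/T1T)_ab - C * (K_c - K0)^2, with the Knight shift in %."""
    k_spin = np.asarray(K_c) - K0
    return np.asarray(invT1T_ab) - C * k_spin**2

# Hypothetical single point: 1/T1T = 0.60 (s K)^-1 at K_c = 0.30 % gives
# 0.60 - 8.5 * 0.125^2 ≈ 0.47 (s K)^-1 as the AFM contribution.
print(afm_part([0.60], [0.30]))
```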
The estimated AFM contributions to \(1/T_{1}T\) are shown with the phase diagram in Figs. 6(b)-6(d) where contour plots of \((1/T_{1}T)_{\rm AFM}\) are shown for \(x\) = 0.15 and 0.29 as a function of \(p\) together with the data for \(x\) = 0 and 0.09 estimated from our previous papers [33; 44]. The \(p-x-T\) phase diagram of FeSe\({}_{1-x}\)S\({}_{x}\) in Fig. 6(a) was constructed from NMR measurements from ambient to \(p\sim\) 2 GPa as well as the data available from literature. \(T_{\rm c}\) (closed magenta circles) [49; 33] and \(T_{\rm N}\) (closed green triangles) [44] were deduced using _in situ_ ac susceptibility measurements at zero field, and \(T_{\rm s}\) (closed orange squares) [33; 44] were determined from the NMR spectra. For FeSe under pressure, transition temperatures for the nematic (open orange triangles), AFM (green crosses) and superconducting states (magenta crosses) determined from resistivity curves were taken from Ref. [61]. Similarly, for \(p=0\) in FeSe\({}_{1-x}\)S\({}_{x}\), the transition temperatures for the nematic (orange pluses) and superconducting (magenta pluses) were taken from Ref. [22] which were also based on resistivity measurements.
In the contour plots [Figs. 6(b)-6(e)], we also show \(T_{\rm s}\) and \(T_{\rm c}\) by symbols in orange and magenta, respectively. The white squares represent the characteristic temperature (\(T_{\rm FL}\)) below which FL states are observed. It is clearly seen in the contour plots that AFM spin fluctuations are more enhanced inside the nematic state than outside it. One of the interesting phenomena in FeSe\({}_{1-x}\)S\({}_{x}\) is that a FL state is observed only in the C4 state, after the nematic state is suppressed with \(p\) or \(x\), throughout the presented \(p-x-T\) phase diagram. A FL behavior was also seen in resistivity measurements for \(x=0.11\) under pressure [50]. This was recently attributed to signatures of a quantum Griffiths phase close to the nematic quantum phase transition through magnetoresistivity measurements [62]. It is interesting to note that the smooth extrapolation of \(T_{\rm FL}\) to \(T_{\rm FL}=0\) seems to cross the nematic QPT pressures of \(p_{\rm c}\sim\) 0.5 and 0.2 GPa for \(x=0.09\) and 0.15, respectively. Therefore, our results also suggest a strong connection between the nematic QPT and the appearance of FL states.
Now we discuss the relationship between \(T_{\rm c}\) and the magnitude of AFM spin fluctuations. In Fig. 7, we plot \(T_{\rm c}\) at zero magnetic field against the magnitude of AFM spin fluctuations. Here we used the values of \((1/T_{1}T)_{\rm AFM}\) at \(T=15\) K to represent the magnitude of AFM spin fluctuations. As has been pointed out in Ref. [33], the data points are mainly divided into two groups with the open
Figure 5: Temperature dependence of \({}^{77}\)Se \(1/T_{1}T\) (left axes) for \(x=0.29\) at various pressures under a magnetic field (\(H\)) of 7.4089 T, with \(H\parallel ab\) (red circles) and \(H\parallel c\) (gray circles). The green line for each panel (right axes) shows the temperature dependence of \(CK_{\rm spin,c}^{2}\) where \(K_{\rm spin,c}\) is the spin part of \(K\) estimated by subtracting the \(T\) independent \(K_{0}\) and \(C\) is a scaling factor. For all pressures measured, \(K_{0}\) and \(C\) are found to be constant, 0.175 % and 7.75, respectively. Downward blue arrows show the \(T_{\rm FL}\) below which \(1/T_{1}T=\) constant (black dashed line) behavior is observed. The insets in (a), (c), (e), and (g) show the \(T\) dependence of the ratio \(R\equiv(1/T_{1}T)_{ab}/(1/T_{1}T)_{c}\). The black and red horizontal lines in the insets represent the expected values for stripe-type (\(R\) = 1.5) and Néel-type (\(R\) = 0.5) AFM spin fluctuations, respectively.
Figure 6: (a) \(x\)-pressure(\(p\))-temperature(\(T\)) phase diagram of FeSe\({}_{1-x}\)S\({}_{x}\) determined from NMR and resistivity measurements. The nematic transition temperature \(T_{\rm s}\) was determined by the splitting of the \({}^{77}\)Se NMR spectra with \(H\parallel ab\) in the tetragonal [110] direction, which includes data from Refs. [33; 44]. The SC transition temperature \(T_{\rm c}\) was determined by _in situ_ ac susceptibility measurements (closed magenta circles) at \(H=0\) [49; 33] and by resistivity measurements from Ref. [61] (magenta \(\times\)) and Ref. [22] (magenta \(+\)). The AFM transition temperature \(T_{\rm N}\) was determined by \(\chi_{\rm AC}\) (closed green triangles) [44] and by resistivity measurements (green \(\times\)) [61]. (b)-(e) \(p\)-\(T\) contour plots showing the magnitude of antiferromagnetic spin fluctuations for various values of \(x\), along with the transition temperatures as described in (a). The \(1/T_{1}T\) data for \(x=0\) and 0.09 are from Ref. [44] and Ref. [33], respectively. The Fermi liquid (FL) temperature (\(T_{\rm FL}\)) is the temperature below which the Korringa relation is observed, i.e., both the \({}^{77}\)Se nuclear relaxation rate divided by temperature (\(1/T_{1}T\)) and the \({}^{77}\)Se Knight shift are constant. The temperature region where the Korringa relation is not satisfied is indicated as the non-Fermi-liquid (nFL) state. The solid and dotted lines are guides for the eyes.
(C2 state) and closed (C4 state) squares, and \(T_{\rm c}\) seems to be proportional to AFM spin fluctuations in both the C2 and C4 states, as shown by the blue and black lines, respectively. However, the slope for the C4 state is clearly higher than that for the C2 state. This indicates that AFM spin fluctuations without nematicity enhance \(T_{\rm c}\) much more than those with nematicity. Here, the black line has a \(7\pm 2\) times higher slope than the blue line. It is interesting to point out that the three points inside the black dashed oval between the two lines correspond to the data close to \(p_{\rm c}\), suggesting a smooth crossover between the two linear behaviors at the nematic QPT.
One of the reasons why AFM spin fluctuations in the C4 state lead to a higher \(T_{\rm c}\) than those in the C2 state in FeSe\({}_{1-x}\)S\({}_{x}\) could be the different Fermi surfaces in the paramagnetic tetragonal and nematic orthorhombic phases. Figures 8(a) and 8(b) show the schematic Fermi surfaces at \(q_{z}=0\) for the C4 and C2 states based on Refs. [22; 63]. The main differences are seen at \({\bf q}=({\bf q_{x}},{\bf q_{y}})=({\bf 0},{\bf 0})\) (\(\Gamma\) point) and at \({\bf q}=\pm{\bf Q}_{1}\) and \({\bf q}=\pm{\bf Q}_{2}\) (Z points in the C4 state and A points in the C2 state). For simplicity, the orbital contributions to the Fermi surface are not indicated.
In the C4 state of FeSe, two circular hole pockets (red lines) at the \(\Gamma\) point and two oval-shaped electron pockets (blue lines) at the Z points are found in ARPES measurements [63]. This structure of the Fermi surface is also observed in ARPES measurements for \(x=0.18\) at the low \(T\) of 10 K, where nematicity is absent [22]. When the hole pocket at the \(\Gamma\) point is translated by either \({\bf Q}_{1}\) or \({\bf Q}_{2}\), 16 intersections with the electron pockets at the Z points can exist, as shown in Fig. 8(c). Here \(n_{\rm hs}=16\) denotes the number of intersections when the hole pockets are translated by one wave vector. Since the four wave vectors (\(\pm{\bf Q}_{1}\) and \(\pm{\bf Q}_{2}\)) are allowed for C4 symmetry, a total of \(16\times 4=64\) possible intersections can be found.
In the C2 nematic state, on the other hand, an oval hole pocket (red line) at the \(\Gamma\) point and a single peanut-shaped electron pocket (blue line) at the A points are present, as shown in Fig. 8(b) [63]. When the hole pocket at \(\Gamma\) is translated by either \({\bf Q}_{1}\) or \({\bf Q}_{2}\) to the electron pocket at the A points in the nematic state, there are 4 possible intersections (that is, \(n_{\rm hs}=4\)) for one wave vector, as shown in Fig. 8(d). Since only two wave vectors (either \(\pm{\bf Q}_{1}\) or \(\pm{\bf Q}_{2}\)) are prominent in the nematic state, a total of \(4\times 2=8\) intersections can be present.
These intersections that meet the nesting conditions are called hotspots and are deemed important in determining \(T_{\rm c}\) in high-\(T_{\rm c}\) cuprates where AFM spin fluctuations are relevant [11]. The total number of the hotspots in the C4 symmetric tetragonal state increases by a factor of 64/8=8 with respect to the C2 orthorhombic nematic
Figure 7: Plot of \(T_{\rm c}\) at zero field versus \((1/T_{1}T)_{\rm AFM}\) at \(T=15\) K. The values for \(x=0\) are taken from Refs. [44; 48; 61] and those for \(x=0.09\) from Ref. [33]. Black and blue lines show linear fits for the C4 (closed symbols) and C2 (open symbols) phases, respectively. The three points inside the dashed black oval denote the values taken close to the critical pressure for the nematic quantum phase transition and were not included in the linear fits.
Figure 8: Schematic Fermi surfaces and nesting with the stripe-type wave vector at \(q_{z}=0\) in the FeSe\({}_{1-x}\)S\({}_{x}\) system. (a) Schematic Fermi surface for the tetragonal state based on Refs. [22; 63], where \(q_{x}\) and \(q_{y}\) are in units of \(\pi/a\) and \(a\) is related to the tetragonal lattice constant \(d\) by \(a\equiv d/\sqrt{2}\). (b) Schematic Fermi surface for the detwinned nematic state based on Ref. [63], where \(q_{x}\) and \(q_{y}\) are in units of \(\pi/a\) and \(\pi/b\), with \(a\) and \(b\) the orthorhombic lattice constants. Here red lines represent the hole pockets at the \(\Gamma\) point and blue lines represent the electron pockets at the Z and A points for the tetragonal and orthorhombic states, respectively. (c)-(d) The number of hotspots (\(n_{\rm hs}\)) due to the intersection of a single hole and a single electron pocket when either is translated by the wave vectors \({\bf Q}_{1}\) or \({\bf Q}_{2}\) in the (c) tetragonal and (d) nematic states.
state. This number seems to be consistent with the experimentally determined factor of \(\sim 7\pm 2\) between the slopes of the lines in Fig. 7. Although this comparison is quite qualitative, our results suggest that "hotspots" are one of the key parameters determining \(T_{\rm c}\) in Fe-based superconductors. More detailed theoretical studies for a quantitative analysis of the dependence of \(T_{\rm c}\) on \(n_{\rm hs}\) are highly called for.
Finally, we comment on the \(x\) and \(p\) dependences of the AFM correlation length \(\xi_{\rm AFM}\). Figure 9 shows the \(x\) and \(p\) dependences of \((1/T_{1}T)_{\rm AFM}\) at \(T=15\) K in FeSe\({}_{1-x}\)S\({}_{x}\), where the open and closed symbols represent the C2 and C4 states, respectively. It is clearly seen that the values of \((1/T_{1}T)_{\rm AFM}\) in the C2 phase are higher than those in the C4 phase. According to the Millis-Monien-Pines (MMP) model, \((1/T_{1}T)_{\rm AFM}\) is proportional to the square of \(\xi_{\rm AFM}\) in real space \((1/T_{1}T\propto\xi_{\rm AFM}^{2})\) [64; 65]. Therefore, our results may suggest that \(\xi_{\rm AFM}\) in the C4 state is shorter than in the nematic C2 state. It is interesting to point out that the SC states with nematicity (C2 state) and without nematicity (C4 state) are induced by AFM spin fluctuations with different values of \(\xi_{\rm AFM}\). It would also be interesting if the change in \(\xi_{\rm AFM}\) between the C4 and C2 states relates to the BCS-BEC transition at \(x_{\rm c}\sim 0.17\) observed by ARPES measurements at ambient pressure [66], where a longer \(\xi_{\rm AFM}\) is suggested when BCS superconductivity is present and a shorter \(\xi_{\rm AFM}\) in the BEC case. If so, the shortening of \(\xi_{\rm AFM}\) under \(p\) in FeSe\({}_{1-x}\)S\({}_{x}\) would be compatible with a BCS-BEC transition at the respective critical pressures for the nematic QPT, around 0.5 GPa for \(x=0.09\) and 0.2 GPa for \(x=0.15\). Further studies searching for a BCS-BEC transition under pressure in FeSe\({}_{1-x}\)S\({}_{x}\) are called for.
## V Summary
In summary, \({}^{77}\)Se-NMR measurements were carried out on the FeSe\({}_{1-x}\)S\({}_{x}\) system under pressure for \(x=0.15\) and 0.29 and were analyzed together with the previously reported measurements for \(x=0\) [44] and 0.09 [33], to provide a comprehensive study of the interrelationships between nematicity, magnetism and superconductivity in this system. We established the \(p-x-T\) phase diagram featuring the presence of nematic quantum phase transitions, the appearance of Fermi liquid phases, superconductivity with C4 and C2 rotational symmetries, and contour plots of the magnitude of AFM spin fluctuations. When the dependence of the superconducting transition temperature on the magnitude of AFM spin fluctuations is analyzed, distinct linear relationships emerge depending on the symmetry. The C4-symmetric AFM spin fluctuations are more effective in enhancing \(T_{\rm c}\) than their C2-symmetric counterparts by a factor of 7 \(\pm\) 2. This was explained in terms of the total number of hotspots being a factor of 8 higher for the C4-symmetric Fermi surface than for the Fermi surface with C2 symmetry in the FeSe\({}_{1-x}\)S\({}_{x}\) system. These results indicate that the hotspots may play an important role in determining \(T_{\rm c}\) in FeSe\({}_{1-x}\)S\({}_{x}\). Future theoretical investigations clarifying the relationship between the total number of hotspots and \(T_{\rm c}\) would be interesting, leading a step closer towards understanding the details of the Cooper-pairing mechanism in unconventional superconductors.
## VI Acknowledgments
We thank Rafael Fernandes for helpful discussions. The research was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames Laboratory is operated for the U.S. DOE by Iowa State University under Contract No. DE-AC02-07CH11358.
|
2307.08635 | Lightweight ML-based Runtime Prefetcher Selection on Many-core Platforms | Modern computer designs support composite prefetching, where multiple
individual prefetcher components are used to target different memory access
patterns. However, multiple prefetchers competing for resources can drastically
hurt performance, especially in many-core systems where cache and other
resources are shared and very limited. Prior work has proposed mitigating this
issue by selectively enabling and disabling prefetcher components during
runtime. Traditional approaches proposed heuristics that are hard to scale with
increasing core and prefetcher component counts. More recently, deep
reinforcement learning was proposed. However, it is too expensive to deploy in
real-world many-core systems. In this work, we propose a new phase-based
methodology for training a lightweight supervised learning model to manage
composite prefetchers at runtime. Our approach improves the performance of a
state-of-the-art many-core system by up to 25% and by 2.7% on average over its
default prefetcher configuration. | Erika S. Alcorta, Mahesh Madhav, Scott Tetrick, Neeraja J. Yadwadkar, Andreas Gerstlauer | 2023-07-17T16:52:13Z | http://arxiv.org/abs/2307.08635v1 | # Lightweight ML-based Runtime Prefetcher Selection on Many-core Platforms
###### Abstract
Modern computer designs support composite prefetching, where multiple individual prefetcher components are used to target different memory access patterns. However, multiple prefetchers competing for resources can drastically hurt performance, especially in many-core systems where cache and other resources are shared and very limited. Prior work has proposed mitigating this issue by selectively enabling and disabling prefetcher components during runtime. Traditional approaches proposed heuristics that are hard to scale with increasing core and prefetcher component counts. More recently, deep reinforcement learning was proposed. However, it is too expensive to deploy in real-world many-core systems. In this work, we propose a new phase-based methodology for training a lightweight supervised learning model to manage composite prefetchers at runtime. Our approach improves the performance of a state-of-the-art many-core system by up to 25% and by 2.7% on average over its default prefetcher configuration.
## I Introduction
Hardware data prefetching can reduce memory latency and significantly improve the performance of many applications, provided it accurately and promptly detects their memory access patterns. However, individual prefetchers typically target specific or limited sets of patterns [12]. To address this limitation, modern processors combine multiple prefetcher components, thus covering a wider range of access patterns than monolithic prefetchers [12]. Increasing the number of prefetches in the system can lead to higher contention and pollution of shared resources like memory bandwidth and cache space [9, 14]. Furthermore, in multi-core systems, enabling prefetching can sometimes hurt performance depending on the workload [10]. Consequently, modern processors offer users the ability to adjust prefetcher components through registers [7, 14], but selecting when to enable or disable prefetcher components for any program application is a challenging task.
The variety of dynamic workload behaviors in program applications is very large, and the best prefetcher selection may change depending on the workload behavior. For example, Fig. 1 shows the execution time of 10 programs from the SPEC CPU Int Rate 2017 multi-programmed benchmark suite [18] running on a many-core hardware platform with three different prefetcher configurations: _ON_, _OFF_, and _Def. ON_ enables all prefetcher components; _OFF_ disables all prefetcher components; and, _Def_ sets the default configuration, which enables one prefetcher. The figure depicts that the best selection is different for each program. While this example only compares three configurations, modern systems offer more options, increasing the complexity of runtime decisions that map workload behaviors to prefetcher selection.
Previous research has explored various techniques for tuning prefetcher components at runtime to maximize performance, a task commonly known as runtime adaptive prefetching. Some studies use heuristics and explore all or a subset of configurations during program execution to make decisions [11, 14, 15]. However, exploring configurations during runtime misses performance opportunities and does not scale with increasing configurations and core counts. More recent works have used machine learning (ML) models to train a policy offline and evaluate it online [7, 9]. However, they do not provide sufficient proof that their models are generalized enough to handle unseen workloads, and their proposed models are too expensive to implement on a real-world platform. Furthermore, none of these runtime adaptive prefetching studies have investigated many-core platforms, which present unique challenges that do not manifest at lower core counts.
In this work, we propose a runtime prefetcher selection approach that uses a low-overhead machine learning model to enable or disable prefetcher components based on their expected performance improvement on a state-of-the-art many-core platform. We collect hardware counter data to monitor the system workload and propose a new methodology that uses phase classification [1] and supervised learning to correlate workload phases with the best selection of prefetcher components. We demonstrate the effectiveness of our approach by deploying a software-based version on a state-of-the-art cloud-scale hardware platform. Our approach can also be implemented in hardware on future processor designs.
We summarize the contributions of this paper as follows:
Fig. 1: Speedup (higher is better) of SPEC CPU Int Rate 2017 benchmarks with all prefetchers disabled (OFF) and all prefetchers enabled (ON) compared to the default config (Def). The best configuration depends on the workload.
1. We propose phase classification to group similar workload behaviors and find the best prefetcher selection for each phase using a supervised learning formulation.
2. We implement a decision tree model that is lightweight, requiring only 42 bytes of storage, yet accurate enough to improve the execution time of cloud workloads running on a 160-core AmpereOne, a state-of-the-art many-core platform.
3. We demonstrate our model's ability to generalize and improve the performance of workloads that were not seen during training. Our evaluation includes data collected from diverse multi-programmed and multi-threaded workloads. Our results show that our model can improve the performance of new workloads by up to 25% over the platform's default prefetcher and by 2.7% on average.
## II Related Work
Prior work has proposed numerous approaches to reduce the contention generated by prefetchers in multi-core systems. Some work is concerned with extending the design of prefetchers [3, 4, 5, 6, 13, 16, 19] while others have proposed prefetcher-aware cache insertion and eviction policies to manage cache contention [8, 19]. While these solutions focus on tuning an individual prefetcher, our approach is concerned with managing the components of composite prefetchers.
Various studies in composite prefetching management propose heuristics to select prefetchers at runtime [11, 14, 15]. These approaches study different metrics to rank prefetcher configurations based on performance [11, 15] or other heuristics [14]. The ranking is obtained during execution time by performing an exhaustive search that executes every prefetcher configuration for one sample. The best-ranked configuration is selected for a pre-determined period of time. This process is repeated after either a fixed time window [14] or a phase change, defined by a fixed percentage change in system performance [11] or annotated in code [15]. However, exhaustively searching multiple configurations during runtime is not scalable as the number of prefetchers and applications increases. Additionally, the time spent searching necessarily misses optimization opportunities. Lastly, ranking prefetcher configurations based on the performance of a single sample fails to acknowledge short-term performance variations in workloads [1], which may lead to selecting the wrong configuration.
More recent work has introduced ML-based composite prefetcher management approaches. These models eliminate the need to search the configuration space exhaustively by learning to generalize from fewer samples. In [7], the authors proposed formulating the problem with contextual bandits. They train one model per prefetcher component while other prefetchers are always on. However, they do not evaluate the coordination of prefetchers, since the models are not enabled simultaneously. Additionally, they found that they never need to disable some prefetchers in their quad-core system. This is not the case in many-core systems, where it is sometimes beneficial to disable all prefetchers, as was shown in Fig. 1. In [9], the authors propose using deep reinforcement learning (RL) to coordinate multiple prefetchers. However, deploying deep RL models on real-world systems is very expensive in terms of training, power, storage, and latency costs. In contrast, we propose a supervised learning model with minimal costs that can be either implemented in existing runtime management systems or easily deployed in hardware. Moreover, these studies [7, 9] only considered multi-programmed workloads and did not investigate whether their models can improve the performance of unseen (i.e., not used for training) program applications. Our work demonstrates that our proposed lightweight runtime prefetcher selection model can generalize its predictions to unseen and multi-threaded workloads.
## III Prefetcher Selection Model Design
The task of selecting a prefetcher configuration during runtime with a model is represented in Fig. 2. The model aims to map a vector of hardware counter values into a prefetcher selection decision. We collect hardware counters by accessing the performance monitoring units (PMU) of the system and set the prefetcher decision through a model-specific register (MSR). This section outlines our proposed method for designing and training such a model. We start by introducing the problem formulation, followed by an explanation of our approach, which involves both offline analysis and online implementation.
### _Problem Formulation_
The goal of a prefetcher selection policy is to minimize the execution time of a workload, which we define as \(G\). The execution of a workload is represented by a trace of hardware counters, \(U\in\mathbb{R}^{T\times C}\), where \(T\) is the number of samples and \(C\) is the number of collected hardware counters. An observation of \(U\) at time \(t\) is represented as \(U_{t}\). The hardware counters are transformed into features \(X_{t}=\Omega(U_{t}),X_{t}\in\mathbb{R}^{M}\), where \(M\) is the number of features. For example, this transformation \(\Omega\) includes calculating the IPC with the _instructions_ and _cpu-cycles_ hardware counters. We use \(\rho_{t}\) to represent the IPC of a sample at time \(t\), \(\rho_{t}\in X_{t}\). We partition the goal of minimizing the execution time into smaller goals that maximize the IPC of each sample, \(\rho_{t}\), based on the observation that the average IPC is inversely proportional to the execution time.
Fig. 2: Prefetcher selection overview.
At each time step \(t\), a machine learning model, \(f\), predicts an output, \(y_{t+1}\), based on the features \(X_{t}\), with the goal of maximizing \(\rho_{t+1}\). The output is a binary vector, \(y_{t}\in\{0,1\}^{N}\), where \(N\) is the number of prefetchers, and each element in the vector indicates whether the corresponding prefetcher should be enabled or disabled.
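For illustration, a minimal sketch of the feature transform \(\Omega\) and of the prediction step is given below (Python/NumPy with a scikit-learn-style classifier). The counter names and the second feature are assumptions for illustration only, not the paper's actual counter and feature lists.

```python
import numpy as np

def omega(counters):
    """Illustrative feature transform; counter names and the MPKI feature are assumptions."""
    ipc = counters["instructions"] / counters["cpu-cycles"]
    l2_mpki = 1e3 * counters["l2d_cache_refill"] / counters["instructions"]
    # ... the remaining features would be derived analogously
    return np.array([ipc, l2_mpki])

def select_prefetchers(model, counters):
    """The model f maps X_t to a binary selection vector y in {0,1}^N, one bit per prefetcher."""
    X_t = omega(counters).reshape(1, -1)
    return model.predict(X_t)[0]          # e.g. array([1, 0, 1, 0]) for N = 4 components
```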
### _Data Analysis and Model Training_
After partitioning our goal of minimizing a workload's execution time into smaller goals that maximize the IPC of each sample of the workload, we need to define a ground truth in order to train a supervised learning model. We propose a method that analyzes data and generates labels to train a runtime prefetcher selection model in an offline fashion. Our method is depicted in Fig. 3, comprising five stages detailed below.
#### Iii-B1 Data Collection
We periodically collected hardware counter data from different workloads to later train our model. For each workload, we collected one trace of hardware counters per prefetcher configuration.
#### Iii-B2 Clustering
In order to compare the samples of different prefetcher configurations, we propose clustering similar PMU behaviors together to find phases within the workloads. Our methodology involves training a clustering model with data from only one prefetcher configuration. We chose _OFF_ as our baseline since it shows workload behaviors without the effects of prefetching. We scaled all features to a range between 0 and 1 using a min-max scaler and clustered all the workload traces of the baseline configuration using _k-means_. This produces a table of cluster centers, which is then used for phase classification.
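A minimal sketch of this clustering stage is shown below, assuming scikit-learn and that `baseline_traces` holds one \(T_{i}\times M\) feature matrix per workload recorded with all prefetchers disabled; the number of clusters is a placeholder, as it is not stated here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

def fit_phase_clusters(baseline_traces, n_phases=8):
    X = np.vstack(baseline_traces)              # pool all samples of the OFF configuration
    scaler = MinMaxScaler().fit(X)              # scale every feature to [0, 1]
    kmeans = KMeans(n_clusters=n_phases, n_init=10, random_state=0)
    kmeans.fit(scaler.transform(X))
    return scaler, kmeans.cluster_centers_      # centers are reused for phase classification
```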
#### Iii-B3 Phase Classification
Once the cluster centers have been generated using data from the baseline configuration, we use them to classify the phases of data samples in all traces. Next, we group all samples in the same phase and prefetcher configuration and calculate the average IPC per phase. This allows us to compare the performance of different prefetcher configurations across workload phases.
#### Iii-B4 Training Set Generation
We use the phase classification labels to determine the best prefetcher configuration for each sample, which we define as the configuration that yields the highest average IPC for the corresponding phase. We consider this definition as our ground truth. Associating each sample and its phase classification with the best prefetcher configuration generates a supervised training set that assigns each sample's features \(X_{t}\) to the ground truth prefetcher selection, \(y_{t}\).
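These two stages can be sketched as follows; `traces[cfg]` is assumed to be the \(T\times M\) feature matrix recorded under prefetcher configuration `cfg`, and `IPC_COL` is the assumed position of the IPC feature.

```python
import numpy as np

IPC_COL = 0   # assumed column index of the IPC feature in X_t

def classify_phases(X, scaler, centers):
    """Assign every sample to its nearest cluster center (its phase)."""
    d = ((scaler.transform(X)[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def build_training_set(traces, scaler, centers):
    cfgs = list(traces)
    phases = {c: classify_phases(traces[c], scaler, centers) for c in cfgs}
    # average IPC per (phase, configuration)
    avg_ipc = np.full((len(centers), len(cfgs)), -np.inf)
    for j, c in enumerate(cfgs):
        for p in np.unique(phases[c]):
            avg_ipc[p, j] = traces[c][phases[c] == p, IPC_COL].mean()
    best_cfg = avg_ipc.argmax(axis=1)             # ground-truth configuration per phase
    X = np.vstack([traces[c] for c in cfgs])
    y = np.concatenate([best_cfg[phases[c]] for c in cfgs])
    # y holds configuration indices; each index maps to one enable/disable bit pattern
    return X, y
```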
#### Iii-B5 Model Training
We use our generated data set to train a decision tree model. We found that it only needs four input features instead of seven while maintaining high prediction accuracy. This reduces the number of hardware counters that we need to collect during runtime.
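Continuing the sketch, the final stage fits the depth-limited tree on the generated set; which four features are retained is not listed here, so the index set is a placeholder.

```python
from sklearn.tree import DecisionTreeClassifier

# X, y come from build_training_set() in the previous sketch.
FEATURE_SUBSET = [0, 1, 2, 3]          # placeholder for the four retained features
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X[:, FEATURE_SUBSET], y)
```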
### _Runtime Implementation_
We implemented our prefetcher selection model as a program with a thread that is invoked every 100 ms. The thread accesses hardware counter values using _perf's_ system call. Then, it transforms the counters into features and performs inference on the decision tree. Finally, it writes the decision tree output to the corresponding fields in the prefetcher MSR.
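One possible shape of that runtime loop is sketched below; `read_counters` and `write_prefetcher_register` are hypothetical platform-specific helpers (wrapping the perf system call and the prefetcher control register), not real library functions, and `omega`, `FEATURE_SUBSET` and `tree` refer to the earlier sketches.

```python
import time

PERIOD_S = 0.100   # 100 ms invocation period

def prefetcher_selection_loop(tree, read_counters, write_prefetcher_register):
    while True:
        t0 = time.monotonic()
        counters = read_counters()                      # PMU deltas over the last period
        x = omega(counters)[FEATURE_SUBSET]             # same feature transform as offline
        selection = tree.predict(x.reshape(1, -1))[0]   # predicted best configuration
        write_prefetcher_register(selection)            # map selection to the control register fields
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - t0)))
```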
## IV Experimental Results
We collected data from one multi-programmed benchmark suite, SPEC CPU Int Rate 2017 [18], and two multi-threaded Java benchmark suites, DaCapo [2] and Renaissance [17], to evaluate our approach. We use SPEC CPU workloads for training and validation and DaCapo and Renaissance for testing. All workloads run on AmpereOne, a cloud-scale many-core platform with 160 ARMv8.6+ ISA cores, 2MB of L2 cache per core, 64MB of system-level cache, and 256GB of DDR5-4800 memory running Fedora Linux 36. The platform has 12 different prefetcher configurations, which can be tuned with a hardware register. For each prefetcher configuration, we collected one trace of hardware counters per workload, resulting in a total of 120 traces (12 prefetcher options \(\times\) 10 workloads). Each trace consisted of \(C=7\) hardware counters
Fig. 3: Proposed analysis to generate our runtime prefetcher selection model.
collected periodically every 100 ms with Linux's _perf_ tool. The hardware counters were transformed into \(M=7\) features. See Table I for the lists of hardware counters and features.
Fig. 4 shows the speedup of all SPEC CPU benchmarks when prefetcher selection is enabled and exploring the decision tree depth hyperparameter with depths of 1, 2, and 4. The results are normalized to the system's default prefetcher. The geomean is shown on the right side of the plot. On average, enabling system-wide runtime adaptive prefetching improves the performance of SPEC workloads by 1.9% and up to 5.5% in the best scenario.
We want to measure the ability of the model to improve performance even with system changes. For this test, we evaluated our models on the same programs but with different binaries. Specifically, we recompiled SPEC CPU benchmarks with a different compiler, gcc-12, which introduces several new code optimizations when compared to previous versions, such as improved vectorization and structure splitting. So although the same work is completed, the data access patterns may vary widely as in the case of 505.mcf. Then, we enabled prefetcher selection with the same models that were previously trained on SPEC programs compiled with gcc-10. The results are shown in Fig. 5. The best-performing decision tree has a depth of 4. We observe a similar performance improvement trend between the gcc-10 and gcc-12 experiments and demonstrate that the model still improves performance even when presented with different binary files.
We further test the performance of our model by presenting it with completely new workloads (not used for training). We ran workloads from the DaCapo and Renaissance suites. Note that in addition to being new workloads, they are multi-threaded instead of multi-programmed, written in a different language (Java), and compared to SPEC CPU they spend more time in operating system code, network stack, and synchronization (locking and snooping). We tested our best-performing decision tree with depth 4 on each suite and show our results in Fig. 6 and Fig. 7. For most of the workloads, dynamic prefetcher selection reduces the execution time, with the best scenario being 25%. However, as opposed to SPEC CPU results, some programs lose performance. Nonetheless, the geomean performance improvements for DaCapo and Renaissance suites are 1.7% and 3%, respectively. The improved performance of all these unseen workloads together is 2.7%.
A major benefit of our proposed model, as opposed to prior work, is the lightweight implementation. The decision tree has a maximum depth of 4. It requires storing 15 nodes with two parameters each: the feature ID (in our case, 2 bits for four features) and the compare value (we use 16 bits but can be reduced to 8 bits). Additionally, the eight leaf nodes require storing the prefetcher selection when true or false (4 bits in our case). The total size of our model is only 42 bytes, which makes it easy to fit on any embedded firmware or hardware deployment.
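The 42-byte figure can be checked with the following arithmetic; splitting the 4-bit selections over the true/false branches of the eight last-level nodes is an assumption based on the description above.

```python
internal_nodes = 15                      # complete binary tree of depth 4
node_bits = internal_nodes * (2 + 16)    # 2-bit feature ID + 16-bit compare value per node
leaf_bits = 8 * 2 * 4                    # 8 last-level nodes x {true, false} x 4-bit selection
print((node_bits + leaf_bits) / 8)       # 41.75 -> rounds up to 42 bytes
```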
## V Conclusion and Future Work
We proposed a lightweight model for runtime prefetcher selection for many-core platforms. It can improve the performance of unseen workloads by up to 25% and 2.7% on average over the default prefetcher.
These early results suggest that runtime prefetcher selection can be formulated as a workload-agnostic offline supervised learning problem; however, further investigation is required to
Fig. 4: Performance improvement (execution time reduction) of different decision tree model depths over the default prefetcher on SPEC CPU benchmarks.
Fig. 5: Performance improvement (execution time reduction) of different decision tree model depths over the default prefetcher on SPEC CPU benchmarks compiled with gcc-12.
Fig. 6: Performance improvement (execution time reduction) of a runtime prefetcher selection tree of depth 4 trained on SPEC CPU over the default prefetcher on DaCapo benchmarks.
Fig. 7: Performance improvement (execution time reduction) of a runtime prefetcher selection tree of depth 4 trained on SPEC CPU over the default prefetcher on Renaissance benchmarks.
determine why it performed poorly in a few benchmarks. The investigation should determine whether the problem is training coverage, i.e., the input features are in a different distribution from the training set, or the problem is workload specific, i.e., for the same set of input features, the best prefetcher selection is different depending on the running program. Our proposed approach estimates the best prefetcher selection for all the cores in the system. Future work includes investigating lightweight runtime prefetcher selection that is more practical for per-core decisions.
## Acknowledgements
This work was supported in part by Ampere Computing and NSF grant CCF-1763848.
|
2307.06618 | Learning IMM Filter Parameters from Measurements using Gradient Descent | The performance of data fusion and tracking algorithms often depends on
parameters that not only describe the sensor system, but can also be
task-specific. While for the sensor system tuning these variables is
time-consuming and mostly requires expert knowledge, intrinsic parameters of
targets under track can even be completely unobservable until the system is
deployed. With state-of-the-art sensor systems growing more and more complex,
the number of parameters naturally increases, necessitating the automatic
optimization of the model variables. In this paper, the parameters of an
interacting multiple model (IMM) filter are optimized solely using
measurements, thus without necessity for any ground-truth data. The resulting
method is evaluated through an ablation study on simulated data, where the
trained model manages to match the performance of a filter parametrized with
ground-truth values. | André Brandenburger, Folker Hoffmann, Alexander Charlish | 2023-07-13T08:35:40Z | http://arxiv.org/abs/2307.06618v2 | # Learning IMM Filter Parameters from Measurements using Gradient Descent
###### Abstract
The performance of data fusion and tracking algorithms often depends on parameters that not only describe the sensor system, but can also be task-specific. While for the sensor system tuning these variables is time-consuming and mostly requires expert knowledge, intrinsic parameters of targets under track can even be completely unobservable until the system is deployed. With state-of-the-art sensor systems growing more and more complex, the number of parameters naturally increases, necessitating the automatic optimization of the model variables. In this paper, the parameters of an interacting multiple model (IMM) filter are optimized solely using measurements, thus without necessity for any ground-truth data. The resulting method is evaluated through an ablation study on simulated data, where the trained model manages to match the performance of a filter parametrized with ground-truth values.
Data Fusion, Tracking, Optimization, Machine Learning
## I Introduction
The parametrization of complex models is a prevalent topic of discussion in many research areas. While the machine-learning community already utilizes powerful automated optimization frameworks, the classical data fusion models are often parametrized manually. In this paper, we present a method to tune the popular interacting multiple model (IMM) filter [1] solely using measurements. By defining an appropriate loss function, gradient descent can be used to update the parameters successively. A schematic overview of the algorithm is shown in Fig. 1.
While the IMM filter can already be seen as an approximator of target noise [2, 3, 4], the performance of the filter still strongly depends on its parametrization. Since the IMM filter allows for different noise characteristics in its modes, it is intuitively able to interpolate between the noise covariances of the modes. However, this comes with two major caveats. Firstly, the mode transition probabilities have to be set accordingly, as these are not automatically estimated, thus still requiring manual tuning. Secondly, using the IMM filter for noise estimation utilizes the mode weight as an interpolation scale, rather than an estimate of the target mode. This is an unwanted side-effect, as the mode probability is of interest in many
Fig. 1: An overview of the IMM filter optimization strategy. Based on its parametrization \(\mathbf{\theta}\), the IMM filter generates tracks (top) which are utilized by the gradient descent optimizer (bottom). The optimizer successively updates the IMM filter parameters with the goal to minimize the negative log-likelihood of the measurements.
applications, for instance when actions are triggered based on detected maneuvers.
A classical approach to filter optimization is the method of _expectation maximization_, where the model is optimized by successive calculation of the expected parameter likelihood, followed by its maximization [5, 6, 7]. In the literature, this optimization is realized with a potentially complex analytical calculation of the maximum, which may need to be recalculated if the underlying model or sensor architecture changes.
More recently, Barrat et al. [8] have applied convex optimization tools to train Kalman smoother parameters. While it is simpler to employ, the method also requires analytical calculations, which can impede the use of more complex architectures.
Similarly, Abbeel et al. [9] have shown that an extended Kalman filter (EKF) can be trained on data using coordinate ascent and Greenberg et al. [10] utilized gradient descent to optimize a Kalman filter. While the filter can be trained on ground-truth data [9, 10], for instance using the residual error, Abbeel et al. also present a method to optimize the measurement likelihood without need for ground-truth data [9]. When applied on a real system, [9] was able to outperform hand-tuned parametrizations.
Closely related, Xu et al. [11] presented _EKFNet_, which fits the parameters of an EKF using gradient descent, analogous to the optimization of a neural network. While this approach is mostly independent to changes in the sensor system, the method utilizes parameter regularization functions, which are problem specific and need to be defined before the method can be employed.
In addition, Coskun et al. [12] proposed the online generation of the Kalman filter matrices using long short-term memory (LSTM) networks. While the method outperforms traditional Kalman filters and LSTM networks, generating the noise covariances with neural networks can lead to arbitrary and unexpected filter behavior. This is also the case when neural networks are directly applied to replace parts of a filter or even the complete filter [13, 14, 15]. While this is the state of the art for very complex environments and dynamical latent states [16, 17], the trained neural network is naturally opaque. This means that significant effort has to be spent to estimate the behavior of the model, which can lead to issues in explainability and makes the model behavior difficult to certify. Furthermore, these methods typically require training on ground-truth data, which is only rarely available.
In this paper, we present a method to learn the parameters of an IMM filter from measurements using gradient descent. Similar to [9], this method utilizes a loss based on the measurement likelihood, but applies it to the more difficult IMM filter. This loss function requires neither regularization nor ground-truth data. The optimization of the filter is independent of the sensor system, only requiring the dynamics and measurement models to be differentiable. The learned parameters are traditional variables of the IMM filter, which can be easily interpreted and confirmed by operators. Experiments show that the optimized IMM filter performs better than an optimized single-mode Kalman filter. In addition, it is shown that the optimized filter matches the performance of a filter parametrized with the ground-truth values.
## II Interacting Multiple Model Filter
The Kalman filter [18] is the foundation of many tracking algorithms. Given a target state \(\mathbf{x}_{t}\in\mathbb{R}^{n}\) at time step \(t\), the Kalman filter assumes linear state dynamics and a linear measurement process, both corrupted by Gaussian noise. For non-linear differentiable state dynamics, the EKF can be utilized, where the state transition is defined as
\[\mathbf{x}_{t+1}=f(\mathbf{x}_{t})+\mathbf{u}_{t}\,,\quad\mathbf{u}_{t}\sim\mathcal{N}\left( 0,\mathbf{Q}\right)\,, \tag{1}\]
for a differentiable state transition function \(f\) and Gaussian process noise \(\mathbf{u}_{t}\) parametrized by the covariance matrix \(\mathbf{Q}\). Furthermore, a measurement \(\mathbf{z}_{t}\in\mathbb{R}^{l}\) of the true state can be observed at each time step \(t\), where
\[\mathbf{z}_{t}=h(\mathbf{x}_{t})+\mathbf{v}_{t}\,,\quad\mathbf{v}_{t}\sim\mathcal{N}\left(0, \mathbf{R}\right)\,, \tag{2}\]
for a differentiable measurement model \(h\) and measurement noise \(\mathbf{v}_{t}\) with corresponding covariance matrix \(\mathbf{R}\). The construction of a Bayesian filter on these assumptions leads to the EKF equations
\[\mathbf{F}_{t} =\frac{\partial f}{\partial\mathbf{x}}\left(\mathbf{x}_{t-1|t-1}\right) \tag{3}\] \[\mathbf{x}_{t|t-1} =f(\mathbf{x}_{t-1|t-1})\] (4) \[\mathbf{P}_{t|t-1} =\mathbf{F}_{t}\mathbf{P}_{t-1|t-1}\mathbf{F}_{t}^{T}+\mathbf{Q} \tag{5}\]
\[\mathbf{H}_{t} =\frac{\partial h}{\partial\mathbf{x}}\left(\mathbf{x}_{t|t-1}\right) \tag{6}\] \[\mathbf{S}_{t} =\mathbf{H}_{t}\mathbf{P}_{t|t-1}\mathbf{H}_{t}^{T}+\mathbf{R}\] (7) \[\mathbf{K}_{t} =\mathbf{P}_{t|t-1}\mathbf{H}_{t}^{T}\mathbf{S}_{t}^{-1} \tag{8}\]
\[\mathbf{x}_{t|t}=\mathbf{x}_{t|t-1}+\mathbf{K}_{t}\left(\mathbf{z}_{t}-h(\mathbf{x}_{t|t-1})\right) \tag{9}\] \[\mathbf{P}_{t|t}=\left(\mathbf{I}-\mathbf{K}_{t}\mathbf{H}_{t} \right)\mathbf{P}_{t|t-1}\left(\mathbf{I}-\mathbf{K}_{t}\mathbf{H}_{t}\right) ^{T}+\mathbf{K}_{t}\mathbf{R}\mathbf{K}_{t}^{T}\,, \tag{10}\]
for a state estimate \(\mathbf{x}\) with covariance \(\mathbf{P}\), where the subscript \(\bullet_{t|t}\) denotes the posterior at time \(t\) and \(\bullet_{t+1|t}\) the corresponding prediction. Note that Eq. (10) is the _Joseph form_ of the covariance update, which is stable with regard to suboptimal Kalman gains \(\mathbf{K}_{t}\) [1, p. 206]. This is required for iterative parameter optimization, as the Kalman gain is suboptimal by definition due to the incorrect parameters of the filter before training.
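As an illustration, a direct NumPy transcription of Eqs. (3)-(10), not the implementation used in this work, could look as follows.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF prediction/update cycle, Eqs. (3)-(10)."""
    F = F_jac(x)                                    # Eq. (3)
    x_pred = f(x)                                   # Eq. (4)
    P_pred = F @ P @ F.T + Q                        # Eq. (5)
    H = H_jac(x_pred)                               # Eq. (6)
    S = H @ P_pred @ H.T + R                        # Eq. (7)
    K = P_pred @ H.T @ np.linalg.inv(S)             # Eq. (8)
    x_post = x_pred + K @ (z - h(x_pred))           # Eq. (9)
    I_KH = np.eye(len(x)) - K @ H
    P_post = I_KH @ P_pred @ I_KH.T + K @ R @ K.T   # Eq. (10), Joseph form
    return x_post, P_post, h(x_pred), S             # also return predicted measurement and S
```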
A popular extension to the Kalman filter is the IMM filter. By defining multiple state transition models \(f^{i}\) it is possible to model different dynamical modes \(i=1,\ldots,m\) of the system. Typically this corresponds to maneuvering modes of a target, but it can also be applied to system
state estimation [19]. At each time step the IMM filter assumes the system dynamics to be defined as
\[P(i_{t+1}=j|i_{t}=i)=p^{ij} \tag{11}\] \[\boldsymbol{x}_{t+1}=f^{i_{t}}(\boldsymbol{x}_{t})+\boldsymbol{u} _{t}\,,\quad\boldsymbol{u}_{t}\sim\mathcal{N}\left(0,\mathbf{Q}^{i_{t}}\right)\,. \tag{12}\]
Following Eq. (11), the subsequent dynamics mode \(i_{t+1}\) is sampled according to the Markov chain defined by the transition probabilities \(p^{ij}\), relative to the currently active dynamics mode \(i=i_{t}\). Consequently, the mode transition probabilities \(p^{ij}\) define the probability of switching from mode \(i\) to mode \(j\) over the course of a single time step. Thus, \(p^{ij}\) satisfies \(\forall i=1\ldots m:\sum_{j=1}^{m}p^{ij}=1\). Depending on the active mode, a specific state transition function \(f^{i}\) is used in the process model. As shown in Eq. (12), these different transition functions are parametrized by a mode-specific process noise covariance \(\mathbf{Q}^{i}\).
During the IMM filter iteration, the states of the individual modes are calculated separately through the parallel execution of \(m\) differently parametrized Kalman filters. For each mode \(i\), a corresponding weight \(\mu_{t}^{i}\) is assigned with respect to the mode transition probabilities. The IMM filter recursion can be expressed as
\[\mu_{t|t-1}^{j}=\sum_{i=1}^{m}p^{ij}\mu_{t-1|t-1}^{i} \tag{13}\] \[\mu_{t-1|t-1}^{i|j}=p^{ij}\frac{\mu_{t-1|t-1}^{i}}{\mu_{t|t-1}^{ j}}\] (14) \[\boldsymbol{x}_{t-1|t-1}^{0j}=\sum_{i=1}^{m}\mu_{t-1|t-1}^{i|j} \boldsymbol{x}_{t-1|t-1}^{i}\] (15) \[\mathbf{P}_{t-1|t-1}^{0j}=\sum_{i=1}^{m}\mu_{t-1|t-1}^{i|j}\Big{(} \mathbf{P}_{t-1|t-1}^{i}+\] (16) \[\left[\boldsymbol{x}_{t-1|t-1}^{i}-\boldsymbol{x}_{t-1|t-1}^{0j} \right]\Big{[}\boldsymbol{x}_{t-1|t-1}^{i}-\boldsymbol{x}_{t-1|t-1}^{0j}\Big{]} ^{T}\Big{)}\] \[\Lambda_{t}^{j}=\mathcal{N}\left(\boldsymbol{z}_{t};h(\boldsymbol {x}_{t|t-1}^{0j}),\mathbf{S}_{t}^{0j}\right)\] (17) \[\mu_{t|t}^{j}=\frac{1}{c}\Lambda_{t}^{j}\mu_{t|t-1}^{j}\qquad c= \sum_{j=1}^{m}\Lambda_{t}^{j}\mu_{t|t-1}^{j} \tag{18}\]
\[\boldsymbol{x}_{t|t}=\sum_{j=1}^{m}\mu_{t|t}^{j}\boldsymbol{x}_{t|t}^{j} \tag{19}\] \[\mathbf{P}_{t|t}=\sum_{j=1}^{m}\mu_{t|t}^{j}\Big{(}\mathbf{P}_{t| t}^{j}+\left[\boldsymbol{x}_{t|t}^{j}-\boldsymbol{x}_{t|t}\right]\Big{[} \boldsymbol{x}_{t|t}^{j}-\boldsymbol{x}_{t|t}\Big{]}^{T}\Big{)}\,, \tag{20}\]
where \(\mathbf{S}_{t}^{0j}\) corresponds to the predicted innovation covariance with respect to \(\boldsymbol{x}_{t-1|t-1}^{0j}\) (see Eq. (7)). A detailed explanation of the IMM framework can be found in [1, pp. 453-457]. Note that Eq. (19) and Eq. (20) only have to be computed when a condensed state representation is required, but are not required for the IMM filter recursion.
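Similarly, Eqs. (13)-(20) can be transcribed into a compact sketch; `filters[j]` is assumed to perform one mode-matched filter step (for example, `ekf_step` above with the mode-specific \(f^{j}\) and \(\mathbf{Q}^{j}\)) and to return the posterior state, posterior covariance, predicted measurement and innovation covariance.

```python
import numpy as np

def imm_step(xs, Ps, mu, z, p_trans, filters):
    """xs[i], Ps[i], mu[i]: per-mode states, covariances and weights; p_trans[i, j] = p^{ij}."""
    m = len(mu)
    mu_pred = p_trans.T @ mu                                    # Eq. (13)
    mix = p_trans * mu[:, None] / mu_pred[None, :]              # Eq. (14), mixing weights
    xs_new, Ps_new, lik = [], [], np.zeros(m)
    for j in range(m):
        x0 = sum(mix[i, j] * xs[i] for i in range(m))           # Eq. (15), mixed input state
        P0 = sum(mix[i, j] * (Ps[i] + np.outer(xs[i] - x0, xs[i] - x0))
                 for i in range(m))                             # Eq. (16)
        x_post, P_post, z_pred, S = filters[j](x0, P0, z)
        nu = z - z_pred
        lik[j] = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / np.sqrt(
            np.linalg.det(2.0 * np.pi * S))                     # Eq. (17)
        xs_new.append(x_post)
        Ps_new.append(P_post)
    mu_post = lik * mu_pred
    mu_post = mu_post / mu_post.sum()                           # Eq. (18)
    x = sum(mu_post[j] * xs_new[j] for j in range(m))           # Eq. (19)
    P = sum(mu_post[j] * (Ps_new[j] + np.outer(xs_new[j] - x, xs_new[j] - x))
            for j in range(m))                                  # Eq. (20)
    return xs_new, Ps_new, mu_post, x, P
```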
## III IMM Parameter Optimization
Each optimization technique requires an objective- or loss-function. If ground-truth data is supplied, a loss can be easily implemented through the mean-squared-error (MSE) between the ground-truth and the filter outputs. In reality, however, ground-truth data is only rarely available and the true state can only be partially observed through noisy measurements. While [11] has shown that single-mode models can also be optimized with the MSE loss between the measurements and filter output, the optimization of the IMM filter is more challenging. Specifically, it has to be ensured that the training does not converge to a single-mode representation, i.e. \(\forall i:p^{ii}<1\) and \(\forall i\neq j:\mathbf{Q}^{i}\neq\mathbf{Q}^{j}\).
Consequently, the loss should not only be defined on the error with respect to the state, but must also contain information about the distinct modes. In our approach, this is achieved by utilizing the measurement likelihood with respect to the predicted measurement. Namely, the filter predicts the moment-matched measurement distribution
\[\hat{\boldsymbol{x}}_{t|t-1}=\sum_{i=1}^{m}\mu_{t|t-1}^{i} \boldsymbol{x}_{t|t-1}^{i} \tag{21}\] \[\boldsymbol{\nu}_{t|t-1}^{i}=h\left(\boldsymbol{x}_{t|t-1}^{i} \right)-h\left(\boldsymbol{\hat{x}}_{t|t-1}\right)\] (22) \[\hat{\mathbf{S}}_{t|t-1}=\sum_{i=1}^{m}\mu_{t|t-1}^{i}\left( \mathbf{S}_{t}^{i}+\boldsymbol{\nu}_{t|t-1}^{i}\left(\boldsymbol{\nu}_{t|t-1}^ {i}\right)^{T}\right)\,, \tag{23}\]
where \(h\) corresponds to the measurement function and \(\mathbf{S}_{t}^{i}\) is the covariance of the expected measurement in mode \(i\) (see Eq.(7)). Given the model parameters \(\boldsymbol{\theta}\) and using this distribution, the likelihood of the observed measurement is evaluated
\[\Psi_{t}\left(\boldsymbol{\theta}\right)=\mathcal{N}\left(\boldsymbol{z}_{t};h\left(\hat{\boldsymbol{x}}_{t|t-1}\left(\boldsymbol{\theta}\right)\right),\hat{\mathbf{S}}_{t|t-1}\left(\boldsymbol{\theta}\right)\right)\,. \tag{24}\]
The joint likelihood over multiple time steps can be calculated through the multiplication of the corresponding likelihoods. Since the product of probabilities can quickly approach zero, a log-space transform can be applied to keep the values numerically representable for longer time sequences and particularly suboptimal parameter initializations. This leads to a negative-log-likelihood loss
\[\mathcal{L}\left(\boldsymbol{\theta}\right)=-\log\prod_{t}\Psi_{t}\left( \boldsymbol{\theta}\right)=-\sum_{t}\log\Psi_{t}\left(\boldsymbol{\theta} \right)\,. \tag{25}\]
While the derivation of the loss is focused on the measurement likelihood and thus directly encodes the effects of the measurement model, the state prediction is also influenced by the mode transition and process model. Therefore, it is possible to optimize all model parameters with the single loss function \(\mathcal{L}\left(\boldsymbol{\theta}\right)\).
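For illustration, a NumPy sketch of Eqs. (21)-(25) is given below; `predict_steps` is assumed to provide, per time step, the per-mode predicted states, innovation covariances and predicted mode weights produced by the filter under its current parametrization.

```python
import numpy as np

def measurement_nll(zs, predict_steps, h):
    """Negative log-likelihood of the measurements, Eq. (25)."""
    nll = 0.0
    for z, (x_pred, S, mu_pred) in zip(zs, predict_steps):
        m = len(mu_pred)
        x_hat = sum(mu_pred[i] * x_pred[i] for i in range(m))           # Eq. (21)
        z_hat = h(x_hat)
        nu = [h(x_pred[i]) - z_hat for i in range(m)]                   # Eq. (22)
        S_hat = sum(mu_pred[i] * (S[i] + np.outer(nu[i], nu[i]))
                    for i in range(m))                                  # Eq. (23)
        r = z - z_hat
        nll += 0.5 * (r @ np.linalg.solve(S_hat, r)
                      + np.log(np.linalg.det(2.0 * np.pi * S_hat)))     # Eqs. (24), (25)
    return nll
```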
Due to its simplicity and broad applicability, gradient descent is one of the most prevalent methods for optimizing differentiable models. This is further reinforced by
the easy implementation through widely available autodifferentiation frameworks. In each training iteration \(k\), the parameters \(\mathbf{\theta}_{k}\) of the model are updated according to
\[\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k}-\eta\nabla_{\mathbf{\theta}_{k}}\mathcal{L}\left(\mathbf{\theta}_{k}\right)\,, \tag{26}\]
where \(\eta\) is the learning rate.
As is visible in Section II, most operations in the IMM filter are linear. In addition, the state dynamics, the measurement model and the evaluation of the likelihood of a multivariate Gaussian distribution are differentiable. Thus, the gradient \(\nabla_{\mathbf{\theta}}\mathcal{L}\left(\mathbf{\theta}\right)\) of the loss with respect to the model parameters \(\mathbf{\theta}\) can be calculated as \(\frac{\partial\mathcal{L}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}}\) by repeated application of the chain rule. This process can also be handled automatically by utilizing autodifferentiation. Using autodifferentiation adds to the flexibility of the method for practical operation as well as for optimization, since the dynamics and measurement models can be adapted without requiring manual recalculation of the gradient functions.
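A minimal training loop built on such a framework (here PyTorch, as one possible choice) could look as follows. `imm_nll` is assumed to be a re-implementation of the filter recursion and of Eq. (25) in differentiable torch operations, `training_traces` holds the measurement sequences of the training split, and the log/logit parametrization keeping \(\sigma>0\) and \(p^{ii}\in(0,1)\) is an added design choice, not part of the method description above.

```python
import torch

# Unconstrained parameters; exp/sigmoid map them to sigma_v, sigma_r and p^{ii}.
log_sigma_v  = torch.tensor([0.0, 2.5], requires_grad=True)   # per-mode process noise
log_sigma_r  = torch.tensor(1.0, requires_grad=True)          # measurement noise
logit_p_stay = torch.tensor([3.0, 3.0], requires_grad=True)   # self-transition probabilities

optimizer = torch.optim.Adam([log_sigma_v, log_sigma_r, logit_p_stay],
                             lr=2e-2, amsgrad=True)           # AMSGrad, as used in Sec. IV

for epoch in range(1000):
    optimizer.zero_grad()
    loss = sum(imm_nll(z_trace,
                       sigma_v=torch.exp(log_sigma_v),
                       sigma_r=torch.exp(log_sigma_r),
                       p_stay=torch.sigmoid(logit_p_stay))
               for z_trace in training_traces)                 # Eq. (25) over all training traces
    loss.backward()                                            # gradients via autodifferentiation
    optimizer.step()                                           # Eq. (26)
```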
## IV Experimental Results
To assess the capabilities of the method, 100 IMM filters with linear state dynamics and measurement models are trained on simulated datasets. In addition, the quality of the loss function is evaluated and the performance of an optimized IMM filter is compared to a trained Kalman filter.
### _Data Generation_
Each of the 100 datasets consists of 60 trajectories, for each of which 120 positional measurements are generated. In each dataset, the trajectories are generated with identical parameters, which are sampled uniformly at random prior to the trajectory generation. The full state vector consists of position and velocity in two dimensions. The used kinematic model is the 2-dimensional discretized white noise acceleration model [1, p. 270], which is defined for each mode \(i\) as
\[\mathbf{F}=\begin{bmatrix}1&\tau&0&0\\ 0&1&0&0\\ 0&0&1&\tau\\ 0&0&0&1\end{bmatrix} \tag{27}\] \[\mathbf{Q}^{i}=\left(\sigma_{v}^{i}\right)^{2}\begin{bmatrix} \frac{1}{3}\tau^{3}&\frac{1}{2}\tau^{2}&0&0\\ \frac{1}{2}\tau^{2}&\tau&0&0\\ 0&0&\frac{1}{3}\tau^{3}&\frac{1}{2}\tau^{2}\\ 0&0&\frac{1}{2}\tau^{2}&\tau\end{bmatrix}\,. \tag{28}\]
In addition, a linear measurement model is utilized, s.t.
\[\mathbf{H}=\begin{bmatrix}1&0&0&0\\ 0&0&1&0\end{bmatrix}\,,\quad\mathbf{R}=\begin{bmatrix}\sigma_{r}^{2}&0\\ 0&\sigma_{r}^{2}\end{bmatrix}\,. \tag{29}\]
The initial parameters are sampled uniformly at random from the intervals
\[\sigma_{v}^{0}\in[10^{-3},0.98]\,,\quad\sigma_{v}^{1}\in[9.81,49.1 ]\,, \tag{30}\] \[p^{ii}\in[0.95,0.999]\,,\quad\quad\sigma_{r}\in[1,25]\,, \tag{31}\]
where \(\sigma_{v}^{i}\) corresponds to the process noise parameter of dynamics mode \(i\) and \(\sigma_{r}\) sets a diagonal measurement standard deviation. The range of the process noise magnitude is set s.t. the non-maneuvering state generates peak accelerations in the order of 0.1 g, while 1-5 g are allowed when the target is maneuvering (see [1, p. 270]). The simulation step interval is \(\tau=1\,\mathrm{s}\) and the trajectories are generated according to Eq.(11) and Eq. (12). An example trajectory is shown in Fig. 2. In total, the parameters of the utilized model can be summarized as \(\mathbf{\theta}=\{\sigma_{v}^{0},\sigma_{v}^{1},p^{00},p^{11},\sigma_{r}\}\).
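For illustration, the data generation can be sketched as follows; the default parameter values are placeholders drawn from within the ranges of Eqs. (30) and (31).

```python
import numpy as np

def simulate_trajectory(T=120, tau=1.0, sigma_v=(0.5, 20.0),
                        p_stay=(0.98, 0.98), sigma_r=10.0, rng=None):
    """One trajectory following Eqs. (11), (12) with the models of Eqs. (27)-(29)."""
    if rng is None:
        rng = np.random.default_rng(0)
    F = np.array([[1, tau, 0, 0], [0, 1, 0, 0], [0, 0, 1, tau], [0, 0, 0, 1]], dtype=float)
    q = np.array([[tau**3 / 3, tau**2 / 2], [tau**2 / 2, tau]])
    Q = [s**2 * np.block([[q, np.zeros((2, 2))], [np.zeros((2, 2)), q]]) for s in sigma_v]
    H = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float)
    x, mode = np.zeros(4), 0
    states, modes, zs = [], [], []
    for _ in range(T):
        if rng.random() > p_stay[mode]:                                  # Eq. (11)
            mode = 1 - mode
        x = F @ x + rng.multivariate_normal(np.zeros(4), Q[mode])        # Eq. (12)
        z = H @ x + sigma_r * rng.standard_normal(2)                     # Eqs. (2), (29)
        states.append(x); modes.append(mode); zs.append(z)
    return np.array(states), np.array(modes), np.array(zs)
```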
### _Loss Evaluation_
To analyze the quality of the loss function, it is evaluated for different IMM parameters in the proximity of the dataset parameters. In each experiment, only one variable is altered, while all other parameters are fixed to the ground truth. Consequently, the resulting graphs in Fig. 3 correspond to projections of the loss function onto each parameter dimension. It is visible that the local minima of the loss function are located reasonably close to the ground-truth parameters of the dataset. Furthermore, the shown projections are smooth and mostly convex, indicating that the gradient descent optimizer is well suited.
The local minima of the posterior RMSE with respect to the ground-truth trajectories are similarly close to the minima of the loss function, with the exception of the process noise and mode transition probability of the first mode. Here it is important to underline that this evaluation is conducted on projections of the loss function and thus only provides partial insight into its general shape. In particular, these projections do not capture the correlations that exist between the variables, such as the process noise and the mode transition probabilities. In addition, since only a limited number of trajectories are generated, sampling error can also have an effect. Still, it seems that underestimating the process noise is preferable for minimizing the RMSE on this dataset.
### _Training_
Each dataset is split \(1\,:\,1\) into a training and a test set. Thus, the filter is trained on 1 hour of data (3600 measurements) in total. The model is initialized with parameters that are drawn analogously to the dataset variables following Eq. (30) and Eq. (31). In the following ablation study, whenever a model parameter is not trained, it is initialized with the true value used for dataset generation. The training is performed over \(K=1000\) epochs using the AMSGrad optimizer [20] with a learning rate of \(\eta=2.0\times 10^{-2}\). The training takes approximately three minutes on four CPU cores with a \(2.6\,\mathrm{GHz}\) clock frequency.
Table I shows the method performance relative to the initial parameters and to a filter based on the parameters that generated the corresponding dataset. Furthermore, an ablation study over the combinations of training either the measurement noise or the process noise, as well as a comparison to a single-mode Kalman filter, is made. In all cases, the state estimation quality of the model improves compared to the initial parameters, most significantly by 16% for the two-mode IMM filter where all parameters are trained. In a direct comparison, optimizing the measurement noise has a stronger impact than optimizing the process noise parameters. However, the optimization of the process model proves to be beneficial for the estimation of the mode probabilities, where it enables the model to outperform the filter with ground-truth parameters by 7% in the estimation of the mode weights. A possible explanation for this improvement is the dissimilarity between the modes of the optimized process noise parameters, which appears larger than in the ground truth. Therefore, the filter may estimate the individual modes with higher confidence. In combination with the sub-optimality of the IMM filter, this can explain the improvement over the filter parametrized with the true variables [21].
In addition, Table I shows that a single-mode Kalman filter can also be trained using this approach. For this setting, the datasets are restricted to a single mode. Since these experiments consider linear state dynamics and measurement models, the model with true parameters produces optimal results up to a sampling error. Remarkably, the trained filter closely matches the performance of this optimal filter.
\begin{table}
\begin{tabular}{c c c c c c}
Number of modes & Train \(\mathbf{Q}^{i}\) and \(p^{ij}\) & Train \(\mathbf{R}\) & Metric & Trained model vs. filter before training & Trained model vs. filter with true parameters \\ \hline
\multirow{4}{*}{2} & \multirow{4}{*}{Yes} & \multirow{4}{*}{Yes} & State Prediction RMSE & -16.07\% & -0.53\% \\
 & & & State Posterior RMSE & -15.30\% & -0.78\% \\
 & & & Mode Prediction MAE & -07.01\% & -6.99\% \\
 & & & Mode Posterior MAE & -44.68\% & -6.52\% \\ \hline
\multirow{4}{*}{2} & \multirow{4}{*}{Yes} & \multirow{4}{*}{No} & State Prediction RMSE & -3.67\% & -0.49\% \\
 & & & State Posterior RMSE & -3.87\% & -0.76\% \\
 & & & Mode Prediction MAE & -23.87\% & -7.29\% \\
 & & & Mode Posterior MAE & -24.50\% & -7.01\% \\ \hline
\multirow{4}{*}{2} & \multirow{4}{*}{No} & \multirow{4}{*}{Yes} & State Prediction RMSE & -13.46\% & -0.04\% \\
 & & & State Posterior RMSE & -11.42\% & -0.03\% \\
 & & & Mode Prediction MAE & -23.57\% & +0.18\% \\
 & & & Mode Posterior MAE & -28.78\% & +0.29\% \\ \hline
\multirow{2}{*}{1} & \multirow{2}{*}{Yes} & \multirow{2}{*}{Yes} & State Prediction RMSE & -8.11\% & -0.02\% \\
 & & & State Posterior RMSE & -7.16\% & +0.07\% \\ \hline
\multirow{2}{*}{1} & \multirow{2}{*}{Yes} & \multirow{2}{*}{No} & State Prediction RMSE & -3.67\% & -0.02\% \\
 & & & State Posterior RMSE & -2.70\% & +0.00\% \\ \hline
\multirow{2}{*}{1} & \multirow{2}{*}{No} & \multirow{2}{*}{Yes} & State Prediction RMSE & -7.07\% & -0.01\% \\
 & & & State Posterior RMSE & -6.37\% & +0.02\% \\
\end{tabular}
\end{table} TABLE I: Performance comparison of method ablations. Each ablation is evaluated on 100 differently parametrized datasets. The entries show the change in root-mean-squared-error (RMSE) for the state and the mean-absolute-error (MAE) for the mode probability estimate. Each ablation is compared to the performance of the filter before training and to a filter that is parametrized with the true dataset variables. Negative values indicate an improvement from the baselines.
Fig. 2: A sample from the datasets. The ground-truth trajectory is shown on the left and starts at the coordinate origin. The mode switch is highlighted by the red circle. The ground-truth target motion mode and the absolute acceleration (process noise) are displayed on the right.
### _Comparison of IMM and Single-Mode Kalman Filter_
In contrast to [9], [11], the proposed approach utilizes an IMM filter. To assess the necessity of optimizing an IMM filter versus a Kalman filter on data that features multiple motion modes, experiments for a direct comparison have been conducted. As shown in Table II, the optimized IMM filter outperforms the optimized Kalman filter by more than 16% in all settings. It is clearly visible, that the relative performance of the Kalman filter and the IMM filter is not affected by enabling or disabling the optimization of the measurement model. This is expected, as the measurement models of the IMM filter and Kalman filter are identical. Altogether, this indicates that the measurement models of the two filters converge similarly. The optimization without the motion model is clearly in favor of the IMM filter which outperforms the Kalman filter by more than 22%. While the IMM filter is initialized with the correctly parametrized two-mode motion model, it is crucial to note that the Kalman filter is initialized with a single mode corresponding to the non-maneuvering mode. Since the motion models are not optimized, the Kalman filter is not able to improve on this faulty initialization. In summary, the observations in this experiment are twofold. Firstly, the results confirm the importance of utilizing an IMM filter on data that is generated with multiple motion models. Secondly, it is shown that the optimization of a single-mode Kalman filter can help to alleviate the effects of modeling errors, such as violations of the assumptions on the target motion model.
The results of the experiments clearly show that the proposed strategy and the formulated loss function are well suited for IMM filter optimization. Furthermore, the resulting optimized filter matches the track accuracy of the ground-truth parameters. In addition, the optimized filter shows a significant improvement regarding motion mode prediction.
\begin{table}
\begin{tabular}{c c|c|c} Train \(\mathbf{Q}^{i}\) and \(p^{ij}\) & Train \(\mathbf{R}\) & Metric & IMM filter vs. Kalman filter \\ \hline Yes & Yes & State Prediction RMSE & -17.33\% \\ & & State Posterior RMSE & -16.88\% \\ \hline Yes & No & State Prediction RMSE & -17.34\% \\ & & State Posterior RMSE & -16.83\% \\ \hline No & Yes & State Prediction RMSE & -29.33\% \\ & & State Posterior RMSE & -22.74\% \\ \hline \end{tabular}
\end{table} TABLE II: Direct comparison between a trained Kalman filter and the ground-truth IMM filter on targets with two motion states. The IMM filter outperforms the Kalman filter in all cases. Note for the last comparison that the IMM filter is initialized with the true motion model, while the Kalman filter is initialized with the motion model of the first mode. Since the Kalman filter cannot improve on this initialization, the difference between the Kalman filter and the IMM filter is most apparent.
Fig. 3: A projection of the loss for different parameters. For each graph, a single variable is assessed for a fixed dataset. The corresponding ground-truth value of the respective IMM parameter in the analyzed dataset is marked red. The loss with respect to the IMM parameter is shown in blue and the posterior RMSE to the ground-truth trajectories is shown in orange.
## V Conclusion
The proposed method enables the tuning of IMM filter parameters based only on measurement data, without the need for hand-tuning parameters or ground-truth data. This is of significant practical importance due to the typical abundance of measurement data, whereas ground-truth data is rare at best. Furthermore, hand-tuning filter parameters can easily lead to modeling errors and degraded system performance. Through the performed experiments, a significant improvement in accuracy has been observed in comparison to mismatched (i.e. incorrect) model parameters. It is particularly encouraging that the method demonstrates a performance close to the ideal, but unrealistic, case of known parameters. This result confirms the practical viability of this method for tuning IMM tracking filters. In the scope of future work, the proposed IMM optimization approach can be applied to more complex sensor architectures, where it could also alleviate sensor degradation, potentially even while the sensor is still in service.
|
2306.09602 | Existence and Construction of a Gröbner Basis for a Polynomial Ideal | This extended abstract gives a construction for lifting a Gr\"obner basis
algorithm for an ideal in a polynomial ring over a commutative ring R under the
condition that R also admits a Gr\"obner basis for every ideal in R. | Deepak Kapur, Paliath Narendran | 2023-06-16T03:13:42Z | http://arxiv.org/abs/2306.09602v1 | # Existence and Construction of a Grobner Basis for a Polynomial Ideal+
Footnote †: Abstract of a paper presented at the workshop, _Combinatorial Algorithms in Algebraic Structures_, Europäische Akademie, Otzenhausen, West Germany, Sept. 30 - Oct. 4, 1985.
_Deepak Kapur_
_Paliath Narendran_
_Computer Science Research Branch_
_Corporate Research and Development_
_General Electric Company_
_Schenectady, NY 12345_
## 1 Introduction
We show the existence of a Grobner basis for a polynomial ideal over \(R[x_{1},\ldots,x_{n}]\) assuming the existence of a Grobner basis for every ideal over the underlying ring \(R\). We also give an algorithm for computing a Grobner basis assuming there is an algorithm to compute a Grobner basis for every ideal over \(R\). We show how most of the properties that hold for Grobner bases for ideals over \(R\) extend to Grobner bases for polynomial ideals over \(R[x_{1},\ldots,x_{n}]\). These results are structurally similar to Hilbert's Basis Theorem.
A Grobner basis of a polynomial ideal I can be defined in two ways: either as (a) any basis \(B\) of \(I\) such that every polynomial \(p\) in \(I\) reduces to \(0\) with respect to \(B\), or as (b) any basis \(B\) of \(I\) such that every polynomial in the ring has a unique normal form with respect to \(B\). We call bases satisfying (a) _weak Grobner bases_ and those satisfying (b) _strong Grobner bases_. When Buchberger introduced the concept of Grobner basis [1] for the case when \(R\) is a field, he defined them to satisfy (a). Later [2] he showed that they also satisfy condition (b). Weak Grobner bases for various kinds of rings have since been studied in [6, 7, 8, 9, 13]. In [10] we defined weak Grobner bases over first-order rings and showed their importance in theorem proving in first-order predicate calculus. Algorithms for computing strong Grobner bases have been given in [4] (for reduction rings) and [5] (for Euclidean rings). In this paper we unify these two streams and show how to construct weak (strong) Grobner bases over
\(R[x_{1},\ldots,x_{n}]\) provided weak (strong) Grobner bases can be constructed over the underlying structure \(R\).
## 2 Assumptions about the Underlying Ring R
Let \(R\) be a commutative ring with 1. We assume that for every ideal over \(R\), there exists a Grobner basis for the ideal. Let \(\rightarrow\) stand for a _reduction_ relation induced by a basis on \(R\). We will assume the following properties of \(\rightarrow\):
* A non-zero element \(a\) in \(R\)_reduces to 0_ by another element \(b\) if \(a\) is a multiple of \(b\).
* If \(a\to c\), then there must exist \(b_{1},\ldots,b_{k}\in R,k\geq 1\), such that \(a-c\in(b_{1},\ldots,b_{k})\); we then say that _a reduces to c using \(\{b_{1},\ldots,b_{k}\}\)_ in \(R\). Usually \(k=1\); in that case, we say that \(a\to c\) by \(b_{1}\).
* If an element \(a\) reduces using \(\{b_{1},\ldots,b_{k}\}\) and each \(b_{i}\) reduces by \(\{c_{1},\ldots,c_{j}\}\), then \(a\) also reduces by \(\{c_{1},\ldots,c_{j}\}\). In particular, if \(a\) reduces by \(b\) and \(b\) reduces by \(c\), then \(a\) also reduces by \(c\).
**Definition:** A basis \(B\) of an ideal \(I\) over \(R\) is a _weak Grobner basis_ iff every element in the ideal has 0 as a normal form using \(B\).
**Definition:** A basis \(B\) of an ideal \(I\) over \(R\) is a _strong Grobner basis_ iff every element in \(R\) has a unique normal form using \(B\).
A well-founded reduction relation \(\rightarrow\) induces a well-founded ordering on \(R\) as follows: \(a<c\) iff there exists a reduction sequence \(c\to c_{1}\rightarrow\cdots\to a\). Elements of \(R\) in normal form wrt \(\rightarrow\) are the minimal elements wrt \(<\).
We will assume that we know whether the definition of reduction of elements of \(R\) wrt other elements in \(R\) admits weak Grobner bases or strong Grobner bases. Let Grobner be a function which takes a finite basis \(B\) of an ideal over \(R\) as the input and gives a Grobner basis of the ideal as output; obviously (Grobner(B))=(B).
## 3 Preliminaries
Let \(R[x_{1},\ldots,x_{n}]\) be a polynomial ring over \(R\). We assume a total ordering on indeterminates which can be extended in many different ways to a total ordering on terms [4,5,12]. The ordering on \(R\) induced by \(\rightarrow\) and the ordering on terms can be used to define an ordering on monomials and polynomials. Given a polynomial \(p\), let \(HD(p),HT(p)\), and \(HC(p)\) stand for its head-monomial, head-term, and head-coefficient, respectively. Let \(Rest(p)\) be \(p-HD(p)\). We assume that the functions \(HD,HT,\mbox{ and }HC\) can be used on a set of polynomials also; for instance, \(HD(I)=\{m\mid m\mbox{ is a head-monomial of a polynomial in }I\}\).
Let \(B\) be a finite set of polynomials in \(R[x_{1},\ldots,x_{n}];B=\{p_{1},\ldots,p_{m}\}\). Let \(I=(p_{1},\ldots,p_{m})\).
Using a well-founded relation \(\rightarrow\) on \(R\), we can define a well-founded reduction relation on polynomials in \(R[x_{1},\ldots,x_{n}]\); we will denote it also by \(\rightarrow\). A polynomial \(p\to q\) using another polynomial \(p_{1}\) iff:
* There is a monomial \(ct\) in \(p\) i.e., \(p=ct+p^{\prime}\), such that \(HT(p_{1})\) divides \(t\), i.e., \(t=t^{\prime}\cdot HT(p_{1})\), and
* There exists a \(d\) such that \(c\to d\) using \(HC(p_{1})\) and \(q=p-k\cdot t^{\prime}\cdot p_{1}\), where \(c=k\cdot HC(p_{1})+d\).
The above definition extends to the case when more than one polynomial is used for reduction. Weak and strong Grobner bases can be defined for polynomial ideals in the same way as definitions given above.
Let \(MinMon(M)=\{m\mid m\) is a monomial in \(M\) and there is no other monomial \(m^{\prime}\) in \(M\) that divides \(m\}\). Similarly, let \(MinTerm(T)=\{t\mid t\) is a term in \(T\) and there is no \(t^{\prime}\) in \(T\) that divides \(t\}\). It follows easily from Hilbert's Basis Theorem as well as from Dickson's Lemma that for any set of terms \(T\), \(MinTerm(T)\) is finite.
## 4 Properties of Monomials in \(R[x_{1},\ldots,x_{n}]\)
Given a term \(t\), let \(C(t,I)=\{c\mid ct\) in \(HD(I)\}\). It is easy to see that \(C(t,I)\) is an ideal over \(R\); further, for any two distinct terms \(t_{1}\) and \(t_{2}\), such that \(t_{1}\) divides \(t_{2}\), \(C(t_{1},I)\subseteq C(t_{2},I)\). For every term \(t\) appearing in some monomial in \(MinMon(HD(I))\), i.e., \(t\) in \(HT(MinMon(HD(I)))\), let \(D(t,I)\) be the set of all coefficients of \(t\) in \(MinMon(HD(I))\). Let \(Divisors(t)\) be the set of all terms including \(t\) which can divide \(t\). Note that for some \(t\), \(C(t,I)\) may not be empty, whereas \(D(t,I)\) may be empty.
**Lemma 4.1**.: For each \(c\) in \(C(t,I)\), there exists \(t^{\prime}\) in \(Divisors(t)\) and an element \(d\) in \(D(t^{\prime},I)\) such that \(d\) divides \(c\).
**Lemma 4.2**.: The set \(HT(MinMon(HD(I)))\) is finite.
**Lemma 4.3**.: For any term \(t\), \(C(t,I)\) is the same as the ideal generated by the union of \(D(t^{\prime},I)\) for all \(t^{\prime}\) in \(Divisors(t)\).
**Lemma 4.4**.: \(G=\bigcup\left(\mathrm{Grobner}(D(t^{\prime},I)),t^{\prime}\in Divisors(t)\right)\) is a Grobner basis for \(C(t,I)\).
## 5 Existence of a Grobner Basis for an Ideal over \(R[x_{1},\ldots,x_{n}]\)
Given an ideal \(I\), we construct its Grobner basis \(GB\) as follows: For each \(t\) in \(HT(MinMon(HD(I)))\), let \(G(t,I)\) be a Grobner basis of the ideal generated by \(D(t,I)\) over \(R\); i.e., \(G(t,I)=\mbox{Grobner}(D(t,I))\); for each element \(g\) in \(G(t,I)\), include in \(GB\) a minimal polynomial in \(I\) with \(g\cdot t\) as its head-monomial.
**Theorem 5.1**.: The set \(GB\) is finite.
**Theorem 5.2**.: For every polynomial \(p\) in \(I\), \(p\) reduces to \(0\) using \(GB\).
**Theorem 5.3**.: If \(R\) admits strong Grobner bases, then \(GB\) is also a strong Grobner basis.
Is such a \(GB\) unique for an ideal \(I\)? No, because Grobner bases for the same ideal may differ modulo units. However, if we assume that \(R\) admits strong Grobner bases and such a Grobner basis is unique for every ideal, then \(GB\) is unique for a particular term ordering.
**Theorem 5.4**.: If \(R\) admits strong Grobner bases and every ideal has a unique Grobner basis, then every ideal over \(R[x_{1},\ldots,x_{n}]\) also has a unique Grobner basis.
## 6 An Algorithm for Computing a Grobner Basis
We assume that (i) there is a way to compute a Grobner basis \(G\) from a basis \((c_{1},\ldots,c_{k})\) over \(R\); furthermore, it is also possible to compute representations of \(c_{1},\ldots,c_{k}\) in terms of the elements of \(G\) as well as representations of every element \(g\) in \(G\) in terms of \(c_{1},\ldots,c_{k}\); these constructions are indeed possible from the construction of a Grobner basis as illustrated by Buchberger in [12]. In addition, (ii) it should also be possible to generate a finite basis for a module of solutions to linear homogenous equations over \(R\).
### Critical Pairs
For any finite subset \(F\) of \(B\), we define two types of _critical pairs_ among polynomials in the basis, called _G-polynomials_ and _M-polynomials_ (standing for Grobner-polynomials and Module-polynomials, respectively), as follows: Let \(F\) consist of the polynomials
\[p_{1}=c_{1}\cdot t_{1}+r_{1}\,\ p_{2}=c_{2}\cdot t_{2}+r_{2},\ \ldots\,\ p_{j}=c_{j}\cdot t_{j}+r_{j}\]
where \(HD(p_{i})=c_{i}\cdot t_{i},\ HT(p_{i})=t_{i},\ HC(p_{i})=c_{i}\), and \(Rest(p_{i})=r_{i}\).
Let \(t=lcm(t_{1},t_{2},\ldots,t_{j})=s_{1}\cdot t_{1}=s_{2}\cdot t_{2}=\cdots=s_{j} \cdot t_{j}\) for some terms \(s_{1},\ldots,s_{j}\).
**G-Polynomial**: Let \(\{g_{1},\ldots,g_{m}\}\) be a Grobner basis of \(\{c_{1},\ldots,c_{j}\}\); each \(g_{i}\) can be written (represented) as \(g_{i}=h_{i,1}\cdot c_{1}+h_{i,2}\cdot c_{2}+\cdots+h_{i,j}\cdot c_{j}\) for some \(h_{i,1},\ldots,h_{i,j}\). Then, for each \(i\), \(q_{i}=h_{i,1}\cdot s_{1}\cdot p_{1}+h_{i,2}\cdot s_{2}\cdot p_{2}+\cdots+h_{i, j}\cdot s_{j}\cdot p_{j}\) is a G-polynomial. Note that the head-monomial of \(q_{i}\) is \(g_{i}\cdot t\).
**M-Polynomials**: Consider the module \(M=\{\langle a_{1},\ldots,a_{j}\rangle\mid a_{1}\cdot c_{1}+\cdots+a_{j}\cdot c _{j}=0\}\). Let the vectors \(A_{1},A_{2},\ldots,A_{v}\) represent its basis, where for every \(i\), \(A_{i}=\langle b_{i,1},\ldots,b_{i,j}\rangle\) for some \(b_{i,1},\ldots,b_{i,j}\) in \(R\). Then, for each \(i\),
\(q_{i}=b_{i,1}\cdot s_{1}\cdot p_{1}+b_{i,2}\cdot s_{2}\cdot p_{2}+\cdots+b_{i,j}\cdot s_{j}\cdot p_{j}\) is an M-polynomial. Note that the head-term of \(q_{i}\) is less than \(t\) in the term ordering, since the monomials involving \(t\) get cancelled in the summation.
We say that _q is a G-polynomial_ (respectively, _M-polynomial_) _of B_ if \(q\) is a G-polynomial (respectively, M-polynomial) of some finite subset \(F\) of \(B\).
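As a small illustrative example (ours, not taken from the references), let \(R=\mathbb{Z}\) and \(F=\{p_{1},p_{2}\}\subseteq\mathbb{Z}[x]\) with \(p_{1}=2x+1\) and \(p_{2}=3x+2\), so that \(t_{1}=t_{2}=t=x\) and \(s_{1}=s_{2}=1\). A Grobner basis of \((2,3)\) over \(\mathbb{Z}\) is \(\{1\}\) with \(1=(-1)\cdot 2+1\cdot 3\), and the module \(\{\langle a,b\rangle\mid 2a+3b=0\}\) is generated by \(\langle 3,-2\rangle\). The resulting critical pairs are

\[q_{G}=(-1)\cdot p_{1}+1\cdot p_{2}=x+1,\qquad q_{M}=3\cdot p_{1}-2\cdot p_{2}=-1,\]

so the G-polynomial has head-monomial \(1\cdot x\), while the head-term of the M-polynomial is strictly smaller than \(t\), as expected.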
**Theorem 6.1**.: Let \(B=\{p_{1},\ldots,p_{m}\}\) be a set of polynomials such that all the G-polynomials and M-polynomials of \(B\) reduce to \(0\) using \(B\). Then every polynomial in \((B)\) reduces to \(0\) using \(B\), thus implying that \(B\) is a weak Grobner basis.
**Theorem 6.2**.: Let \(B\) be a basis as stated in Theorem 6.1 above. If the function Grobner gives a strong Grobner basis on \(R\), then \(B\) is a strong Grobner basis.
Furthermore, if for every ideal in \(R\), the function Grobner gives a unique strong Grobner basis, then from Theorem 5.4 and Theorems 6.1 and 6.2, it follows that \(B\) is also a unique strong Grobner basis.
The completion algorithm for obtaining Grobner bases should be obvious by now: given a basis \(B\), compute the G-polynomials and M-polynomials, and their normal forms. Add all non-zero normal forms to the basis and repeat this step until no new polynomials are added to the basis. This process of adding new polynomials terminates by the ascending chain condition for Noetherian rings.
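To make the loop explicit, here is a minimal Python sketch of its pairwise version (ours, not from the paper); the helpers `critical_pairs`, `normal_form`, and `is_zero` are hypothetical placeholders for the ring-dependent constructions described above, and restricting attention to pairs is justified for PIDs in Section 7.

```python
from itertools import combinations

def complete_basis(basis, critical_pairs, normal_form, is_zero):
    """Sketch of the completion loop: keep adding non-zero normal forms of
    G- and M-polynomials until no new polynomial appears.  The helpers
    critical_pairs(p, q), normal_form(p, B) and is_zero(p) are assumed to be
    supplied for the ring at hand; termination relies on the ring being Noetherian."""
    B = list(basis)
    changed = True
    while changed:
        changed = False
        for p1, p2 in combinations(list(B), 2):
            for q in critical_pairs(p1, p2):   # the G- and M-polynomials of the pair
                r = normal_form(q, B)
                if not is_zero(r):
                    B.append(r)                # a new basis element was found
                    changed = True
    return B
```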
## 7 Instances of the Algorithm over Various Structures
It can be shown that most of the known algorithms are instances of the above algorithm with additional properties assumed on the ring \(R\). For instance, when \(R\) is a PID, then a Grobner basis of a basis over \(R\) can be obtained as the gcd of the elements in the basis. This can be computed by considering pairs of elements at a time. Further, a module basis of a finite set of non-zero elements in a PID can also be computed by considering pairs of elements. For any two non-zero elements, \(a\) and \(b\), \(\{\langle lcm(a,b)/a,-lcm(a,b)/b\rangle\}\) generates their module basis. It can be shown that if G-polynomials and M-polynomials from every pair of polynomials in a basis \(B\) reduce to \(0\), then G-polynomials and M-polynomials for
every non-empty subset \(B^{\prime}\) of \(B\) also reduce to 0. So, it is sufficient to consider pairs of polynomials at a time for critical-pair computation. These considerations also apply to algorithms over fields and Euclidean rings since both are instances of PIDs.
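To illustrate the pairwise constructions for \(R=\mathbb{Z}\), the following Python sketch (ours, for illustration only; all function names are ours) computes the G-polynomial and the M-polynomial of two polynomials in \(\mathbb{Z}[x]\), represented as dictionaries from degrees to integer coefficients.

```python
from math import gcd

def ext_gcd(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b == g (g = gcd(a, b) for positive inputs)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

def head(p):
    """Head term (degree) and head coefficient of a non-zero dict polynomial."""
    d = max(p)
    return d, p[d]

def add_scaled(p, c, shift, q):
    """Return p + c * x^shift * q, dropping zero coefficients."""
    r = dict(p)
    for d, a in q.items():
        r[d + shift] = r.get(d + shift, 0) + c * a
        if r[d + shift] == 0:
            del r[d + shift]
    return r

def critical_pair(p1, p2):
    """G-polynomial and M-polynomial of a pair over Z[x] (the PID case)."""
    (t1, c1), (t2, c2) = head(p1), head(p2)
    t = max(t1, t2)                     # lcm of the head terms x^t1 and x^t2
    g, u, v = ext_gcd(c1, c2)
    g_poly = add_scaled(add_scaled({}, u, t - t1, p1), v, t - t2, p2)
    l = c1 * c2 // gcd(c1, c2)          # lcm(c1, c2)
    m_poly = add_scaled(add_scaled({}, l // c1, t - t1, p1), -(l // c2), t - t2, p2)
    return g_poly, m_poly

# Example: p1 = 4x^2 + x and p2 = 6x^2 + 5 give
# G-polynomial 2x^2 - x + 5 and M-polynomial 3x - 10.
print(critical_pair({2: 4, 1: 1}, {2: 6, 0: 5}))
```

A function of this shape could serve as the `critical_pairs` helper in the completion loop sketched in Section 6.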
Zacharias [7], Trinks (as quoted in [8]), and Schaller [6] (as quoted in [4]) have given weak Grobner basis algorithms for a ring \(R\) for which the ideal membership problem is solvable and a finite set of generators for the solutions of linear equations in \(R\) can be found algorithmically. These algorithms can also be viewed as special cases of our algorithm in which only M-polynomials are considered.
|
2308.10827 | The Temporal Continuum | The continuum has been one of the most controversial topics in mathematics
since the time of the Greeks. Some mathematicians, such as Euclid and Cantor,
held the position that a line is composed of points, while others, like
Aristotle, Weyl and Brouwer, argued that a line is not composed of points but
rather a matrix of a continued insertion of points. In spite of this
disagreement on the structure of the continuum, they did distinguish the
temporal line from the spatial line. In this paper, we argue that there is
indeed a difference between the intuition of the spatial continuum and the
intuition of the temporal continuum. The main primary aspect of the temporal
continuum, in contrast with the spatial continuum, is the notion of
orientation.
The continuum has usually been mathematically modeled by Cauchy sequences and
the Dedekind cuts. While in the first model, each point can be approximated by
rational numbers, in the second one, that is not possible constructively. We
argue that points on the temporal continuum cannot be approximated by rationals
as a temporal point is a flow that sinks to the past. In our model, the
continuum is a collection of constructive Dedekind cuts, and we define two
topologies for temporal continuum: 1. oriented topology and 2. the ordinary
topology. We prove that every total function from the oriented topological
space to the ordinary one is continuous. | Mohammad Ardeshir, Rasoul Ramezanian | 2023-08-16T18:13:36Z | http://arxiv.org/abs/2308.10827v1 | ###### Abstract
The continuum has been one of the most controversial topics in mathematics since the time of the Greeks. Some mathematicians, such as Euclid and Cantor, held the position that a line is composed of points, while others, like Aristotle, Weyl and Brouwer, argued that a line is not composed of points but rather a matrix of a continued insertion of points. In spite of this disagreement on the structure of the continuum, they did distinguish the _temporal line_ from the _spatial line_. In this paper, we argue that there is indeed a difference between the intuition of the spatial continuum and the intuition of the temporal continuum. The main primary aspect of the temporal continuum, in contrast with the spatial continuum, is the notion of _orientation_.
The continuum has usually been mathematically modeled by _Cauchy_ sequences and the _Dedekind_ cuts. While in the first model, each point can be approximated by rational numbers, in the second one, that is not possible constructively. We argue that points on the temporal continuum cannot be approximated by rationals as a temporal point is a _flow_ that sinks to the past. In our model, the continuum is a collection of constructive _Dedekind_ cuts, and we define two topologies for temporal continuum: 1. _oriented_ topology and 2. the _ordinary_ topology. We prove that every total function from the _oriented_ topological space to the _ordinary_ one is continuous.
**The Temporal Continuum**
**Mohammad Ardeshir 1** **Rasoul Ramezanian 2**
Footnote 1: Sharif University of Technology, 11365-9415, Tehran, Iran, [email protected]
Footnote 2: University of Lausanne, Internef, 1015 Lausanne, Switzerland, [email protected]
## 1 Introduction
Mathematicians have long been divided into two philosophical camps regarding the structure of the continuum. Some, like Euclid [12] and Cantor [7], view the continuum as a composition of points. On the other hand, others, including Aristotle [2], Weyl [4, 19], and Brouwer [5], consider it as a _whole_, not composed of points.
In this paper, we aim to highlight another distinction, namely, between what we refer to as _the spatial continuum_ and _the temporal continuum_. We will model the temporal continuum using a specific type of Dedekind cuts that illustrate this difference.
The main distinction between the spatial continuum and the temporal continuum lies in the notion of _orientation_: the temporal continuum is oriented, moving exclusively from the past to the future, and it is impossible to move in the _opposite_ direction3. Our subjective experience reinforces this clear differentiation between the future and the past. We can vividly remember the past, whereas the future remains uncertain. Our ignorance of
future events, including our own choices and actions, contributes to the concept of _free will_. Conversely, we lack the ability to alter the past due to the absence of free will in that direction.
We can affect the future by our actions: so why can we not by our actions affect the past? The answer that springs to mind is this: you cannot change the past; if a thing has happened, it has happened, and you cannot make it not to have happened [10].
The primary objective of this paper is to propose a model for the continuum that incorporates the concept of orientation. To achieve this, we will introduce a formalization of orientation through a specific type of Dedekind cuts, which we will refer to as oriented Dedekind cuts.
Brouwer and Weyl emphasized two essential aspects of the intuitive continuum, which are _inexhaustibility_ and _non-discreteness_ (also referred to as _non-atomicity_) ([3], page 86). The concept of choice sequences is motivated by these characteristics. Non-lawlike sequences signify the continuum's inexhaustibility, and identifying points with unfinished sequences of nested intervals reflects its non-discreteness ([3], page 87). Defining points as choice sequences of nested intervals captures the notion that a point on the continuum is not a dimension-less atom but rather a _halo_.
As choice sequences are non-predetermined and unfinished entities, the value of every total function from choice sequences to natural numbers relies on an initial segment of sequences. Brouwer leverages this property to demonstrate that every total function over the continuum is continuous.
Both the spatial continuum and the temporal continuum share the aforementioned aspects. However, a defining characteristic of the temporal continuum is the concept of _orientation_. "Duration" is a continuous _becoming_, always moving towards the future. In the spatial continuum, movement is possible in both directions. To introduce witnesses for points on the spatial line, one may proceed by introducing witnesses for all points in the segment \([-1,0]\), then for points in the segment \([1,2]\), and subsequently for points in the segment \([0,1]\), and so on. There is no requirement to adhere to any specific direction. However, for the temporal line, this approach is not feasible. Let the locations of a moving ball in space serve as witnesses for moments. If we are at the moment \(t_{0}\), and \(t_{1}\) is a future moment, we cannot determine the location of the ball at time \(t_{1}\) at the present moment. We must wait until we reach that moment, and only then will the locations of the ball in all moments before \(t_{1}\) be determined. _We are constantly moving towards the future_. The following _assertion_ may provide further clarity regarding our understanding of the notion of orientation. We leverage the notion of orientation to demonstrate that every total function over the temporal continuum is continuous.
In considering how we experience a point on the temporal continuum, we can conceptualize a point \(t\) as the moment of occurrence of an event \(E\):
t:="The moment of the occurrence of the event E"
For instance, assume it is currently 8:00 am, and I am aware that the event \(E\) will occur sometime after 8:00 am but before 12:00 pm. As time passes, I experience the point \(t\) in the following manner: I examine the occurrence of \(E\) at several consecutive times. To do this,
I plan a strategy and choose specific moments to observe the event. The choice of these moments is up to me.
For example, I may decide to examine the occurrence of \(E\) at times like 10:15 am, 10:20 am, 10:40 am, 11:30 am, and so on. Suppose it is now 11:30 am, and the last time I observed the occurrence of \(E\) was at 10:40 am, where I found no evidence of it happening yet. Now, suppose it is 11:30 am again, and I observe the event and find evidence of its occurrence. However, at this point, I _cannot move to the past_ to examine its occurrence at any previous moments.
All the information I have about \(t\) is that it lies _sometime_ within the interval \((10:40,11:30]\). I cannot distinguish the moment \(t\) from other moments within this interval. In other words, _the point \(t\) has "sunk back" to the past, and I cannot experience it any further._ It is _absolutely undecidable_ whether \(t\) is before 11:00 am or after it. The moment \(t\) cannot be estimated more accurately, and consequently, I cannot approximate \(t\) using rational moments.
Furthermore, if I have a constructive method to introduce witnesses for all moments on the temporal line, then the witness for the moment \(t\) is also the witness for all moments within the interval \((10:40,11:30]\). Thus, a point on the temporal continuum is like a _flow_ that sinks back to the past eventually.
We consider the above _explanation_ as our understanding of the notion of _orientation_ and attempt to formalize it in this paper. In our perspective, we regard \(t\) as a _flow_ from the past, meaning it encompasses all moments that have occurred before \(t\). If \(t\) is a past moment, we cannot obtain any additional information about which moments belong to this collection and which ones do not. If \(t\) is a future moment, we still have time to gather data about the elements of the collection before \(t\) eventually sinks back to the past. During this time, we may examine the membership status of certain moments. However, as soon as \(t\) sinks back to the past, we can no longer acquire any new data.
This passage discusses the experience of a point on the temporal continuum, emphasizing the concept of _orientation_ and how it is related to the passage of time and our ability to observe and distinguish moments. It also introduces the idea of a point being a _flow_ from the past, capturing the continuous and irreversible nature of time.
Brouwer's perspective on the continuum is that it is intuitively given as a flowing medium of cohesion between two events, not comprised of points (events) itself, but rather an inexhaustible matrix allowing for a continued insertion of points. Originally, there are no points on the continuum, but we can construct points on it or indicate a position within it. Brouwer emphasized that the intuition of the continuum is the intuition of the medium of cohesion between two events. He distinguishes between two things: the medium of cohesion (first thing) and the continuum (second thing), or _primum_ and _secundum_ as he puts it ([14], page 70). Brouwer utilized the category of choice sequences and the continuity principle to provide a mathematical analysis of continua. Using choice sequences to define points on the continuum, a point is modeled as a _halo_. Consequently, to construct a witness for a point on the continuum, the witness is constructed for a halo around it. Brouwer proposed his famous continuity principle for choice sequences and used it to prove that every total function over the real line is continuous ([17], page 305).
Motivated by the characteristic aspect of the temporal continuum, i.e., the _orientation_, as explained above, we introduce oriented cuts to model points on the temporal continuum
as _flows_. The traditional modeling of the continuum using the _Cauchy_ fundamental sequences of rationals [17] allows every real number to be approximated by rational numbers. However, since moments on the temporal continuum sink back to the past, they _cannot_ be approximated by rationals. Consequently, the _Cauchy_ sequences appear unsuitable for modeling the temporal continuum. Instead, Dedekind cuts prove to be more appropriate for this purpose. Among the constructive Dedekind lines introduced in [17], only \(\mathbb{R}^{d}\) can be positively approximated by rationals due to its _locatedness_ property ([17], page 270), while \(\mathbb{R}^{e}\) and \(\mathbb{R}^{be}\) can be approximated by rationals only up to the double negation via the _strong monotonicity_ property. We demonstrate that the collection of oriented Dedekind cuts cannot be approximated by rationals, as desired in this case. As discussed earlier, points on the temporal continuum are like flows that sink back to the past. Once they have sunk back, we cannot acquire new data about them, and thus, they cannot be approximated by rationals.
In addition to our main objective of introducing a mathematical model for the temporal continuum, we also aim to demonstrate that every total function over the temporal line is continuous, similar to the Brouwerian real line. For this purpose, we require a _continuity principle_ for the Dedekind cuts. Hence, we propose a principle called the _oriented continuity principle_ (**OCP** ) and define two topologies for the temporal continuum: 1. the _oriented topology_ and 2. the _ordinary topology_. Utilizing **OCP**, we establish that every total function from the oriented topological space to the ordinary one is continuous.
The sequel of the paper is organized as follows. In section 2, we define the notion of the oriented cuts, and justify a continuity principle for them. The oriented continuity principle, **OCP**, expresses formally the feature we have in mind about a continuity principle. We show that the oriented reals cannot be approximated by rationals. in section 3, the _oriented topology_ and the _ordinary topology_ are defined, and then some consequences of **OCP** for the temporal continuum are demonstrated.
We argue in the context of constructive logic. As far as possible, the standard notations in [17] are used in this paper.
## 2 Oriented Cuts
In this section, we introduce a type of left cuts of \(\mathbb{Q}\), named _oriented cuts_, and state the oriented continuity principle.
**Definition 2.1**: _We let \(\mathbb{R}^{o}\) be the set of all strictly increasing bounded sequences of rational numbers, i.e. \(\alpha\in\mathbb{R}^{o}\) if and only if \(\exists M\forall n(\alpha(n)<\alpha(n+1)<M)\). For all \(\alpha,\beta\in\mathbb{R}^{o}\) we define:_
* \(\alpha\leq\beta\) _if and only if_ \(\forall m\exists n(\alpha(m)<\beta(n))\)_,_
* \(\alpha<\beta\) _if and only if_ \(\exists n\forall m(\alpha(m)<\beta(n))\)_, and_
* \(\alpha=^{o}\beta\) _if and only if_ \(\alpha\leq\beta\wedge\beta\leq\alpha\)_._
_We call the set \(\mathbb{R}^{o}\), regarding equality \(=^{o}\), the set of all oriented reals \((\)cuts\()\)._
Oriented reals, bounded strictly increasing sequences of rationals, are supposed to model the passing of time. The increase of sequences reflects the passing of time and the strictness of increase ensures that time cannot rest.
**Definition 2.2**: _For a rational number \(q\in\mathbb{Q}\), let \(\hat{q}\) be the oriented real defined by \(\hat{q}(n)=q-\frac{1}{n+1}\), for all \(n\). Let \(q\to\hat{q}\) be the mapping which assigns the oriented real \(\hat{q}\) to \(q\). For each \(q\in\mathbb{Q}\) and \(\alpha\in\mathbb{R}^{o}\), we say_
1. \(q<\alpha\) _if and only if_ \(\hat{q}<\alpha\)_,_
2. \(\alpha\leq q\) _if and only if_ \(\alpha\leq\hat{q}\)_,_
3. \(\alpha<q\) _if and only if_ \(\alpha<\hat{q}\)_, and_
4. \(q\leq\alpha\) _if and only if_ \(\hat{q}\leq\alpha\)_._
**Proposition 2.3**: _For \(q\in\mathbb{Q}\) and \(\alpha\in\mathbb{R}^{o}\),_
1. \(q<\alpha\) _if and only if_ \(\exists n(q<\alpha(n))\)_,_
2. \(\alpha\leq q\) _if and only if_ \(\forall n(\alpha(n)<q)\)_,_
3. \(\alpha<q\) _if and only if_ \(\exists p\in\mathbb{Q}(p<q\wedge\alpha\leq p)\)__
4. \(q\leq\alpha\) _if and only if_ \(\forall p\in\mathbb{Q}(p<q\to p<\alpha)\)_._
**Proof.** For each \(q\in\mathbb{Q}\) consider its map \(\hat{q}\) in oriented reals. Items (1) and (2) are straight forward.
(3). If \(\alpha<\hat{q}\) then by definition 2.1, there exists \(m\) such that for all \(n\), \(\alpha(n)<q-\frac{1}{m+1}\). Let \(p=q-\frac{1}{m+1}\). By (2), we have \(\alpha\leq p\). The converse is straight forward.
(4). If \(\hat{q}\leq\alpha\) then for all \(m\) there exists \(n_{m}\) such that \(q-\frac{1}{m+1}<\alpha(n_{m})\). For \(p<q\), there exists \(m_{0}\) such that \(p<q-\frac{1}{m_{0}+1}<\alpha(n_{m_{0}})\). By (1), \(p<\alpha\). For the converse assume \(\forall p\in\mathbb{Q}(p<q\to p<\alpha)\). Then for all \(m\), \(q-\frac{1}{m+1}<\alpha\). By (1), there exists \(n_{m}\) such that \(q-\frac{1}{m+1}<\alpha(n_{m})\). \(\dashv\)
For each \(\alpha\in\mathbb{R}^{o}\), let \(A_{\alpha}=\{q\in\mathbb{Q}\mid\exists n(q<\alpha(n))\}\) be the _cut_ specified by \(\alpha\). Note that \(\alpha=^{o}\beta\) if and only if \(A_{\alpha}=A_{\beta}\) as sets.
**Proposition 2.4**: _For \(\alpha\in\mathbb{R}^{o}\),_
1. \(\forall q\in\mathbb{Q}\)__\((q\in A_{\alpha}\to\exists p\in\mathbb{Q}\)__\((p>q\wedge p\in A_{\alpha}))\)__\(\ \mbox{(openness)}\)_,_
2. \(\forall p,q\in\mathbb{Q}\)__\((p<q\wedge q\in A_{\alpha}\to p\in A_{\alpha})\)__\(\ \mbox{(monotonicity)}\)_,_
3. \(\exists p,q\in\mathbb{Q}\)__\((p\in A_{\alpha}\wedge q\not\in A_{\alpha})\)__\(\ \mbox{(boundedness)}\)_._
**Proof.** It is straightforward. \(\dashv\)
**Lemma 2.5**: _For all \(\alpha,\beta\in\mathbb{R}^{o}\), there exists \(\gamma\in\mathbb{R}^{o}\), which specifies the cut \(A_{\alpha}\cap A_{\beta}\)._
**Proof.** The proof is easy. \(\dashv\)
**Lemma 2.6**: _Let \(\gamma:\mathbb{N}\rightarrow\mathbb{Q}\) be an upper bounded sequence, i.e., \(\exists M\in\mathbb{Q}\forall n(\gamma(n)<M)\), then there exists \(\alpha\in\mathbb{R}^{o}\) which specifies \(A=\{q\in\mathbb{Q}\mid\exists n\in\mathbb{N}(q<\gamma(n))\}\)._
**Proof.** Let \(\alpha(n)=max\{\gamma(0),\gamma(1),...,\gamma(n)\}-\frac{1}{n+1}\). The sequence \(\alpha\) is strictly increasing and the set \(A\) would be equal to \(\{q\in\mathbb{Q}\mid\exists n\in\mathbb{N}(q<\alpha(n))\}\). \(\dashv\)
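The construction in this proof is easy to make explicit; the following sketch (ours, using the same Fraction-based representation as above) turns a bounded sequence \(\gamma\) into the strictly increasing sequence \(\alpha\) specifying the same cut.

```python
from fractions import Fraction

def oriented_from_bounded(gamma):
    """Lemma 2.6: alpha(n) = max(gamma(0), ..., gamma(n)) - 1/(n+1) is strictly
    increasing and specifies the same cut A as gamma does."""
    def alpha(n):
        return max(Fraction(gamma(k)) for k in range(n + 1)) - Fraction(1, n + 1)
    return alpha

# Example: gamma oscillates between 1/2 and 3/4 but is bounded above by 1.
gamma = lambda n: Fraction(1, 2) if n % 2 == 0 else Fraction(3, 4)
alpha = oriented_from_bounded(gamma)
print([alpha(n) for n in range(4)])  # values -1/2, 1/4, 5/12, 1/2: strictly increasing
```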
The main difference between the collection of oriented reals \(\mathbb{R}^{o}\) and other collections of Dedekind reals such as \(\mathbb{R}^{d}\), extended reals \(\mathbb{R}^{be}\) and classical reals \(\mathbb{R}^{e}\) is that the former satisfies the _monotonicity_ property (\(\forall\alpha\in\mathbb{R}^{o}\forall p,q\in\mathbb{Q}(p<q\wedge q\in A_{ \alpha}\to p\in A_{\alpha})\)), whereas the others satisfy _strong monotonicity_ (for each Dedekind cut \(A\), \(\forall p,q\in\mathbb{Q}(p<q\wedge\neg\neg q\in A\to p\in A)\)). It is known that \(\mathbb{R}^{d}\subset\mathbb{R}^{be}\subset\mathbb{R}^{e}\) ([17], page 270). In the following proposition, by using the Markov Principle ([17], page 204):
\[\mathbf{MP}\qquad\forall\beta\in\mathbb{N}^{\mathbb{N}}(\neg\neg\exists k \beta(k)=0\rightarrow\exists k\beta(k)=0),\]
we show that \(\mathbb{R}^{o}\subseteq\mathbb{R}^{be}\).
**Proposition 2.7**: (**MP**) _For \(\alpha\in\mathbb{R}^{o}\), the cut \(A_{\alpha}\) satisfies strong monotonicity._
**Proof.** We show that for every \(\alpha\in\mathbb{R}^{o}\), \(A_{\alpha}\) satisfies _strong monotonicity_, i.e., \(\forall p,q\in\mathbb{Q}(p<q\wedge\neg\neg q\in A_{\alpha}\to p\in A_{\alpha})\). Since \(q\in A_{\alpha}\leftrightarrow\exists k\in\mathbb{N}(q<\alpha(k))\), we have \(\neg\neg q\in A_{\alpha}\leftrightarrow\neg\neg\exists k\in\mathbb{N}(q<\alpha(k))\). By \(\mathbf{MP}\), \(\neg\neg\exists k\in\mathbb{N}(q<\alpha(k))\leftrightarrow\exists k\in\mathbb{N}(q<\alpha(k))\) (take \(\beta(k)=0\) if \(q<\alpha(k)\), and \(\beta(k)=1\) otherwise). So \(\neg\neg q\in A_{\alpha}\leftrightarrow q\in A_{\alpha}\), for any arbitrary \(q\in\mathbb{Q}\). Then \(p<q\wedge\neg\neg q\in A_{\alpha}\leftrightarrow p<q\wedge q\in A_{\alpha}\), and since \(A_{\alpha}\) satisfies _monotonicity_, we derive \(p\in A_{\alpha}\). \(\dashv\)
For choice sequences, if \(\Phi\) is a total function from the collection of choice sequences to natural numbers, the value of \(\Phi\) for a sequence \(\alpha\) just depends on a finite segment of \(\alpha\). Now this question seems natural in constructive mathematics:
_how can we **construct** a **total** mapping \(\Phi:\mathbb{R}^{o}\rightarrow\mathbb{N}\)?_
Assume we have a strategy to construct a total mapping \(\Phi:\mathbb{R}^{o}\rightarrow\mathbb{N}\). Then for any arbitrary sequence \(\alpha\in\mathbb{R}^{o}\), we must be able to construct a witness for \(\alpha\). Since \(\Phi\) is well-defined, for any sequence \(\beta=^{o}\alpha\), we will have \(\Phi(\beta)=\Phi(\alpha)\). Therefore, our strategy cannot depend on any finite segment of \(\alpha\). Because of this obstacle, one may expect that it is not possible construct \(\Phi\) unless \(\Phi\) happens to be constant. We fulfill this intention, using Brouwer's weak continuity principle ([17], page 209):
\[\mathbf{WCN}\quad\ \forall\alpha\in\mathbf{T}\exists y(\Phi(\alpha)=y) \rightarrow\forall\alpha\in\mathbf{T}\exists x\forall\beta\in\mathbf{T}( \bar{\beta}x=\bar{\alpha}x\rightarrow\Phi\beta=\Phi\alpha),\]
where \(\mathbf{T}\) is a spread and for each sequence \(\alpha\), \(\bar{\alpha}x=\langle\alpha(0),\alpha(1),...,\alpha(x-1)\rangle\).
**Proposition 2.8**: (**WCN**) _Any (constructive) total function \(\Phi:\mathbb{R}^{o}\rightarrow\mathbb{N}\) is constant._
**Proof.** Let \(\alpha,\beta\in\mathbb{R}^{o}\) be arbitrary, \(\alpha\leq\beta\), \(M\in\mathbb{Q}\) be an upper-bound for \(\beta\). The set \(\mathbb{R}^{o}_{M}=\{\gamma\in\mathbb{R}^{o}\mid\gamma<M\}\) is a spread. Since \(\Phi\) is a total, by **WCN**, there exists \(t\) such that, for all \(\gamma\in\mathbb{R}^{o}_{M}\), if \(\bar{\gamma}t=\bar{\alpha}t\) then \(\Phi(\gamma)=\Phi(\alpha)\). Find \(\gamma\in\mathbb{R}^{o}_{M}\) passing through \(\bar{\alpha}t\) such that \(\gamma=^{o}\beta\). By well-definedness of \(\Phi\) we conclude \(\Phi(\beta)=\Phi(\alpha)\). So, for all \(\alpha,\beta\in\mathbb{R}^{o}\), if \(\alpha\leq\beta\) then \(\Phi(\beta)=\Phi(\alpha)\). Now, for arbitrary \(\alpha,\beta\in\mathbb{R}^{o}\), define \(\gamma(n)=min(\alpha(n),\beta(n))\). Here, \(\gamma\leq\alpha\) and \(\gamma\leq\beta\). Then \(\Phi(\gamma)=\Phi(\alpha)\) and \(\Phi(\gamma)=\Phi(\beta)\). Consequently \(\Phi(\beta)=\Phi(\alpha)\). \(\dashv\)
We showed that any total function from \(\mathbb{R}^{o}\) to natural numbers is constant in the presence of the Weak Continuity Principle; that is, if one has a constructive method that introduces a witness, a natural number, for each oriented cut, then all oriented cuts receive the same witness. To avoid this, as a replacement for natural numbers, we consider another category of objects called _almost natural numbers_, and construct witnesses for oriented reals from this category.
**Definition 2.9**: _We let \(\mathbb{N}^{*}\) be the set of all functions \(\xi\) from \(\mathbb{N}\) to \(\mathbb{N}\) such that, for some \(k\), for all \(n\), \(\xi(n)\leq\xi(n+1)\leq k\). For all \(\xi,\nu\) in \(\mathbb{N}^{*}\) we define:_
\(\xi\leq\nu\) _if and only if \(\forall m\exists n(\xi(m)\leq\nu(n))\), and_
\(\xi=^{*}\nu\) _if and only if \((\xi\leq\nu)\wedge(\nu\leq\xi)\)._
_We call \(\mathbb{N}^{*}\), regarding equality \(=^{*}\), the set of almost natural numbers._
It is clear that \(\mathbb{N}^{*}\) is classically isomorphic to \(\mathbb{N}\) as a set. In fact, classically, elements in \(\mathbb{N}^{*}\) are increasing sequences that converge.
**Lemma 2.10**: _For every \(\xi\in\mathbb{N}^{*}\),_
1. \(\neg\neg\exists n\forall m>n[\xi(m)=\xi(n)]\)_,_
2. \(\neg\forall n\exists m>n\ (\xi(m)\neq\xi(m+1))\)_._
**Proof.**
1. To show \(\forall\xi\in\mathbb{N}^{*}\neg\neg\exists n\forall m>n[\xi(m)=\xi(n)]\), it is enough to show \(\neg\exists\xi\in\mathbb{N}^{*}\neg\exists n\forall m>n[\xi(m)=\xi(n)]\), by the intuitionistic valid statement \(\neg\exists xA(x)\leftrightarrow\forall x\neg A(x)\). So assume for some \(\xi_{0}\in\mathbb{N}^{*}\), \(\neg\exists n\forall m>n[\xi_{0}(m)=\xi_{0}(n)]\). Then, \(\forall n\neg\forall m>n[\xi_{0}(m)=\xi_{0}(n)]\). Since \(\xi_{0}\in\mathbb{N}^{*}\), there exists \(k_{0}\in\mathbb{N}\) such that \(\xi_{0}(n)\leq\xi_{0}(n+1)\leq k_{0}\), for all \(n\in\mathbb{N}\). We prove that for every \(k\in\mathbb{N}\), \((\forall n(\xi_{0}(n)\leq k))\rightarrow(\forall n(\xi_{0}(n)<k))\) (1). Assume there exists \(t\in\mathbb{N}\) such that \(\xi_{0}(t)=k\). Since \(\xi_{0}\) is nondecreasing and \(\forall n(\xi_{0}(n)\leq k))\), we have \(\forall n>t[\xi_{0}(n)=k]\). It contradicts with \(\forall n\neg\forall m>n[\xi_{0}(m)=\xi_{0}(n)]\). Hence \(\forall n(\xi_{0}(n)<k)\). Now let \(k=k_{0}\). By (1), we derive \(\forall n(\xi_{0}(n)\leq k_{0}-1)\). Repeating using (1), we have got \(\forall n(\xi_{0}(n)=0)\). It contradicts with \(\forall n\neg\forall m>n[\xi_{0}(m)=\xi_{0}(n)]\).
2. Let \(\xi\in\mathbb{N}^{*}\), there exists \(k\in\mathbb{N}\) such that for all \(n\), \(\xi(n)<k\). Assume \[\forall n\exists m>n\ (\xi(m)\neq\xi(m+1)).\ (2)\]
Define \(f:\mathbb{N}\rightarrow\mathbb{N}\) as
\[f(n)=\min\{m\mid(m>n\wedge\xi(m)\neq\xi(m+1))\}+1.\]
Due to the assumption (2), the function \(f\) is a constructive well defined function. It is easy to see that \(n<f(n)\) and \(\xi(n)<\xi(f(n))\). Therefore,
\[\xi(1)<\xi(f(1))<\xi(f(f(1)))<\xi(f(f(f(1))))<....\]
It contradicts with the fact that for all \(n\), \(\xi(n)<k\).
\(\dashv\)
It is worth mentioning that if \(\gamma\) is an almost natural number then the set \(I=\{n\in\mathbb{N}\mid\exists k\ (n\leq\gamma(k))\}\) is not necessary finite, but it is _quasi-finite_ in the sense of [18], i.e., it is a subset of a finite set. Moreover, its complement is _almost full_[18], meaning that for every strictly increasing sequence \(\alpha\), one may find a natural number \(n\) such that \(\alpha(n)\) belongs to the complement of \(I\).
Until now, the two notions of oriented reals and almost natural numbers have been defined. The following proposition (accompanied by its proof) shows how these two notions are related to our intuition of orientation. In contrast with proposition 2.8, we have
**Proposition 2.11**: _There exists a non-constant total function \(\Phi:\mathbb{R}^{o}\rightarrow\mathbb{N}^{*}\)._
**Proof.** Choose \(d_{1}<d_{2}<...<d_{j}\) from \(\mathbb{Q}\) for some fixed \(j\in\mathbb{N}\). For any \(\beta\in\mathbb{R}^{o}\), we define
\[\Phi(\beta)(0)=0\]
, and
\[\Phi(\beta)(n)=max(\{i\mid i\leq j\wedge(d_{i}\leq\beta(n))\}\cup\{0\})\mbox{ for all }n\geq 1.\]
One may easily check that the followings hold true for \(\Phi\):
1. \(\Phi(\beta)\in\mathbb{N}^{*}\).
2. \(\Phi\) is well-defined, i.e., if \(\alpha=^{o}\beta\) then \(\Phi(\alpha)=^{*}\Phi(\beta)\).
3. \(\Phi\) is non-constant. Assume two different oriented cuts \(\alpha,\beta\), such that \(d_{1}\not\in A_{\alpha}\) and \(d_{1}\in A_{\beta},d_{2}\not\in A_{\beta}\). Then \(\forall n\in\mathbb{N}\ \Phi(\alpha)(n)=0\), whereas, \(\exists k\forall n>k\ \Phi(\beta)(n)=1\).
The function \(\Phi\) can be computed by the following algorithm as well:
1. Put \(\Phi(\beta)(0):=0\);
2. put \(t=1\);
3. For i=1 to j do: \[\{\] while \((\beta(t)<d_{i})\) do: \[\{\] define \(\Phi(\beta)(t):=i-1\); put \(t=t+1\);
\}
\}
4. For all remaining \(t\), define \(\Phi(\beta)(t):=j\) and put \(t=t+1\). \(\dashv\)
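A small Python rendition of this computation (ours, for illustration) may be helpful: the thresholds \(d_{1}<\dots<d_{j}\) are fixed rationals, \(\beta\) is a Fraction-valued strictly increasing sequence, and \(\Phi(\beta)\) is returned as a function of \(n\); the two sample sequences mirror item 3 of the proof above.

```python
from fractions import Fraction

def make_phi(thresholds):
    """Phi of Proposition 2.11 for fixed rationals d_1 < ... < d_j:
    Phi(beta)(0) = 0 and, for n >= 1, Phi(beta)(n) = max{ i : d_i <= beta(n) } (or 0)."""
    d = [Fraction(x) for x in thresholds]
    def phi(beta):
        def value(n):
            if n == 0:
                return 0
            return max([i + 1 for i, di in enumerate(d) if di <= beta(n)], default=0)
        return value
    return phi

phi = make_phi([Fraction(1, 2)])                       # a single threshold d_1 = 1/2
alpha = lambda n: Fraction(1, 2) - Fraction(1, n + 1)  # stays below 1/2, so d_1 is not in A_alpha
beta = lambda n: 1 - Fraction(1, n + 1)                # exceeds 1/2 eventually, so d_1 is in A_beta
print([phi(alpha)(n) for n in range(5)])               # [0, 0, 0, 0, 0]
print([phi(beta)(n) for n in range(5)])                # [0, 1, 1, 1, 1]
```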
**Proposition 2.15**: _(\(\neg\exists\)-Pem) It is false that every oriented real can be approximated by rationals._
**Proof.** For an arbitrary bounded increasing sequence \(\alpha\) of rationals, let \(\beta_{\alpha}\) be an oriented real defined by \(\beta_{\alpha}(n)=\alpha(n)-\frac{1}{n+1}\), for \(n\in\mathbb{N}\), which specifies the cut \(A=\{q\in\mathbb{Q}\mid\exists k(q<\alpha(k))\}\). If \(\beta_{\alpha}\) can be approximated by rationals, then \(\beta_{\alpha}\) is a Cauchy sequence. Consequently, the sequence \(\alpha\) is also Cauchy and so it converges. Since \(\alpha\) was arbitrary, it would imply that every bounded monotone sequence has a limit in Brouwerian real line. That contradicts with \(\neg\exists\)-PEM ([17], page 268). \(\dashv\)
**Definition 2.16**: _We say that an oriented real \(\alpha\) can be approximated by almost rationals, if \(\forall n\exists\zeta\in\mathbf{Q}(\hat{\zeta}\leq\alpha\wedge\alpha\leq(\hat{ \zeta}+2^{-n}))\)._
**Proposition 2.17**: _Every oriented real can be approximated by almost rationals._
**Proof.** Let \(\beta\in\mathbb{R}^{o}\), and \(M\in\mathbb{Q}\) be an upper bound for \(\beta\). We define \(\zeta\) by induction. Let \(\zeta(0)=\beta(0)\), and assume we have defined \(\zeta(k)\). We define
\[\zeta(k+1)=\zeta(k)+2^{-n}t\text{, where }t\in\mathbb{N}\text{ and }\zeta(k)+2^{-n}t\leq\beta(k+1)<\zeta(k)+2^{-n}(t+1).\]
Note that \(\zeta\in\mathbf{Q}\), since \(image(\zeta)\subseteq\{\beta(0)+2^{-n}t\mid\beta(0)+2^{-n}t<M,t\in\mathbb{N}\}\) and the latter is not infinite. We claim that \(\hat{\zeta}\leq\beta\wedge\beta\leq\hat{\zeta}+2^{-n}\). For \(\hat{\zeta}\leq\beta\), we have for each \(k\in\mathbb{N}\) there exists \(m\) such that \(\zeta(k)<\beta(m)\). For the second clause, i.e., \(\beta\leq\hat{\zeta}+2^{-n}\), we have \(\beta(k)<\zeta(k)+2^{-n}\).\(\dashv\)
A subset \(B\) of \(\mathbb{N}\) is called (intuitionistically) enumerable if it is the image of a function on natural numbers (see [1]). Assuming the Kripke schema ([17], page 236):
\[\mathbf{KS}\hskip 14.226378pt\forall X\exists\alpha\forall n[n\in X\leftrightarrow \exists m(\alpha(n,m)=0)].\]
every inhabited 4 subset of natural numbers is (intuitionistically) enumerable [1].
Footnote 4: A set \(B\) is inhabited if \(\exists x(x\in B)\). We indicate that by \(B\#\emptyset\).
**Lemma 2.18**: _If \(B\subseteq\mathbb{Q}\) is (intuitionistically) enumerable and bounded from above, then there exists an oriented real number \(\alpha\) such that \(A_{\alpha}=C(B)=\{q\in\mathbb{Q}\mid\exists p(p\in B\wedge q<p)\}\)._
**Proof.** As \(B\subseteq\mathbb{Q}\) is (intuitionistically) enumerable, there exists a sequence \(\gamma:\mathbb{N}\rightarrow\mathbb{Q}\) such that \(\forall q\in\mathbb{Q}[q\in B\leftrightarrow\exists m(\gamma(m)=q)]\). Since \(B\) is bounded from above, the sequence \(\gamma\) has an upper bound. By lemma 2.6, there exists an oriented number \(\alpha\) such that \(A_{\alpha}=C(B)\). \(\dashv\)
Assume \(B\subseteq\mathbb{Q}\) is (intuitionistically) enumerable and upper bounded. We define the supremum of \(B\), \(sup(B)\), to be the oriented real number \(\alpha\) such that \(A_{\alpha}=C(B)\). Assuming the Kripke schema, \(\mathbf{KS}\), every upper bounded subset of \(\mathbb{Q}\) has a supremum.
**Proposition 2.19**: \(\mathbf{KS}\)_. Assume \(B\subseteq\mathbb{Q}\) is upper bounded. Then for \(\alpha=sup(B)\), the following hold._
1. \(\forall q\in\mathbb{Q}(q\in B\to q\leq\alpha)\)_,_
2. \(\forall p\in\mathbb{Q}(p<\alpha\rightarrow\exists q\in\mathbb{Q}(q\in B\wedge p<q))\)_._
**Proof.** (1.) Assume \(p\in B\). For any \(q<p\), we have \(q\in C(B)=A_{\alpha}\). Then there exists \(n\) such that \(q<\alpha(n)\). By items 1 and 4 of proposition 2.3, we have \(p\leq\alpha\).
(2.) Assume \(p<\alpha\). Then \(p\in A_{\alpha}\). As \(A_{\alpha}=C(B)\), there exists \(q\in B\) such that \(p<q\). \(\dashv\)
**Definition 2.20**: _Let \(D\subseteq\mathbb{Q}\) is lower bounded, and \(B=\{p\in\mathbb{Q}\mid\forall q(q\in D\to q\geq p)\}\). We define the infimum of \(D\), \(inf(D)\), to be the supremum of \(B\)._
**Proposition 2.21**: \(\mathbf{(KS)}\)_. Assume \(D\subseteq\mathbb{Q}\) is lower bounded. Then for \(\alpha=inf(D)\), the following hold._
1. \(\forall q\in\mathbb{Q}(q\in D\rightarrow\alpha\leq q)\)_,_
2. \(\forall p\in\mathbb{Q}[(\forall q\in\mathbb{Q}(q\in D\to p\leq q)) \to p\leq\alpha]\)_._
**Proof.** (1.) \(\alpha=inf(D)\) is an oriented real such that \(A_{\alpha}=C(B)=\{q\in\mathbb{Q}\mid\exists p(p\in B\wedge q<p)\}\), where \(B=\{p\in\mathbb{Q}\mid\forall q(q\in D\to q\geq p)\}\). Let \(q\in D\). Since \(A_{\alpha}=\{q\in\mathbb{Q}\mid\exists n(q<\alpha(n))\}\) and \(\alpha\) is strictly increasing, we have \(\alpha(n)\in A_{\alpha}\), for every \(n\). By the equality \(A_{\alpha}=C(B)\), it is derived that for each \(n\), there exists \(p\in B\) such that \(\alpha(n)<p\). Then by definition of \(B\), \(\alpha(n)<q\).
(2.) Assume \(p\in\mathbb{Q}\) is such that for all \(q\in D\), \(p\leq q\). Then \(p\in B\), and thus for all \(p_{0}<p\), \(p_{0}\in C(B)=A_{\alpha}\). According to definition of \(A_{\alpha}\), there exists \(n\) such that \(p_{0}<\alpha(n)\). Then, \((\forall p_{0}<p)\exists n(p_{0}<\alpha(n))\). By items 1 and 4 of proposition 2.3, we have \(p\leq\alpha\).\(\dashv\)
**Theorem 2.22**: \(\mathbf{(KS)}\)_. **The monotone convergence theorem**. For every upper bounded nondecreasing sequence \((\alpha_{n})_{n\in\mathbb{N}}\) of oriented reals, there exists an oriented real \(\alpha\) such that_
1. \(\forall n(\alpha_{n}\leq\alpha)\)_,_
2. \((\forall p\in\mathbb{Q})[(p<\alpha)\rightarrow(\exists m\forall n(n\geq m \to p<\alpha_{n}))]\)_._
**Proof.** Let \(B=\{p\in\mathbb{Q}\mid\exists n(p<\alpha_{n})\}\). The set \(B\) is upper bounded, so let \(\alpha=sup(B)\). For all \(n,m\in\mathbb{N}\), we have \(\alpha_{n}(m)\in B\), since \(\alpha_{n}\) is a strictly increasing sequence of rationals. By proposition 2.19, we have for each \(n\), for all \(m\), \(\alpha_{n}(m)<\alpha_{n}(m+1)\leq\alpha\). So, by items (1) and (4) of proposition 2.3, for each \(n\), \(\alpha_{n}\leq\alpha\).
If \(p<\alpha\), then by proposition 2.19, there exists \(q\in B\) such that \(p<q\). By definition of \(B\), there exists \(m\in\mathbb{N}\), \(p<q<\alpha_{m}\). As \((\alpha_{n})_{n\in\mathbb{N}}\) is a non-decreasing sequence, we have for all \(n\geq m\), \(p<\alpha_{n}\). \(\dashv\)
Among the Dedekind lines \(\mathbb{R}^{e}\), \(\mathbb{R}^{be}\), and \(\mathbb{R}^{d}\) introduced in [17], the line \(\mathbb{R}^{be}\) is much similar to \(\mathbb{R}^{o}\). The only difference between the cuts of \(\mathbb{R}^{be}\) and the cuts of \(\mathbb{R}^{o}\) is that the first one satisfies _strong monotonicity_, whereas the second one just fulfils monotonicity. On the other hand, as we have already noted, the reals in \(\mathbb{R}^{be}\) cannot be approximated by rationals, as desired as a requirement for the temporal line. Hence, the line \(\mathbb{R}^{be}\) could be assumed as an
appropriate mathematical model for the temporal line if there was a _continuity principle_ for it, like Brouwer's continuity principle for _Cauchy_ reals \(\mathbb{R}\). We note that the line \(\mathbb{R}\) is the _Cauchy_ completion of \(\mathbb{Q}\) and the line \(\mathbb{R}^{be}\) is the _order completion_ of \(\mathbb{Q}\).
Let us define a mapping \(\Psi:\mathbb{R}^{o}\rightarrow\mathbb{R}^{be}\) as follows:
\[\Psi(\alpha):=\{r\in\mathbb{Q}\mid(\exists s\in\mathbb{Q})[r<s\wedge\neg \neg\exists n(s<\alpha(n))]\}.\]
One can easily check that \(\Psi\) is well-defined and for all \(\alpha\in\mathbb{R}^{o}\), \(\Psi(\alpha)\in\mathbb{R}^{be}\).
**Proposition 2.23**: (**KS**). _The mapping \(\Psi:\mathbb{R}^{o}\rightarrow\mathbb{R}^{be}\) is surjective._
**Proof.** Assuming the Kripke schema, it can be shown that every inhabited subset of natural numbers is (intuitionistically) enumerable [1].
Let \(S\in\mathbb{R}^{be}\), and \(\gamma:\mathbb{N}\to S\) enumerates \(S\), i.e., \(\forall q\in\mathbb{Q}[q\in S\leftrightarrow\exists m(\gamma(m)=q)]\). Define
\[\alpha(n)=max\{\gamma(0),\gamma(1),...,\gamma(n)\}-\frac{1}{n+1}.\]
The sequence \(\alpha\) is in \(\mathbb{R}^{o}\). We claim \(\Psi(\alpha)=S\).
Assume \(r\in\Psi(\alpha)\). Then there exists \(s\in\mathbb{Q}\) such that \(r<s\) and \(\neg\neg\exists n(s<\alpha(n))\). For the sake of argument, assume \(\exists n(s<\alpha(n))\). Note that for all \(m\), \(\alpha(m)\in S\). By the _monotonicity_ of \(S\), we have \(s\in S\). Thus \(\neg\neg s\in S\). By the _strong monotonicity_, we have \(r\in S\).
For the converse, assume \(r\in S\). Let \(k\) be such that \(\gamma(k)=r\). By openness, there exists \(r^{\prime}\in S\) such that \(r<r^{\prime}\), and let \(\gamma(m)=r^{\prime}\) for some \(m\). Choose \(n>m\) such that \(r^{\prime}-r<\frac{1}{n+1}\). Then we have \(r<\alpha(n)\) and thus \(r\in\Psi(\alpha)\). \(\dashv\)
**Proposition 2.24**: _For all \(\alpha,\beta\in\mathbb{R}^{o}\)_
\[\alpha<\beta\mbox{ implies }\Psi(\alpha)<\Psi(\beta).\]
**Proof.** The relation "\(<\)" for cuts \(S,T\) is defined as follows: \(S<T:=\exists r>0(S+r\subset T)\) (see definition 5.4 of [17]). Suppose \(\alpha<\beta\). Then for some \(n\), we have \(\forall m(\alpha(m)<\beta(n))\). Let \(r=\beta(n+1)-\beta(n)\). It is easy to check that \(\Psi(\alpha)+r\subset\Psi(\beta)\). \(\dashv\)
### Arithmetic in \(\mathbb{R}^{o}\)
As explained in the introduction, we propose \(\mathbb{R}^{o}\) as a model for the temporal line. Each oriented real illustrates a moment. Then what does it mean to add or multiply two moments? What is a proper arithmetic of the temporal line?
The real line \(\mathbb{R}^{be}\) is a _field_ due to the _strong monotonicity_ property of its elements. A field \(\langle F,+,\cdot\rangle\) is an algebraic structure with two _functions_ \(+\) and \(\cdot\) from \(F\times F\) to \(F\), where \(F\), equipped with \(+\) or \(\cdot\), is a group, and \(\cdot\) distributes over \(+\). In our model, \(\mathbb{R}^{o}\), the operations \(+\) and \(\cdot\) are _relations_ instead of functions, i.e., it is an algebraic structure known as a _hyperstructure_ 5.
**Definition 2.25**: _A h-field is a tuple \(\langle F,+,*,0,1\rangle\) where \(+\subseteq F\times F\times F\) and \(*\subseteq F\times F\times F\) satisfy the following axioms:_
* Inhabitance. \[(\forall x,y)(\exists z)+(x,y,z)\qquad(\forall x,y)(\exists z)*(x,y,z).\]
* Identity. \[(\forall x)+(x,0,x)\qquad(\forall x)*(x,1,x).\]
* Inverse. \[(\forall x\exists y)+(x,y,0)\qquad(\forall x\exists y)*(x,y,1).\]
* Commutativity. \[(\forall x,y,z)(+(x,y,z)\leftrightarrow+(y,x,z))\qquad(\forall x,y,z)(*(x,y, z)\leftrightarrow*(y,x,z)).\]
* Associativity. \[(\forall x,y,z,w,v,u)((+(x,y,w)\wedge+(w,z,v)\wedge+(y,z,u)) \rightarrow+(x,u,v))\] \[(\forall x,y,z,w,v,u)((+(x,y,w)\wedge+(x,u,v)\wedge+(y,z,u)) \rightarrow+(w,z,v))\]
* Associativity. \[(\forall x,y,z,w,v,u)((*(x,y,w)\wedge*(w,z,v)\wedge*(y,z,u)) \rightarrow*(x,u,v))\] \[(\forall x,y,z,w,v,u)((*(x,y,w)\wedge*(x,u,v)\wedge*(y,z,u)) \rightarrow*(w,z,v)).\]
* Distributivity. \[(\forall x,y,z,w,v,u,r)((*(x,v,w)\wedge+(y,z,v)\wedge*(x,y,u) \wedge*(x,z,r))\rightarrow+(u,r,w))\] \[(\forall x,y,z,w,v,u,r)((+(u,r,w)\wedge+(y,z,v)\wedge*(x,y,u) \wedge*(x,z,r))\rightarrow*(x,v,w)).\]
We define an _addition relation_\(+\), and a _multiplication relation_\(*\) on \(\mathbb{R}^{o}\) as follows:
For \(\alpha,\beta,\gamma\in\mathbb{R}^{o}\), we let
1. \(+(\alpha,\beta,\gamma)\) if and only if \(\Psi(\alpha)+\Psi(\beta)=\Psi(\gamma)\), and
2. \(*(\alpha,\beta,\gamma)\) if and only if \(\Psi(\alpha)\cdot\Psi(\beta)=\Psi(\gamma)\).
**Proposition 2.26**: \((\mathbf{KS})\)_. \(\langle\mathbb{R}^{o},+,*,\hat{0},\hat{1}\rangle\) is a h-field._
**Proof.** The proof is straightforward by using the fact that \(\mathbb{R}^{be}\) is a field. \(\dashv\)
### The Oriented Continuity Principle
In this part, we propose the oriented continuity principle which expresses formally our sense of the notion of orientation. To do this, we study total functions \(\Phi\) from \((0,1]^{o}\) to \(\mathbb{N}^{*}\), where \((0,1]^{o}=\{\alpha\in\mathbb{R}^{o}\mid\hat{0}<\alpha\leq\hat{1}\}\). Similarly, we let \([0,1]^{o}=\{\alpha\in\mathbb{R}^{o}\mid\hat{0}\leq\alpha\leq\hat{1}\}\). We assume such function \(\Phi\) is well-defined, i.e., \(\alpha=^{o}\beta\) implies \(\Phi(\alpha)=^{*}\Phi(\beta)\).
**Lemma 2.27**: \((\mathbf{WCN})\)_. Let \(\Phi\in(0,1]^{o}\to\mathbb{N}^{*}\) be total. Then \(\forall\alpha,\beta\in(0,1]^{o}(\alpha\leq\beta\to\Phi(\alpha)\leq\Phi(\beta))\)._
**Proof.** For \(\theta\in\mathbb{N}^{*}\) define \(N_{\theta}=\{m\in\mathbb{N}\mid\exists n(m\leq\theta(n))\}\). We need to show that \(\alpha\leq\beta\) implies \(N_{\Phi(\alpha)}\subseteq N_{\Phi(\beta)}\). Assume \(\Phi(\alpha)(n)=k\) for some \(n,k\in\mathbb{N}\). The set \((0,1]^{o}\) is a spread and \(\Phi\) is total, so by \(\mathbf{WCN}\) we can find a \(t\) such that for each \(\delta\in(0,1]^{o}\), \(\bar{\delta}t=\bar{\alpha}t\) implies \(\Phi(\delta)(n)=\Phi(\alpha)(n)\). We have, for each \(\delta\in(0,1]^{o}\), if \(\bar{\delta}t=\bar{\alpha}t\) then \(k\in N_{\Phi(\delta)}\). Since \(\alpha\leq\beta\) there exists \(\lambda\in(0,1]^{o}\) such that \(\bar{\lambda}t=\bar{\alpha}t\) and \(\lambda=^{o}\beta\). Note that \(\Phi\) is well-defined, therefore \(k\in N_{\Phi(\beta)}\). Since we assumed \(k\in\mathbb{N}\) to be arbitrary, we have \(N_{\Phi(\alpha)}\subseteq N_{\Phi(\beta)}\). \(\dashv\)
The following theorem is by W. Veldman from our correspondence with him.
**Theorem 2.28**: \((\mathbf{WCN})\)_. Let \(\Phi\) be a total well-defined function from \((0,1]^{o}\) to \(\mathbb{N}^{*}\). Then_
\[\forall\alpha\in(0,1]^{o}\neg\neg\exists q\in\mathbb{Q}[q<\alpha\wedge\forall\beta\in(0,1]^{o}[q<\beta\leq\alpha\to\Phi(\alpha)=^{*}\Phi(\beta)]].\]
**Proof.** Let \(\alpha\in(0,1]^{o}\). Since \(\Phi(\alpha)\in\mathbb{N}^{*}\), by lemma 2.10, \(\neg\neg\exists n\forall m>n[\Phi(\alpha)(m)=\Phi(\alpha)(n)]\). For the sake of the argument, assume \(\exists n\forall m>n[\Phi(\alpha)(m)=\Phi(\alpha)(n)]\), and let \(n_{0}\) be such that \(\forall m>n_{0}[\Phi(\alpha)(m)=\Phi(\alpha)(n_{0})]\). By \(\mathbf{WCN}\), there is a \(t\) such that for all \(\beta\) in the spread \((0,1]^{o}\), if \(\bar{\beta}t=\bar{\alpha}t\) then \(\Phi(\beta)(n_{0})=\Phi(\alpha)(n_{0})\). It follows that, for all \(\beta\in(0,1]^{o}\) passing through \(\bar{\alpha}t=\langle\alpha 0,\alpha 1,...,\alpha(t-1)\rangle\), \(\Phi(\alpha)\leq\Phi(\beta)\).
Define \(q:=\alpha(t-1)\). Assume that \(\beta\in(0,1]^{o}\) and \(q<\beta\leq\alpha\). Find \(\lambda\) passing through \(\bar{\alpha}t\) such that \(\lambda=^{o}\beta\), and conclude \(\Phi(\alpha)\leq\Phi(\lambda)=^{*}\Phi(\beta)\). On the other hand, by lemma 2.27, since \(\beta\leq\alpha\) we have \(\Phi(\beta)\leq\Phi(\alpha)\). Hence, for every \(\beta\) satisfying \(q<\beta\leq\alpha\), \(\Phi(\beta)=^{*}\Phi(\alpha)\). This conclusion is obtained from the assumption: \(\exists n\forall m>n[\Phi(\alpha)(m)=\Phi(\alpha)(n)]\). As we know \(\neg\neg\exists n\forall m>n[\Phi(\alpha)(m)=\Phi(\alpha)(n)]\), we may conclude \(\forall\alpha\in(0,1]^{o}\neg\neg\exists q\in\mathbb{Q}[q<\alpha\wedge\forall \beta\in(0,1]^{o}[(q<\beta\leq\alpha\to\Phi(\alpha)=^{*}\Phi(\beta)]]\). \(\dashv\)
Note that for any \(\Phi:(0,1]^{o}\to\mathbb{N}^{*}\), there exists \(k\in\mathbb{N}\) such that for all \(n\), \(\Phi(\hat{1})(n)\leq k\), by definition of almost natural numbers.
**Proposition 2.29**: \((\mathbf{WCN})\)_. Let \(\Phi:(0,1]^{o}\to\mathbb{N}^{*}\) be total, \(k\in\mathbb{N}\) be such that for all \(n\), \(\Phi(\hat{1})(n)\leq k\), and \(T_{i}=\{q\in\mathbb{Q}\mid\Phi(\hat{q})=^{*}\underline{i}\}\), for \(0\leq i\leq k\), where \(\underline{i}=\langle i,i,i,...\rangle\in\mathbb{N}^{*}\). Then_
1. _if_ \(i<j\)_,_ \(q\in T_{i}\) _and_ \(p\in T_{j}\)_, then_ \(q<p\)_,_
2. _for each_ \(\alpha\in(0,1]^{o}\)_, if_ \(A_{\alpha}\cap T_{i}\#\emptyset\)_, then_ \(\Phi(\alpha)\geq\underline{i}\)_,_
3. _for each_ \(\alpha\in(0,1]^{o}\)_, if_ \(\Phi(\alpha)=^{*}\underline{i}\)_, then_ \(\neg(A_{\alpha}\cap T_{i}=\emptyset)\)_,_
4. _for each_ \(\alpha\in(0,1]^{o}\)_,_ \(\neg\neg\exists i[(0\leq i\leq k)\wedge(\Phi(\alpha)=^{*}\underline{i})]\)_._
**Proof.** (a) and (b) are derived from lemma 2.27. (c) follows from theorem 2.28, and (d) is a consequence of the definition of almost natural numbers and the fact that \(\forall\alpha\in(0,1]^{o}(\Phi(\alpha)\leq\Phi(\hat{1})\leq\underline{k})\). \(\dashv\)
**Theorem 2.30**: \(({\bf WCN+KS})\)_. Assume \(\Phi:(0,1]^{o}\rightarrow\mathbb{N}^{*}\). Let \(\delta_{i}=inf(T_{i})\), for \(T_{i}\)s \(0\leq i\leq k\), defined above, and \(E=\{\delta_{i}\mid 0\leq i\leq k\}\). For each \(\alpha\in(0,1]^{o}\), define \(L_{\alpha}=\{\gamma\in(0,1]^{o}\mid\gamma<\alpha\}\). Then_
\[\forall\alpha,\beta\in(0,1]^{o}[(L_{\alpha}\cap E=L_{\beta}\cap E)\rightarrow \neg\neg(\Phi(\alpha)=^{*}\Phi(\beta))].\]
**Proof.** First, observe that for \(\alpha\in(0,1]^{o}\) and \(0\leq i\leq k\),
* \(\delta_{i}\in L_{\alpha}\rightarrow\neg(A_{\alpha}\cap T_{i}=\emptyset)\), and
* \(A_{\alpha}\cap T_{i}\#\emptyset\rightarrow\delta_{i}\in L_{\alpha}\)
To prove the theorem, let \(\alpha,\beta\in(0,1]^{o}\) be such that \(L_{\alpha}\cap E=L_{\beta}\cap E\) and \(\neg(\Phi(\alpha)=^{*}\Phi(\beta))\) (1). We want to derive a contradiction. By proposition 2.29(d), we have \(\neg\neg\exists i[(0\leq i\leq k)\wedge(\Phi(\alpha)=^{*}\underline{i})]\) (2), and \(\neg\neg\exists i[(0\leq i\leq k)\wedge(\Phi(\beta)=^{*}\underline{i})]\) (3). Applying the intuitionistically valid statement \(\neg\neg(\varphi\wedge\psi)\leftrightarrow\neg\neg\varphi\wedge\neg\neg\psi\) to (1), (2) and (3) gives \(\neg\neg(\exists i,j[(0\leq i,j\leq k)\wedge(\Phi(\alpha)=^{*}\underline{i})\wedge(\Phi(\beta)=^{*}\underline{j})\wedge\neg(\Phi(\alpha)=^{*}\Phi(\beta))])\), which implies
\[\neg\neg(\exists i,j[(0\leq i,j\leq k)\wedge(\Phi(\alpha)=^{*}\underline{i}) \wedge(\Phi(\beta)=^{*}\underline{j})\wedge\neg(\underline{i}=^{*}\underline {j})]).\]
Now, for the sake of argument, assume
\[\psi\equiv\exists i,j[(0\leq i,j\leq k)\wedge(\Phi(\alpha)=^{*}\underline{i}) \wedge(\Phi(\beta)=^{*}\underline{j})\wedge\neg(\underline{i}=^{*}\underline{ j})].\]
Either \(i<j\) or \(j<i\); assume the first case. By proposition 2.29(b), \(\neg(A_{\alpha}\cap T_{j}\#\emptyset)\), i.e., \(A_{\alpha}\cap T_{j}=\emptyset\). So by \((a)\), \(\neg(\delta_{j}\in L_{\alpha})\), and by the assumption \(L_{\alpha}\cap E=L_{\beta}\cap E\), we have \(\neg(\delta_{j}\in L_{\beta})\). By \((b)\), \(\neg(A_{\beta}\cap T_{j}\#\emptyset)\), that is, \(A_{\beta}\cap T_{j}=\emptyset\). But this contradicts proposition 2.29(c), and thus \(\neg\neg(\Phi(\alpha)=^{*}\Phi(\beta))\). By assuming \(\psi\), we derived a contradiction. By \((\psi\rightarrow\varphi)\rightarrow(\neg\neg\psi\rightarrow\neg\neg\varphi)\), assuming \(\neg\neg\psi\) also yields a contradiction. So \(\neg\neg(\Phi(\alpha)=^{*}\Phi(\beta))\).\(\dashv\)
The theorem says that if \(\Phi\) is a total constructive function from the temporal interval \((0,1]^{o}\) to \(\mathbb{N}^{*}\) then there exists a _non-infinite_ subset \(E\) of \([0,1]^{o}\) such that
\[\forall\alpha,\beta\in(0,1]^{o}[(L_{\alpha}\cap E=L_{\beta}\cap E)\rightarrow \neg\neg(\Phi(\alpha)=^{*}\Phi(\beta))].\]
As mentioned in the introduction, a moment \(\alpha\) on the temporal continuum is a _flow_ from the past, i.e., the moment \(\alpha\) adheres to the collection of all moments that happened before it. We think of this flow as \(L_{\alpha}\). To construct a witness for a moment \(\alpha\), the occurrence of \(\alpha\) is compared with the occurrence of _non-infinitely_ many moments in \(E\). If the results of the comparison are the same for two moments \(\alpha\) and \(\beta\), then it is not false that the witness constructed for \(\alpha\) is also a witness for \(\beta\). In this way, theorem 2.30 formalizes our sense of the orientation discussed in the introduction.
We believe that the following form of theorem 2.30 is plausible to be accepted as a principle, which we name the _oriented continuity principle_, **OCP** :
**Oriented Continuity Principle:**
For every \(\Phi:(0,1]^{o}\rightarrow\mathbb{N}^{*}\),
if \(\forall\alpha\in(0,1]^{o}\)\(\exists\xi\in\mathbb{N}^{*}\)\([\Phi(\alpha)=^{*}\xi]\) then there exists a _non-infinite_ subset \(E\subseteq[0,1]^{o}\) such that \(\forall\alpha,\beta\in(0,1]^{o}\)\([(L_{\alpha}\cap E=L_{\beta}\cap E)\rightarrow(\Phi(\alpha)=^{*}\Phi(\beta))]\).
## Topologies For Continuum
What is the proper topology of the intuitive temporal continuum? Clearly, the topology must display the notion of orientation. As is emphasized in the introduction, since \(t\) sinks back to the past, the moment \(t\) is not distinguishable from moments in \((10:40,11:30]\). Therefore, if we have a constructive method to introduce witnesses for all moments on the temporal line then the witness for the moment \(t\) is also the witness for all moments in \((10:40,11:30]\). Thus, for a suitable topology of the temporal continuum, it seems that every total function is continuous.
As Brouwer distinguishes between the intuitive continuum and the "full continuum" of the unfinished elements of the unit segment (intuitionistic real line) ([14], page 74), we do not claim that our oriented line, equipped with topologies defined below is exactly the intuitive temporal continuum based on our intuition. We believe that the only difference between the intuitive continuum and the intuitive temporal continuum is taking into account the notion of _orientation_ as a new aspect of the continuum besides _inexhaustibility_ and _non-discreteness_. Our oriented line and the following topologies are our suggestions for modeling the temporal continuum.
### The Oriented Topology
**Definition 3.1**: _The Oriented Topology on the temporal interval \((0,1]^{o}\). A subset \(U\) of \((0,1]^{o}\) is called open if and only if for every \(\alpha\) in \(U\) there exists a non-infinite set \(E\subseteq[0,1]^{o}\) such that \(S_{E}(\alpha)\subseteq U\), where \(S_{E}(\alpha)=\{\beta\in(0,1]^{o}\mid L_{\alpha}\cap E=L_{\beta}\cap E\}\). We indicate the set of all open subsets by \({\cal T}_{1}\), and refer to \(((0,1]^{o},{\cal T}_{1})\) as the oriented topological space._
**Example 3.2**: The interval \((\frac{1}{4},\frac{1}{2}]^{o}=\{\alpha\in(0,1]^{o}\mid\frac{\hat{1}}{4}<\alpha \leq\frac{\hat{1}}{2}\}\) is open. For \(\alpha\in(\frac{1}{4},\frac{1}{2}]^{o}\), let \(E=\{\frac{\hat{1}}{4},\frac{\hat{1}}{2}\}\). Then \(L_{\alpha}\cap E=\{\frac{\hat{1}}{4}\}\), and \(S_{E}(\alpha)\subseteq(\frac{1}{4},\frac{1}{2}]\).
We must show that the class of all open subsets of \((0,1]^{o}\) is a topology. It is obvious that the empty set and \((0,1]^{o}\) belong to \({\cal T}_{1}\). One may easily see that the topology \({\cal T}_{1}\) is closed under arbitrary union. The next proposition shows that it is also closed under finite intersection.
**Proposition 3.3**: _The class of open subsets of \((0,1]^{o}\) is closed under finite intersection._
**Proof.** Let \(U_{1}\) and \(U_{2}\) be open, and \(\alpha\in U_{1}\cap U_{2}\). Let the non-infinite sets \(E_{1},E_{2}\) be such that \(S_{E_{1}}(\alpha)\subseteq U_{1}\), and \(S_{E_{2}}(\alpha)\subseteq U_{2}\). It is easily seen that \(S_{E_{1}\cup E_{2}}(\alpha)\subseteq U_{1}\cap U_{2}\). \(\dashv\)
### The Ordinary Topology
We first define the notion of semi-metric spaces, and then introduce the _ordinary_ topology based on this notion.
**Definition 3.4**: _Assume \({\mathbb{Q}}^{+}=\{q\in{\mathbb{Q}}\mid q>0\}\). A semi-metric space is a couple \((I,{\bf d})\), where \(I\) is a set and \({\bf d}\) is a relation \({\bf d}\subseteq I\times I\times{\mathbb{Q}}^{+}\) satisfying the following properties_
1. \(\forall x\in I\forall q\in\mathbb{Q}^{+}{\bf d}(x,x,q)\)_,_
2. \(\forall x,y\in I\exists q\in\mathbb{Q}^{+}{\bf d}(x,y,q)\)_,_
3. \(\forall x,y\in I\forall q,p\in\mathbb{Q}^{+}({\bf d}(x,y,q)\wedge q<p\to{\bf d}( x,y,p))\)_,_
4. \(\forall x,y\in I\forall q\in\mathbb{Q}^{+}({\bf d}(x,y,q)\leftrightarrow{\bf d }(y,x,q))\)_,_
5. \(\forall x,y,z\in I\forall q,p\in\mathbb{Q}^{+}({\bf d}(x,y,q)\wedge{\bf d}(y,z, p)\to{\bf d}(x,z,p+q))\) (triangle inequality)_,_
The intended meaning of \({\bf d}(x,y,q)\) is 'the distance of \(x\) from \(y\) is less than \(q\)'. With this reading, all of the properties P1-P5 are easily understood. For \(x\in I\) and \(p\in\mathbb{Q}^{+}\), let \(S_{p}(x)=\{y\in I\mid{\bf d}(x,y,p)\}\).
**Remark 3.5**: Note that two notions of metric space and semi-metric space are _classically_ the same. Assume \(\mathbb{R}\) is the classical real line. For a semi-metric space \((I,{\bf d})\), we can define a metric function \(dist:I\times I\to\mathbb{R}\), such that \(dist(x,y)=inf\{p\mid{\bf d}(x,y,p)\}\). One may easily verify that \(dist\) is a (classical) metric. Also if \((I,dist)\) is a (classical) metric space, defining \({\bf d}\subseteq I\times I\times\mathbb{Q}^{+}\) by \({\bf d}(x,y,q)\leftrightarrow dist(x,y)<q\), would make \((I,{\bf d})\) a semi-metric space.
**Proposition 3.6**: _If \((I,{\bf d})\) is semi-metric space, then the collection of subsets \(U\subseteq I\) satisfying the following property is a topology:_
\[\forall x\in U\exists p\in\mathbb{Q}^{+}(S_{p}(x)\subseteq U)\ \ (*).\]
**Proof.** It is clear that both sets \(I\) and the empty set satisfy \((*)\). We only need to show that the collection is closed under arbitrary union and finite intersection. Being closed under arbitrary union is trivial. We prove that it is closed under finite intersection. Assume \(U_{1}\) and \(U_{2}\) satisfy the property and let \(x\in U_{1}\cap U_{2}\). Since both \(U_{1}\) and \(U_{2}\) satisfy the property, there exist \(p_{1},p_{2}\in\mathbb{Q}^{+}\) such that \(S_{p_{1}}(x)\subseteq U_{1}\) and \(S_{p_{2}}(x)\subseteq U_{2}\). Let \(q=min(p_{1},p_{2})\). Then \(S_{q}(x)\subseteq U_{1}\cap U_{2}\). To show this, assume \(y\in S_{q}(x)\). Then \({\bf d}(x,y,q)\) holds, and by (P3), we have \({\bf d}(x,y,p_{1})\) and \({\bf d}(x,y,p_{2})\), which implies \(y\in S_{p_{1}}(x)\cap S_{p_{2}}(x)\subseteq U_{1}\cap U_{2}\). Hence \(U_{1}\cap U_{2}\) satisfies the property. \(\dashv\)
**Proposition 3.7**: _Let \({\bf d}\subseteq\mathbb{R}^{o}\times\mathbb{R}^{o}\times\mathbb{Q}^{+}\), defined by_
\[{\bf d}(\alpha,\beta,q):=\exists\zeta\in{\bf Q}\exists p\in\mathbb{Q}^{+}(p\leq q\wedge(\hat{\zeta}\leq\alpha,\beta\leq\hat{\zeta}+p)),\] where we use the expression \(a\leq x,y\leq b\) as a short abbreviation of \(a\leq x\leq b\wedge a\leq y\leq b\).
_Then \((\mathbb{R}^{o},{\bf d})\) is a semi-metric space._
**Proof.** We must show that \({\bf d}\) satisfies properties P1-P5.
* P1. It follows from proposition 2.17.
* P2. Let \(\alpha,\beta\) be two oriented reals. There exist rational numbers \(p_{1},p_{2},q_{1},q_{2}\) such that \(p_{1}<\alpha<p_{2},q_{1}<\beta<q_{2}\). Let \(\zeta\in{\bf Q}\) be defined by \(\zeta(n)=min(p_{1},q_{1})\) for each \(n\in\mathbb{N}\). Then
\[\hat{\zeta}\leq\alpha,\beta\leq(\hat{\zeta}+|min(p_{1},q_{1})|+max(p_{2},q_{2})).\]
* P3. Trivial.
* P4. Trivial.
* P5. Assume \(\alpha,\beta,\delta\in\mathbb{R}^{o}\) and for some \(p,q\in\mathbb{Q}\), \(\mathbf{d}(\alpha,\beta,q)\) and \(\mathbf{d}(\beta,\delta,p)\) hold. Then there exist \(\zeta_{1},\zeta_{2}\in\mathbf{Q}\), \(q_{1}\leq q\) and \(p_{1}\leq p\), such that \[\hat{\zeta_{1}}\leq\alpha,\beta\leq(\hat{\zeta_{1}}+q_{1})\] and \[\hat{\zeta_{2}}\leq\beta,\delta\leq(\hat{\zeta_{2}}+p_{1}).\] Let \(\zeta(n)=min(\zeta_{1}(n),\zeta_{2}(n))\), for each \(n\). We claim that \[\hat{\zeta}\leq\alpha,\delta\leq(\hat{\zeta}+(p_{1}+q_{1})).\] From \(\hat{\zeta_{1}}\leq\alpha\) and \(\hat{\zeta_{2}}\leq\alpha\), we derive \(\hat{\zeta}\leq\alpha\). We also derive \(\hat{\zeta}\leq\delta\) similarly. Assume an arbitrary \(t\in\mathbb{N}\). The fact \(\alpha\leq\hat{\zeta_{1}}+q_{1}\) implies that there exists \(k\in\mathbb{N}\) such that \(\alpha(t)<\hat{\zeta_{1}}(k)+q_{1}\). On the other hand, since \(\hat{\zeta_{1}}\leq\beta\) and \(\beta\leq\hat{\zeta_{2}}+p_{1}\), there exists \(k^{\prime}\in\mathbb{N}\) such that \(\hat{\zeta_{1}}(k)<\hat{\zeta_{2}}(k^{\prime})+p_{1}\), and consequently \(\alpha(t)<\hat{\zeta_{2}}(k^{\prime})+p_{1}+q_{1}\). Let \(k^{\prime\prime}=max(k,k^{\prime})\). Since both of \(\hat{\zeta_{1}},\hat{\zeta_{2}}\) are strictly increasing, we have \(\hat{\zeta_{1}}(k)<\hat{\zeta_{1}}(k^{\prime\prime}),\ \hat{\zeta_{2}}(k^{\prime})<\hat{ \zeta_{2}}(k^{\prime\prime})\), and hence \(\alpha(t)<\hat{\zeta_{1}}(k^{\prime\prime})+q_{1}+p_{1}\) and \(\alpha(t)<\hat{\zeta_{2}}(k^{\prime\prime})+p_{1}+q_{1}\). Then \(\alpha(t)<(min(\zeta_{1}(k^{\prime\prime}),\zeta_{2}(k^{\prime\prime}))-\frac {1}{k^{\prime\prime}+1})+p_{1}+q_{1}\). It is shown that \(\alpha\leq\hat{\zeta}+p_{1}+q_{1}\). Similar argument works for \(\beta\leq\hat{\zeta}+p_{1}+q_{1}\).
\(\dashv\)
**Definition 3.8**: _The Ordinary Topology on \(\mathbb{R}^{o}\). The ordinary topology on \(\mathbb{R}^{o}\) is the topology induced by the semi-metric space \((\mathbb{R}^{o},\mathbf{d})\), where \(\mathbf{d}\) is defined above. We denote the ordinary topological space by \((\mathbb{R}^{o},\mathcal{T}_{2})\)._
### A consequence of OCP
We use **OCP** to prove that:
**Theorem 3.9**: _Every total function from \(((0,1]^{o},\mathcal{T}_{1})\) to \((\mathbb{R}^{o},\mathcal{T}_{2})\) is continuous._
**Proof.** Let \(f:((0,1]^{o},\mathcal{T}_{1})\rightarrow(\mathbb{R}^{o},\mathcal{T}_{2})\) be total. We prove that for each \(n\in\mathbb{N}\), for every \(\alpha\in(0,1]^{o}\), there exists \(S_{E_{n}}(\alpha)\in\mathcal{T}_{1}\) such that for every \(\beta\in S_{E_{n}}(\alpha)\), \(\mathbf{d}(f(\alpha),f(\beta),2^{-n})\). Consider rational numbers \(q_{i}=i2^{-n}\), \(0\leq i\leq 2^{n}\). For each \(\delta\in(0,1]^{o}\), define \(\phi(f(\delta))(0)=0\) and \(\phi(f(\delta))(k)=i\) if and only if \(q_{i}\leq f(\delta)(k)<q_{i+1}\). The function \(\Phi(\delta)=\phi(f(\delta))\) from \((0,1]^{o}\) to \(\mathbb{N}^{*}\) is total and well-defined. By **OCP**, there exists a non-infinite subset \(E\subseteq[0,1]^{o}\), such that for every \(\alpha,\beta\), if \(\beta\in S_{E}(\alpha)\) then \(\Phi(\alpha)=^{*}\Phi(\beta)\). It is easily seen that for \(\zeta_{1}(i)=q_{\Phi(\alpha)(i)}\in\mathbf{Q}\) and \(\zeta_{2}(i)=q_{\Phi(\beta)(i)}\in\mathbf{Q}\), we have \(\zeta_{1}=\zeta_{2}\) and
\[\hat{\zeta_{1}}\leq f(\alpha),f(\beta)\leq(\hat{\zeta_{1}}+2^{-n}),\]
i.e., \(\mathbf{d}(f(\alpha),f(\beta),2^{-n})\). \(\dashv\)
## Concluding Remarks
Our temporal continuum suggests a new framework for constructive analysis. We need to investigate the following questions as further work:
1. Is every total function on \(\mathbb{R}^{o}\) with _one_ topology (either the oriented topology or the ordinary one) continuous, or do we need a third topology?
2. What happens to the _uniform continuity theorem_?
3. What does the _intermediate value theorem_ look like?
**Acknowledgement**. We are very thankful to Dirk van Dalen, Mark van Atten and Wim Veldman for reading a draft of this paper and for their helpful comments and suggestions. Moreover, our correspondence with Wim Veldman resulted in modifications of some crucial claims in this paper.
|
2306.06531 | AutoTAMP: Autoregressive Task and Motion Planning with LLMs as
Translators and Checkers | For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code. | Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan | 2023-06-10T21:58:29Z | http://arxiv.org/abs/2306.06531v3 | # AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
###### Abstract
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. The recent and remarkable advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, many existing approaches either translate the natural language directly into robot trajectories, or factor the inference process by decomposing language into task sub-goals, then relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making such factorization untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediary task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains.1
Footnote 1: **Datasets and Codes are available at [https://github.com/yongchao98/AutoTAMP](https://github.com/yongchao98/AutoTAMP)**
**Keywords:** Large Language Models, Task and Motion Planning, Human-Robot Interaction
## 1 Introduction
A long-standing goal of robotics has been to provide agents with the ability to find and execute optimal plans for complex tasks. Robots not only need to reason about the task in the environment and find a satisfying sequence of actions but also verify the feasibility of executing those actions according to the robot's motion capabilities. This problem is referred to as task and motion planning (TAMP), and there has been considerable research on efficient algorithms [1]. Classic solutions rely on specifying tasks in a dedicated planning representation amenable to such algorithms [2, 3, 4, 5].
While this approach to task specification has been quite successful, directly using these representations requires training and experience, making them a poor candidate as an interface for non-expert users. As an alternative, natural language (NL) provides an intuitive and flexible mode of task descriptions. Excitingly, pre-trained large language models (LLMs) have demonstrated surprisingly good performance on many language-related tasks [6], and there has been an associated burst of research investigating their various potential applicability to TAMP [7, 8, 9, 10, 11, 12, 13].
Early efforts showed promising results using LLMs as direct task planners [7], i.e., generating a sequence of sub-tasks based on a set of natural language instructions, but were limited by a lack
of feedback and verification of executability of the sub-task sequence. Further research addressed executability by connecting sub-tasks to control policy affordance functions [8], providing environmental feedback of robot actions [10], and interleaving action feasibility checking with LLM action proposals [9]. However, these approaches struggle with various task complexities, such as temporally-dependent multi-step actions, action sequence optimization, task constraints, etc. Furthermore, these frameworks factor the planning problem and use LLMs to infer a task plan separately from the motion plan. In many situations, the task and motion plan must be optimized together to fulfill the task. For instance, when the task is 'reach all locations via the shortest path', the order of places to be visited (task planning) depends on the geometry of the environment and the related motion optimization. Unfortunately, as we show in this paper, LLMs do not seem capable of directly generating trajectories, possibly related to limitations in complex spatial and numerical reasoning [14; 15].
Notably, the task representations used in classic TAMP, such as PDDL [2] or Temporal Logics [3], are sufficiently expressive to specify task complexities from which existing planning algorithms can find and verify action sequences satisfying those specifications. To benefit from both the user-friendliness of NL and the capabilities of existing TAMP algorithms, we approach the problem by using LLMs to translate from high-level task descriptions to formal task specifications. We are not the first to use LLMs as translators of task specifications [16; 17], but our work contributes several new ideas that improve the performance of such approaches. First, previous work focused on translating natural language into PDDL goals [16] or Linear Temporal Logic (LTL) [18], which only considered the problem of task planning. Here we utilize Signal Temporal Logic (STL) as the intermediary representation, from which the planner directly optimizes the whole trajectory, i.e., task and motion planning together, thus improving the planning success rate. Second, we also use re-prompting to automatically correct task specification errors, both via an existing syntactic error correction framework and a novel semantic error checking and correcting loop using LLMs, resulting in significantly improved performance. We also conduct an ablation study integrating a fine-tuned NL-to-STL model [19] with our AutoTAMP approach and show that GPT-4 few-shot learning is competitive with fine-tuning. We conclude that in-context learning with pre-trained LLMs is well suited for language-to-task-specification translation for solving TAMP problems.
In this paper, we contribute the following:
* Pre-trained LLM (GPT-3 and GPT-4) few-shot translation from NL task descriptions to STLs, and execution of trajectories via an STL planner.
* A novel autoregressive re-prompting technique for automatic checking and correcting of semantic errors in generated task specifications via LLMs. We show that, when combined with automatic syntactic correction, this technique greatly improves task success rates.
* Comprehensive experiments in challenging task domains, including several multi-agent tasks. We show that our approach outperforms direct LLM planning methods for tasks with hard geometric and temporal constraints.
* In addition to our code, we publish a dataset of 1400 test cases consisting of the language instruction, environments, produced STL specifications, and resulting trajectories.
## 2 Problem Description
As shown in Figure 1, we aim to convert a natural language instruction, including spatial and temporal constraints, into a motion plan for a robot encoded as a set of timed waypoints, e.g., \((x_{i},y_{i},t_{i})\). The environment state is encoded as a set of named obstacles described as polygons and is provided as additional context. Our task is to generate a constraint-satisfying trajectory based on the given instruction and the environment state. The robot must not surpass its maximum velocity, and the total operation time should not exceed the task time limit. We assume that the full trajectory is a linear interpolation between the timed waypoints; complex trajectories can be specified by dense waypoint sequences.
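To make the output format concrete, the sketch below (our own illustration; the function and variable names are hypothetical and not taken from the paper's code) checks a list of timed waypoints against the velocity and time-limit constraints under the linear-interpolation assumption:

```python
import math

def satisfies_kinematic_constraints(waypoints, v_max, t_limit):
    """Check a piecewise-linear trajectory given as [(x, y, t), ...]."""
    times = [t for _, _, t in waypoints]
    # Waypoints must be strictly ordered in time and finish within the limit.
    if any(t2 <= t1 for t1, t2 in zip(times, times[1:])):
        return False
    if times[-1] - times[0] > t_limit:
        return False
    # With linear interpolation, the speed on each segment is distance / duration.
    for (x1, y1, t1), (x2, y2, t2) in zip(waypoints, waypoints[1:]):
        if math.hypot(x2 - x1, y2 - y1) / (t2 - t1) > v_max:
            return False
    return True

# Example: three waypoints, maximum speed 1.0, task time limit 10.0.
print(satisfies_kinematic_constraints([(0, 0, 0.0), (1, 0, 1.5), (1, 1, 3.0)], 1.0, 10.0))
```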
The example exchange shown in Figure 1 reads:

User prompt 1: Task explanation ('Hope you can help me plan the trajectory to fulfill the instruction ...'), few-shot examples, environment settings ('[name: room1, color: red, position and size: [0, 0.9, -1, -0.5], function: kitchen] ...'), and the instruction 'At some point go to the yellow box, and at some point go to the red box, and then enter the green box, and always do not enter the blue area.'

GPT-4 response 1: [(-1.3, -1.3, 0), (-0.95, -0.2, 0.5), (0.45, 0.95, 0.9), (0.4, 0.95, 1.5), ...]

User prompt 2: Your trajectory between subpoint 3 and subpoint 4 enters the blue box; you should avoid it.

GPT-4 response 2: [(-1.3, -1.3, 0), (-0.95, -0.2, 0.5), (0.45, 0.95, 0.9), (-0.35, 0.55, 1.45), ...]
## 3 Methods
Figure 2 illustrates three of the approaches we compare in our work, each using LLMs in some capacity. Each takes as input (1) a text-based representation of the global environment state, (2) in-context examples for few-shot learning, and (3) a natural language instruction. For methods using feedback or re-prompting, we describe details of the re-prompt context in Appendix A.
### LLM End-to-end Motion Planning
One natural idea is to use an LLM to handle both the task and motion planning by directly generating a trajectory for a given language instruction; we refer to this as LLM End-to-end Motion Planning. Where the generated trajectory violates constraints, we re-prompt the model with the constraint violation to produce another trajectory, allowing up to five such re-prompts. Figure 1 illustrates this pipeline, including a specific failure case with two constraint-violating trajectories.
### LLM Task Planning
A more common approach is to use an LLM to handle the task planning by directly generating a sequence of sub-tasks from a given language instruction; we refer to this as LLM Task Planning. To generate a final trajectory, the sub-tasks are handled by an independent motion planner. In this work, these sub-tasks are limited to navigation actions, and the motion planning is handled by the STL planner used by our proposed approach; this permits fair comparison of results across methods. Each sub-task is converted to STL to be consumed by the planner (more details are presented in Appendix B). We evaluate and compare against three methods that each use LLMs for task planning: (1) Naive Task Planning, (2) SayCan, and (3) LLM Task Planning + Feedback.
**Naive Task Planning** As proposed by Huang et al. [7], we evaluate using LLMs to generate the entire sub-task sequence without checking for executability.
**SayCan** Alternatively, an LLM can be autoregressively prompted to generate each subsequent sub-task conditioned on the previous sub-tasks in the sequence. The next sub-task can be selected
Figure 1: GPT-4 failure case for direct end-to-end trajectory planning. The orange line shows the correct path obeying the instruction. The purple and gray dashed lines show the trajectories from GPT-4 after first and second prompts, respectively. GPT-4 generates a list of \((x,y)\) locations with associated timepoints. The initial prompt describes the language modeling task, environment state, and instruction. Each object is a rectangle described by \((x,y)\) boundaries.
from the top K candidates by combining the language model likelihood with a feasibility likelihood of the candidate action and choosing the most-likely next sub-task. This is the method proposed by Ahn et al. [8]. We set K to 5 in our evaluations.
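A rough sketch of this selection rule (our own illustration with hypothetical inputs; the actual implementation combines LLM token probabilities with learned affordance functions as in [8]):

```python
def select_next_subtask(llm_scores, feasibility_scores, k=5):
    """Pick the next sub-task SayCan-style from the top-k LLM candidates.

    llm_scores: dict mapping candidate sub-task -> language-model likelihood
    feasibility_scores: dict mapping candidate sub-task -> feasibility likelihood
    """
    # Keep only the k candidates the language model ranks highest.
    top_k = sorted(llm_scores, key=llm_scores.get, reverse=True)[:k]
    # Combine the two likelihoods and choose the most likely sub-task.
    return max(top_k, key=lambda a: llm_scores[a] * feasibility_scores.get(a, 0.0))
```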
**LLM Task Planning + Feedback** A third task planning method combines full sequence generation with feasibility checking to both find sub-task sequences that satisfy the full task and verify their feasibility before execution. For any infeasible sub-tasks, the LLM can be re-prompted with feedback about the infeasible actions to generate a new sub-task sequence. This is similar to the hierarchical method proposed by Lin et al. [9] but with feedback for re-prompting.
### Autoregressive LLM Specification Translation&Checking + Formal Planner
Unlike LLM Task Planning, our approach translates NL to STL with an LLM and then plans the trajectory with an STL planner, as shown in Figure 2. We include two re-prompting techniques to improve translation performance: one for syntactic errors and another for semantic errors. We show one example of the prompt in Figure 3.
**Signal Temporal Logic Syntax** In this work, we use STL [20] as a formal task specification that supports continuous real-time constraints suitable for time-critical missions. An STL formula is defined recursively according to the following syntax:
\[\phi\quad::=\quad\pi^{\mu}\quad|\quad\neg\phi\quad|\quad\phi\ \land\ \varphi\quad|\quad\phi\ \lor\ \varphi\quad|\quad\mathbf{F}_{[a,b]}\phi\quad|\quad\mathbf{G}_{[a,b]}\phi\quad| \quad\phi\mathbf{U}_{[a,b]}\varphi \tag{1}\]
where \(\phi\) and \(\varphi\) are STL formulas, and \(\pi^{\mu}\) is an atomic predicate. \(\neg\) (negation), \(\land\) (and), \(\lor\) (or), \(\Rightarrow\) (imply), and \(\Leftrightarrow\) (equal) are logical operators. \(\mathbf{F}_{[a,b]}\) (eventually/finally), \(\mathbf{G}_{[a,b]}\) (always/globally), and \(\mathbf{U}_{[a,b]}\) (until) are temporal operators with real-time constraints \(t\in[a,b]\). Temporal operators with time constraints are illustrated in Appendix C, and other operators can be expressed using the basic syntax. The action primitives in this work are 'enter(room_name)' and 'not_enter(room_name)'. To avoid issues of parentheses matching, the generated STLs adhere to pre-order form, as discussed in Appendix D.
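As a concrete illustration (our own example, patterned after the task specifications listed in Section 4), an instruction such as 'reach the goal within 5 seconds and always avoid the walls' corresponds to

\[\mathbf{F}_{[0,5]}\,enter(goal)\;\land\;\mathbf{G}_{[0,T]}\,\neg\,enter(walls),\]

where \(T\) is the overall task time limit; in the action-primitive notation used in Section 4 this reads finally[0, 5] enter(goal) and globally not_enter(walls), and in the pre-order form consumed by the planner the operator simply precedes its operands (the exact serialization is given in Appendix D).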
**STL Trajectory Planner** We use a state-of-the-art STL planner [21] capable of multi-agent planning. It defines the validity of an STL formula with respect to a trajectory and then optimizes the
Figure 2: Illustration of different approaches applying LLMs for task and motion planning.
trajectory to maximize the validity. The planner only considers whether the state is within or outside a region, and outputs a sequence of location & time pairs.
**Syntactic Checking & Semantic Checking** Open-loop translation can suffer from syntactic and semantic errors. We use two re-prompting techniques to automatically correct such errors. Like Skreta et al. [22], we use a verifier to check for syntax errors; any found are provided as feedback when re-prompting the LLM to generate corrected STL. We repeat until no errors are found (up to five iterations). For semantic errors, we propose a novel autoregressive re-prompting technique that feeds the STL planner's generated state sequence (i.e., [[in(road), 0], [in(red kitchen), 0.5], [in(blue restroom2), 1.2]...]) back into the LLM to check whether it satisfies the original instruction. If it does not, the LLM is prompted to modify the STL, which repeats the syntactic and semantic re-prompting. This process terminates in the case of no detected errors or no change in STL (up to three iterations).
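The overall loop can be summarized as follows (our own sketch; `llm`, `check_syntax`, and `stl_planner` are hypothetical stand-ins for the prompted GPT model, the STL syntax verifier, and the formal planner, and the iteration caps mirror the limits stated above):

```python
def autotamp(instruction, environment, llm, stl_planner, check_syntax,
             max_syntax_iters=5, max_semantic_iters=3):
    """Translate NL -> STL, repair syntax errors, then verify semantics."""
    stl = llm.translate_to_stl(instruction, environment)
    trajectory = None
    for _ in range(max_semantic_iters):
        # Syntactic loop: feed verifier errors back until the STL parses.
        for _ in range(max_syntax_iters):
            errors = check_syntax(stl)
            if not errors:
                break
            stl = llm.correct_syntax(stl, errors)
        # Plan, then summarize the plan as a timed state sequence,
        # e.g. [["in(road)", 0], ["in(red kitchen)", 0.5], ...].
        trajectory = stl_planner.plan(stl, environment)
        states = trajectory.to_state_sequence()
        # Semantic loop: ask the LLM whether the behavior matches the instruction.
        verdict = llm.check_semantics(instruction, states)
        if verdict.satisfied:
            break
        new_stl = llm.correct_semantics(instruction, stl, verdict.feedback)
        if new_stl == stl:  # no change proposed -> terminate
            break
        stl = new_stl
    return trajectory
```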
Figure 3: An example of full prompt used by our AutoTAMP framework
## 4 Experimental Design
Each of our task scenarios is set in a 2D environment and entails navigation of one or more robots; the robots have extent in the environment and are initialized with varying start positions. Each environment is comprised of regions with a shape, location, and characteristics (e.g., color, name, function). For each method we evaluate, the LLM is initially prompted with a description of the language task (e.g. task planning) and five in-context examples (these depend on the language task). To mitigate variance across prompts, we initially tested six different sets of examples for each method and chose the one that performed best. Through this testing, we found that the variance over prompts was insignificant relative to overall performance.
We evaluated the different methods described in Section 3 across six different task scenarios (three single-agent and three multi-agent) with different combinations of geometric and temporal constraints. For each scenario description below, we indicate the presence of these constraints with G and T, respectively. For each method, we evaluate performance with both GPT-3 and GPT-4 as the LLM. Note that in multi-agent scenarios, we do not test SayCan or LLM Task Planning + Feedback because these methods are not straightforwardly adaptable for multiple agents. We also terminate and report failure for test cases that take more than 90 minutes. We automatically check resulting trajectories via hard-coded checkers. The full set of experiments took two weeks using four 16-core CPUs; the cost of LLM API calls was ~1500 USD.
**HouseWorld1 (single-agent)** As shown in Figure 4(a), this is a house environment from Finucane et al. [23]. We first manually constructed 10 different instructions of varying complexity (see Appendix E) before prompting GPT-4 to paraphrase each into 9 differently worded instructions with the same meaning, resulting in 100 total instructions for this environment. For each instruction, we
Figure 4: HouseWorld and Chip’s Challenge are single-agent scenarios. Overcooked, Rover, and Wall are multi-agent scenarios. The black square in Overcooked is inadmissible. The lines indicate the correct trajectories following the instructions. For the HouseWorld and Chip’s Challenge environments, the black round dot and pentagonal dot indicate the start and end positions, respectively.
randomly initialize between two start-end position pairs for 200 total test cases. For this scenario, we do not impose a hard time constraint for the planned trajectory.
_Example instruction:_ 'Visit two rooms with color closest to red, but not the pure red color.'
_Corresponding STL_: (finally room_purple and finally room_pink).
**HouseWorld2 (T, single-agent)** This scenario is identical to HouseWorld1, but each planned trajectory is subjected to a hard time constraint. This time limit is pre-determined by completing the correct trajectory with 0.8 maximum velocity.
_Example instruction:_ 'Visit two rooms with color closest to red, but not the pure red color. The task should be completed within 10 seconds.'
_Corresponding STL_: (finally[0, 10] room_purple and finally[0, 10] room_pink).
The remaining task scenarios were designed with specific rules and goals for the agent(s) to follow. For each scenario, GPT-4 was used to paraphrase the original description into 20 uniquely worded variants with the same meaning. We instantiate three different instances of the environment for each scenario and randomize five different start/end location pairs for a total of 300 test cases. Note that the avoidance of collision among multiple agents has been inherently encoded in the STL planner.
**Chip's Challenge (G, single-agent)** Figure 4(b) shows a scenario inspired by a level from Chip's Challenge, a classic puzzle solving game with strict geometric and logical constraints. The robot must reach all goal regions (blue) but must acquire a unique key to pass through the corresponding door. The task planning must obey both the spatial and logical constraints.
_Example instruction:_ 'Try to reach all the goals but you have to reach the corresponding key first to open the specific door. For example, you have to reach key1 ahead to open door1. Also remember always do not touch the walls.'
_Corresponding STL_: finally enter(goal\({}_{i}\)) (i = 1,2,3...) and ( negation enter(door\({}_{j}\)) until enter(key\({}_{j}\)) (j = 1,2,3...) ) and globally not_enter(walls)
**Overcooked (G & T, multi-agent)** Figure 4(c) shows a scenario inspired by Overcooked, a popular cooking simulation game with strict time constraints. The agents must cooperatively gather ingredients and return to CookingRoom in a limited time. In this scenario, the multi-agent motion planning is challenged by limited space for agents to maneuver.
_Example instruction:_ 'Enter all the ingredient room to pick up food. Once entered ingredient rooms, go to cooking room within 3 seconds. And all the agents should not collide each other and black obstacles.'
_Corresponding STL_: finally enter(ingredient\({}_{i}\)) (i = 1,2,3...) and ( enter(ingredient\({}_{j}\)) imply finally[0, 3] enter(CookingRoom) (j = 1,2,3...) ) and globally not_enter(walls)
**Rover (G & T, multi-agent)** Figure 4(d) is a scenario used by Sun et al. [21]. Multiple agents must reach each observation region (blue goals) before transmitting the observations by entering a red region, all while subjected to time and energy constraints.
_Example instruction:_ 'All rovers must reach the blue charging station within 5 units of time each time they exit it. Once they reach their destination, they need to get to a yellow transmitter within 2 time units to send the collected information to the remote control. Rovers must keep clear of black walls and other rovers. All target areas need to be visited.'
_Corresponding STL_: finally enter(goal\({}_{i}\)) (i = 1,2,3...) and ( enter(goal\({}_{j}\)) imply finally[0, 2] ( enter(Transmitter_1) or enter(Transmitter_2) ) (j = 1,2,3...) ) and ( not_enter(Charging) imply finally[0, 5] enter(Charging) and globally not_enter(walls) )
**Wall (G & T, multi-agent)** Figure 4(e) is also a scenario from Sun et al. [21]. Multiple agents must occupy each goal region (blue) while subject to a time constraint and a maneuver bottleneck.
_Example instruction:_ 'Reach all the goals and stay there. Take care of the collision among each agent and to the walls.'
_Corresponding STL_: finally globally enter(goal\({}_{i}\)) (i = 1,2,3...) and globally not_enter(walls)
Appendix F shows key frame sequences of correctly planned trajectories for the three multi-agent scenarios as produced via our AutoTAMP method.
## 5 Results
### Task Success Rates
We report the task success rates for the single-agent and multi-agent scenarios in Table 1 and Table 2, respectively. For HouseWorld1 (Figure 4(a)) with no hard time constraint, we find that all methods using LLMs as task planners outperform our approach; this environment permits direct trajectories between any two positions and thus lacks geometric challenges that such methods will struggle with. When adding a strict time constraint (HouseWorld2), we see that such methods perform much worse while AutoTAMP's success rate persists. For the remaining tasks that include geometric constraints, LLM End-to-end Motion Planning and Naive Task Planning both perform quite poorly. Unsurprisingly, we observe a general trend that GPT-4 outperforms GPT-3 for all methods.
### Failure examples for LLM Task Planning and AutoTAMP Methods
**LLM Task Planning**: We observe that LLM Task Planning methods may fail at tasks that require long-horizon planning. Figure 5 below illustrates two examples. In (a), these methods inefficiently grab the key for a particular door only when that door is the next immediate obstacle. Rather than grab both key1 and key2 that are near the starting position before unlocking door1, it sequences
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \begin{tabular}{c} **HouseWorld1** \\ **(soft time cst.)** \\ \end{tabular} & \begin{tabular}{c} **HouseWorld2** \\ **(hard time cst.)** \\ \end{tabular} &
\begin{tabular}{c} **Chip Challenge** \\ **(hard geo. cst.)** \\ \end{tabular} \\ \hline GPT-3 end2end & \(0\%\) & \(0\%\) & \(0\%\) \\ GPT-3 naive task planning & \(74\%\) & \(36\%\) & \(0\%\) \\ SayCan & \(75.5\%\) & \(36\%\) & \(0\%\) \\ GPT-3 task planning/feed. & **79\%** & \(40\%\) & \(0\%\) \\ GPT-3/STL & \(28\%\) & \(27\%\) & \(29\%\) \\ GPT-3/STL/Syn. & \(49\%\) & \(47\%\) & \(66\%\) \\
**GPT-3/STL/Syn./Sem. (AutoTAMP)** & \(62\%\) & **62\%** & **74.3\%** \\ \hline GPT-4 end2end & \(9.5\%\) & \(9.5\%\) & \(0\%\) \\ GPT-4 naive task planning & \(90\%\) & \(45\%\) & \(0\%\) \\ SayCan & \(90\%\) & \(47.5\%\) & \(0\%\) \\ GPT-4 task planning/feed. & **92\%** & \(49\%\) & \(0\%\) \\ GPT-4/STL & \(43.5\%\) & \(42\%\) & \(42.7\%\) \\ GPT-4/STL/Syn. & \(59.5\%\) & \(59\%\) & \(70\%\) \\
**GPT-4/STL/Syn./Sem. (AutoTAMP)** & \(82.5\%\) & **82\%** & **87.7\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Task success rates in **single-agent** scenarios. The scenarios have hard temporal or geometric constraints.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \begin{tabular}{c} **Overcooked** \\ **(multi-agent, hard time \& geo. cst.)** \\ \end{tabular} & \begin{tabular}{c} **Rover** \\ **(\%)** \\ \end{tabular} &
\begin{tabular}{c} **Wall** \\ **(\%)** \\ \end{tabular} \\ \hline GPT-3 end2end & \(0\%\) & \(0\%\) & \(0\%\) \\ GPT-3 naive task planning & \(13.3\%\) & \(0\%\) & \(7\%\) \\ GPT-3/STL & \(25\%\) & \(22\%\) & \(74\%\) \\ GPT-3/STL/Syn. & \(70\%\) & \(35\%\) & \(85\%\) \\
**GPT-3/STL/Syn./Sem. (AutoTAMP)** & **89\%** & **60.7\%** & **89.7\%** \\ \hline GPT-4 end2end & \(5\%\) & \(0\%\) & \(6\%\) \\ GPT-4 naive task planning & \(17\%\) & \(0\%\) & \(47\%\) \\ GPT-4/STL & \(85\%\) & \(46\%\) & \(95\%\) \\ GPT-4/STL/Syn. & \(94\%\) & \(67\%\) & \(95\%\) \\
**GPT-4/STL/Syn./Sem. (AutoTAMP)** & **100\%** & **79\%** & **100\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Task success rates in **multi-agent** scenarios. Each scenario has hard temporal and geometric constraints.
the actions to grab key1, unlock door1, then grab key2. This inefficiency can lead to a violation of time constraints that are part of the task specification, resulting in task failure. In (b), ingredient1 is unnecessarily (and inefficiently) visited by both agents.
**AutoTAMP**: AutoTAMP (and the ablations) primarily fail because of translation errors. While the re-prompting techniques we use to improve translation are effective, there are still translation errors that result in task failure.
### Planning Time
We report timing information for AutoTAMP with GPT-4 in Table 3. For each scenario, we report 10th, 50th, and 90th percentile runtime cost for the following steps in the overall process: GPT-4 API call, STL planner planning time, code execution time, GPT-4 translation to STL, syntactic check loop, semantic check loop. Overall, the STL planner runtime is the main bottleneck. In cases where the GPT-4 API call takes an excessive amount of time (i.e. more than 100s), we note that this is likely due to unstable latency and may not be a good proxy for decoding runtime.
We also report the number of iterations for both the syntactic and semantic re-prompting techniques for each task scenario. Note that semantic re-prompting subsequently requires an additional round of syntactic re-prompting for the new STL output; thus, syntactic re-prompting can have more total iterations than the limit of 5 since this limit is applied per STL output.
Figure 5: Two examples of failures by LLM Task Planning methods.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Time (s)** & **HouseWorld2** & **Chip.** & **Overcooked** & **Rover** & **Wall** \\ \hline Total Time & \(15/42/221\) & \(403/871/1579\) & \(34/77/157\) & \(79/123/174\) & \(21/43/67\) \\ \hline GPT-4 API & \(2/3/55\) & \(8/21/160\) & \(3/5/7\) & \(10/14/18\) & \(4/6/7\) \\ STL Planner & \(10/35/152\) & \(310/732/1482\) & \(30/74/148\) & \(68/89/142\) & \(17/34/49\) \\ Code Processing & \(3/3/3\) & \(3/4/4\) & \(3/3/3\) & \(3/4/4\) & \(3/3/3\) \\ \hline GPT-4/STL & \(5/9/10\) & \(172/457/681\) & \(16/20/45\) & \(15/18/29\) & \(6/9/15\) \\ Syntactic Check & \(1/3/60\) & \(11/23/163\) & \(4/7/12\) & \(5/6/7\) & \(2/3/5\) \\ Semantic Check & \(6/36/157\) & \(141/630/850\) & \(17/58/106\) & \(75/102/134\) & \(14/31/54\) \\ \hline
**Iterations** & **HouseWorld2** & **Chip.** & **Overcooked** & **Rover** & **Wall** \\ \hline Syntactic Check & \(1/2/2\) & \(3/5/7\) & \(1/2/2\) & \(2/3/6\) & \(1/2/3\) \\ Semantic Check & \(1/2/3\) & \(3/3/3\) & \(2/2/3\) & \(2/3/3\) & \(2/2/3\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Timing information for each step of AutoTAMP using GPT-4; Iteration counts for the syntactic and semantic re-prompting. For each, We report the 10th, 50th, and 90th percentile.
### Ablation Studies
**Syntactic and Semantic Checkers** In Tables 1 and 2, we also evaluate the impact of syntactic and semantic error correction on using LLMs to translate to STL. The results show that translation with no error correction has modest success across task scenarios, but both syntactic and semantic error correction significantly improve performance; this trend is present across all scenarios.
**Fine-tuned NL2TL Translation Model** Prior work (NL2TL [19]) has investigated translating from natural language to STL via a modular pipeline that utilizes a fine-tuned LLM. While incorporating NL2TL into our AutoTAMP framework was not part of our primary investigation, we report here its impact on task success in Table 4. The top three rows report AutoTAMP + NL2TL with ablations over syntactic and semantic re-prompting. The bottom three rows report the original ablation results of AutoTAMP without NL2TL. We see that integrating NL2TL generally provides a modest improvement. However, it is exciting that AutoTAMP appears competitive without the use of a fine-tuned model since it does not rely on a targeted dataset or additional offline training.
## 6 Related Work
**LLMs for TAMP** Recent claims about the impressive reasoning capabilities of LLMs [6; 24] have led to interest in such models for task and motion planning. One approach is to directly use LLMs as planners [7; 8; 9; 10; 11; 12; 13; 25]. Initial work showed that naive zero-shot generation of a primitive action sequence from a high-level task description had relatively poor executability, but few-shot in-context learning, constraining output to admissible actions, and autoregressive step-by-step action generation significantly improved performance [7]. Subsequent efforts connected to motion planning by grounding the primitive actions to motion control policies and using affordance functions to guide LLM-based task and motion planning [8; 9]; Huang et al. [10] closed the loop by prompting with feedback, such as action success indication, before querying for the next action. Other work focused on how prompting can inform task execution, such as few-shot summarization for user task preferences [11] or extraction of commonsense arrangements for underspecified object placement [13]. Despite these successes, however, there is some evidence that LLMs exhibit poor performance on more realistic tasks [15; 26], motivating different approaches. While we are interested in LLMs for TAMP, our work does not directly use LLMs as planners.
**Translating Language to Task Representations** A natural alternative is to rely on dedicated planners by mapping from natural language to a planning representation. For over a decade, research has investigated mapping language to lambda calculus [27], motion planning constraints [28], linear temporal logic [29; 30; 31], and signal temporal logic [32; 33], among others. To address challenges of data availability, task generalization, and linguistic complexity, recent work has applied LLMs to this translation problem. Modular approaches have used LLMs to extract referring expressions with corresponding logic propositions to then construct a full temporal logic specification [34; 19]. Relying on LLMs for direct translation, other work has mapped from language to PDDL goals [17] or full PDDL problems [35; 16]. Our work similarly translates to a task specification, but we can
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & **HouseWorld2** & **Chip.** & **Overcooked** & **Rover** & **Wall** \\ \hline GPT-4/STL/NL2TL & \(53\%\) & \(61\%\) & \(87\%\) & \(58\%\) & \(95\%\) \\ GPT-4/STL/Syn./NL2TL & \(64\%\) & \(73\%\) & \(94\%\) & \(70\%\) & \(95\%\) \\
**GPT-4/STL/Syn./Sem./NL2TL & **83.5\%** & **86\%** & **100\%** & **79.7\%** & **100\%** \\
**(AutoTAMP)** & & & & & \\ \hline GPT-4/STL & \(42\%\) & \(42.7\%\) & \(85\%\) & \(46\%\) & \(95\%\) \\ GPT-4/STL/Syn. & \(59\%\) & \(70\%\) & \(94\%\) & \(67\%\) & \(95\%\) \\
**GPT-4/STL/Syn./Sem. & **82\%** & **87.7\%** & **100\%** & **79\%** & **100\%** \\
**(AutoTAMP)** & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies of the AutoTAMP with/without integrating the finetuned NL2TL model.
represent complex constraints (e.g. temporal), and we introduce a novel mechanism for automatic detection and correction of semantic errors.
**Re-prompting of LLMs** The quality of LLM output is greatly improved with useful context, such as few-shot in-context learning for novel tasks [6]. LLMs for TAMP are typically also provided task-relevant information, such as environment state or admissible actions [12]. Re-prompting with additional context based on LLM output has been shown to be extremely beneficial, such as with iterative action generation [7], environmental feedback [10], inadmissible actions [7; 8; 9], unmet action preconditions [36; 35], code execution errors [37], and syntactic errors in structured output [22]. Our work utilizes the same syntactic correction re-prompting technique as Skreta et al. [22], but we further introduce automatic detection and correction of semantic errors via re-prompting.
## 7 Conclusion and Limitations
This paper presented AutoTAMP, a framework for using pre-trained LLMs as translators from language task descriptions to formal task specifications in STL via few-shot learning; a key contribution is that we show that our use of STL significantly improves the performance of the planner, and that improving the translation via automatic correction of syntax errors and a novel autoregressive re-prompting technique for semantic error checking and correction leads to further performance gains. Our experimental results show that using LLMs to translate to task specifications that can be solved via a formal planner outperforms approaches that use LLMs directly as planners when handling tasks with complex geometric and temporal constraints.
**Limitations** 1) Though our results rely on using the best prompt out of several candidates, the prompt used may not be optimal for eliciting the best performance from our LLMs. As a result, the individual results for each method may be improvable, but we expect the trends to persist and to continue to support the conclusion that LLMs are not well suited for direct complex TAMP. 2) Though the task success rates of AutoTAMP are good, the cost of planning time is high, especially with multiple syntactic and semantic re-prompting iterations. Further work is needed that focuses on how to accelerate formal planners and decrease LLM inference latency. 3) Some tasks still fail due to translation errors. As part of future work, we aim to explore whether feedback via a human-in-the-loop framework can both improve translation and reduce AutoTAMP iterations.
#### Acknowledgments
We thank OpenAI for approving the usage of GPT-3/4 APIs.
The authors would also like to thank Charles Dawson, Oswin So, and Yue Meng for the early-stage discussion and final reviews of the work.
This work was supported by ONR under Award N00014-22-1-2478 and MIT-IBM Watson AI Lab. However, this article solely reflects the opinions and conclusions of its authors.
|
2306.16722 | Evaluating Paraphrastic Robustness in Textual Entailment Models | We present PaRTE, a collection of 1,126 pairs of Recognizing Textual
Entailment (RTE) examples to evaluate whether models are robust to
paraphrasing. We posit that if RTE models understand language, their
predictions should be consistent across inputs that share the same meaning. We
use the evaluation set to determine if RTE models' predictions change when
examples are paraphrased. In our experiments, contemporary models change their
predictions on 8-16\% of paraphrased examples, indicating that there is still
room for improvement. | Dhruv Verma, Yash Kumar Lal, Shreyashee Sinha, Benjamin Van Durme, Adam Poliak | 2023-06-29T06:48:22Z | http://arxiv.org/abs/2306.16722v1 | # Evaluating Paraphrastic Robustness in Textual Entailment Models
###### Abstract
We present \(\hat{P}aRTE\), a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine if RTE models' predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8-16% of paraphrased examples, indicating that there is still room for improvement.
## 1 Introduction
Recognizing Textual Entailment (RTE), the task of predicting whether one sentence (_hypothesis_) would likely be implied by another (_premise_), is central to natural language understanding (NLU; Dagan et al., 2005), as this task captures "all manners of linguistic phenomena and broad variability of semantic expression" MacCartney (2009). If an RTE model has a sufficiently high _capacity for reliable, robust inference necessary for full NLU_ MacCartney (2009), then the model's predictions should be consistent across paraphrased examples.
We introduce \(\hat{P}aRTE\), a test set to evaluate how _reliable_ and _robust_ models are to paraphrases (Table 1 includes an example). The test set consists of examples from the Pascal RTE1-3 challenges Dagan et al. (2006); Bar-Haim et al. (2006); Giampiccolo et al. (2007) rewritten with a lexical rewriter and manually verified to preserve the meaning and label of the original RTE sentence-pair. We use this evaluation set to determine whether models change their predictions when examples are paraphrased.
While this may not be a sufficient test to determine whether RTE models _fully understand_ language, as there are many semantic phenomena that RTE models should capture Cooper et al. (1996); Naik et al. (2018), it is _necessary_ that any NLU system be robust to paraphrases.
Our experiments indicate that contemporary models are robust to paraphrases as their predictions do not change on the overwhelming majority of examples that are paraphrased. However, our analyses temper this claim as models are more likely to change their predictions when both the premise and hypothesis are paraphrased compared to when just one of the sentences is rewritten. We release \(\hat{P}aRTE\)1 to encourage others to evaluate how well their models perform when RTE examples are paraphrased.
Footnote 1: [https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HML23](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HML23)
## 2 Related Work
With the vast adoption of human language technology (HLT), systems must understand when different expressions convey the same meaning (paraphrase) and support the same inferences (entailment). Paraphrasing and entailment are closely connected as the former is a special case of the latter where two sentences entail each other Nevervilova (2014); Fonseca and Aluisio (2015); Vita (2015); Ravichander et al. (2022).
| | |
| --- | --- |
| **P** | The cost of security when world leaders gather near Auchterrader for next year's G8 summit, is expected to top $150 million. |
| **P'** | The cost of security when world leaders meet for the G8 summit near Auchterrader next year will top $150 million. |
| **H** | More than $150 million will be probably spent for security at next year's G8 summit. |
| **H'** | At the G8 summit next year more than $150 million will likely be spent on security at the event. |

Table 1: An original and paraphrased RTE example. The top represents an original premise (P) and its paraphrase (P'). The bottom depicts an original hypothesis (H) and its paraphrase (H'). A model robust to paraphrases should have consistent predictions across the following pairs: P-H, P'-H, P-H', and P'-H'.
Paraphrasing has been used to improve RTE predictions (Bosma and Callison-Burch, 2006; Sun et al., 2021) and RTE has been used for paraphrase identification (Seethamol and Manju, 2017) and generation (Arora et al., 2022). Furthermore, both phenomena are key to NLU (Androutsopoulos and Malakasiotis, 2010), and works such as Zhao et al. (2018) and Hu et al. (2019) have explored rewriting RTE examples to create more robust models.
We follow a long tradition of evaluating linguistic phenomena captured in RTE models (Cooper et al., 1996). Recent tests focus on evaluating how well contemporary RTE models capture phenomena such as monotonicity (Yanaka et al., 2019, 2019), verb veridicality (Ross and Pavlick, 2019; Yanaka et al., 2021), presuppositions (Parrish et al., 2021) implicatures (Jeretic et al., 2020), basic logic (Richardson et al., 2020; Shi et al., 2021), figurative language (Chakrabarty et al., 2021), and others (Naik et al., 2018; Poliak et al., 2018; Vashishtha et al., 2020). Unlike many of those works that evaluate models' accuracy on examples that target specific phenomena, we use a contrastive approach (Prabhakaran et al., 2019; Gardner et al., 2020) to determine whether RTE models' predictions change when examples are paraphrased.
## 3 \(\hat{P}aRTE\)
To explore whether these RTE models are robust to paraphrases, we create \(\hat{P}aRTE\), a modified version of the Pascal RTE1-3 challenges (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007). \(\hat{P}aRTE\) contains 1,126 examples, each grouping an original unmodified RTE sentence-pair with a sentence-pair in which the premise, the hypothesis, or both have been paraphrased. We use the examples in RTE1-3 to create our test set, as opposed to other RTE datasets, due to its long-standing history.
### Paraphrase Generation & Verification
For each RTE premise-hypothesis pair (P-H), we created three paraphrased premises (P') and hypotheses (H') using a T5-based paraphraser2 fine-tuned on the Google PAWS dataset (Zhang et al., 2019). To ensure lexically diverse paraphrases, we filter out any paraphrases that have high lexical overlap with the original sentences, using a Jaccard index threshold of 0.75. Out of 14,400 generated sentences, 2,449 remained: 956 paraphrased premises (P') and 1,493 paraphrased hypotheses (H'). Next, we retained the 550 paraphrased premises and 800 paraphrased hypotheses that crowdsource workers identified as grammatical and similar in meaning to the original sentences.3 We include a grammatical check since an existing RTE evaluation set focused on paraphrases (White et al., 2017) contains hypothesis-only biases related to grammaticality (Poliak et al., 2018).
Footnote 2: We manually verified the quality of this paraphraser. See Appendix B for more details.
If at least one P' or one H' passes this filtering process, we retain the original RTE example and pair it with a corresponding paraphrased example (i.e. P'-H', P'-H, or P-H'). In the case where more than one P' or H' passes the filtering, we retained the P' or H' that crowdsource workers deemed most similar to the original sentence. Out of the original 2,400 RTE test pairs, we retain 914 pairs with a high-quality P' or H', resulting in 1,178 original and paraphrased RTE pairs.4
Footnote 3: See Appendix B for a detailed description of this filtering process, including annotation guidelines.
Footnote 4: 415 pairs where the premise is paraphrased, 631 pairs where the hypothesis is paraphrased, and 132 pairs where both are paraphrased.
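To make the lexical-overlap filter described above concrete, here is a minimal Python sketch (this is not the authors' released code; whitespace tokenization, lowercasing, and the reading that a Jaccard overlap of at least 0.75 disqualifies a candidate are assumptions made for illustration):

```python
from typing import List

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def keep_lexically_diverse(original: str, candidates: List[str],
                           threshold: float = 0.75) -> List[str]:
    """Discard generated paraphrases that overlap too heavily with the original."""
    return [c for c in candidates if jaccard(original, c) < threshold]

# Example: the second candidate copies the original almost verbatim and is dropped.
kept = keep_lexically_diverse(
    "the cost of security is expected to top 150 million",
    ["security spending will likely exceed 150 million",
     "the cost of security is expected to top 150 million overall"],
)
```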
### Overcoming Semantic Variability
MacCartney (2009) argues that in addition to being _reliable_ and _robust_, RTE models must deal with the _broad variability of semantic expression_. In other words, though two sentences may be semantically congruent, it is possible that small variations in a paraphrased sentence contain enough semantic variability to change what would likely, or not likely be inferred from the sentence. Despite all P' and H' being deemed to be semantically congruent with their corresponding original sentences, the semantic variability of paraphrases might change whether H or H' can be inferred from P' or P.
Therefore, propagating an RTE label from an original sentence pair to a modified sentence pair might be inappropriate. We manually determined that this issue occurs in just 52 (4%) examples, and retained 1,126 examples. This ensures an evaluation set of high-quality examples that can be used to determine whether models are sensitive to paraphrases and change their prediction on paraphrased examples. Our dataset contains 402 examples with just a paraphrased premise P', 602 with just a paraphrased hypothesis H', and 122 with both a paraphrased premise and hypothesis.
## 4 Experimental Setup
We explore models built upon three different classes of sentence encoders: bag of words (BoW), LSTMs, and Transformers. Our BoW model represents premises and hypotheses as an average of their tokens' 300 dimensional GloVe embeddings [2]. The concatenation of these representations is fed to an MLP with two hidden layers. For the BiLSTM model, we represent tokens with GloVe embeddings, extract sentence representations using max-pooling, and pass concatenated sentence representations to an MLP with two hidden layers.
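As a rough illustration of the BoW baseline just described, the following PyTorch-style sketch averages pre-trained GloVe vectors for each sentence and feeds the concatenation to a two-hidden-layer MLP (this is not the authors' implementation; the hidden width, the ReLU activations, and the padding convention are assumptions made for illustration):

```python
import torch
import torch.nn as nn

class BoWEntailmentClassifier(nn.Module):
    def __init__(self, glove_vectors: torch.Tensor, hidden: int = 512, num_labels: int = 3):
        super().__init__()
        # glove_vectors: a (vocab_size, 300) matrix of pre-trained GloVe embeddings, assumed given.
        self.embed = nn.Embedding.from_pretrained(glove_vectors, freeze=False, padding_idx=0)
        dim = glove_vectors.size(1)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels),  # MNLI labels: entailment / neutral / contradiction
        )

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Average the word vectors of a sentence, ignoring padding positions (index 0).
        vecs = self.embed(token_ids)                    # (batch, seq_len, 300)
        mask = (token_ids != 0).unsqueeze(-1).float()   # (batch, seq_len, 1)
        return (vecs * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

    def forward(self, premise_ids: torch.Tensor, hypothesis_ids: torch.Tensor) -> torch.Tensor:
        p, h = self.encode(premise_ids), self.encode(hypothesis_ids)
        return self.mlp(torch.cat([p, h], dim=-1))      # logits over the MNLI labels
```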
Our transformer-based models are pre-trained BERT [13] and Roberta [14] encoders with an MLP attached to the final layer. Additionally, we use GPT-3 in a zero-shot setting where we ask it to label the relationship between a premise and hypothesis.5
Footnote 5: See Appendix A for more details, including hyperparameters, model sizes, and GPT-3 prompt design and configurations. Our code is available at [https://github.com/stonybrook1p/parte](https://github.com/stonybrook1p/parte)
The RTE training sets do not contain enough examples to train deep learning models with a large number of parameters. We follow the common practice of training models on MNLI and using our test set to evaluate how well they capture a specific phenomenon related to NLU. During testing, we map the MNLI 'contradiction' and 'neutral' labels to the 'not-entailed' label in RTE, following common practice [21, 22, 13, 14].
## 5 Results
Table 2 reports the results. The RTE and \(\hat{P}aRTE\) columns respectively report the models' accuracy on the 1,126 unmodified and paraphrased sentence pairs.6 Comparing the difference in accuracy between unmodified and paraphrased examples can be misleading. If the number of times a model changes a correct prediction is close to the number of times it changes an incorrect prediction, then the accuracy will hardly change. Figure 1 demonstrates why the accuracies do not change by much when models' predictions change on paraphrased examples. Furthermore, if a model is robust to paraphrases, then it should not change its predictions when an example is paraphrased, even if the prediction on the original unmodified example was incorrect. Hence, our test statistic is the percentage of examples where a model's predictions change (% \(\Delta\ \hat{P}aRTE\) column in Table 2) rather than a change in accuracy.
Footnote 6: Although there are just 914 unmodified sentence pairs, for the sake of a head-to-head comparison, we retain all instances of the unmodified sentence pairs when computing accuracy.
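For concreteness, the test statistic can be computed as follows (a sketch assuming predictions are given as parallel lists of MNLI label strings for the original and the paraphrased version of each pair; this is not the released evaluation script):

```python
from typing import List

def collapse_mnli_label(label: str) -> str:
    # MNLI's 'contradiction' and 'neutral' are both mapped to RTE's 'not-entailed'.
    return "entailed" if label == "entailment" else "not-entailed"

def percent_changed(preds_original: List[str], preds_paraphrased: List[str]) -> float:
    """Percentage of paired examples whose (collapsed) prediction flips after paraphrasing."""
    assert len(preds_original) == len(preds_paraphrased)
    changed = sum(
        collapse_mnli_label(a) != collapse_mnli_label(b)
        for a, b in zip(preds_original, preds_paraphrased)
    )
    return 100.0 * changed / len(preds_original)
```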
Compared to the Transformer-based models, the BoW and BiLSTM models seem to be more sensitive, and less robust to paraphrasing, as they change their predictions on 15.27% and 16.69%, respectively, of the 1,126 examples. However, this might be associated with the fact that these word-embedding models only just outperform random guessing and perform much worse on RTE than the Transformer models.
| Model | MNLI | RTE | \(\hat{P}aRTE\) | % \(\Delta\ \hat{P}aRTE\) |
| --- | --- | --- | --- | --- |
| BoW | 67.97 | 53.99 | 54.70 | 15.27 |
| BiLSTM | 66.68 | 51.59 | 51.24 | 16.69 |
| BERT | 90.04 | 72.11 | 72.55 | 9.50 |
| RoBERTa | 92.68 | 83.83 | 82.59 | 7.99 |
| GPT-3 | - | 80.90 | 79.12 | 10.12 |

Table 2: Each row represents a model. The columns MNLI, RTE, \(\hat{P}aRTE\) report the model's accuracy on those test sets. The last column (% \(\Delta\ \hat{P}aRTE\)) reports the percentage of examples where the model changed its prediction.
Figure 1: Number of times a model changes its predictions from correct to incorrect (left blue bar) or incorrect to correct (right red bar).
Focusing on the Transformer models, we noticed that RoBERTa performs best on the datasets and is the most robust to paraphrasing, changing its predictions on just under 8% of paraphrased examples. Interestingly, among the models trained specifically for this task, those with higher accuracy change their predictions on fewer paraphrased examples. However, improving performance alone might not automatically improve models' robustness to paraphrases: GPT-3 noticeably outperforms BERT in accuracy, but GPT-3 changes its predictions on more paraphrased examples than BERT.
**P'-H' compared to P-H' or P'-H.** Figure 2 shows noticeable increases in the percentage of changed predictions when both the premise and the hypothesis are paraphrased compared to when just one of the sentences is paraphrased. Specifically, for BoW and BiLSTM we see increases of 4.01 and 6.01 percentage points respectively, and for BERT, RoBERTa, and GPT-3 increases of 4.97, 4.83, and 3.55. As the Transformer-based models changed their predictions on 12-14% of examples where both sentences are paraphrased, compared to 9-11% in general, this analysis further suggests that these models are not as robust to paraphrases as desired.
**Entailed vs Not-entailed examples.** RTE analyses often differentiate how models perform on entailed vs. not-entailed examples Liu et al. (2022). In Figure 3, we do not see meaningful differences in how models' predictions change on paraphrased examples based on the gold label. This might suggest that our dataset does not contain statistical irregularities based on the RTE labels.
**Correct vs Not-Correct Predictions.** Figure 4 shows that the Transformer models' predictions are more likely to change when the prediction on an original example was incorrect (right red bars) compared to when the prediction for an original example was correct (left blue bars). For example, when RoBERTa's prediction for an original RTE example was correct, the model changed its prediction on just 5.5% of the corresponding paraphrased examples. When RoBERTa's predictions for an original RTE example were incorrect, its predictions changed for 20.88% of the corresponding paraphrased examples. Analyzing differences in the confidences models assign to their predictions might provide more insight Marce and Poliak (2022). We leave this for future work.
**Source Task.** RTE1-3 examples originated from multiple domains and downstream tasks, e.g. question-answering Moldovan et al. (2006), information extraction Grishman and Sundheim (1996), and summarization Evans et al. (2004); Radev et al. (2001). This enables researchers to evaluate how
Figure 4: Percentage of examples where models’ predictions change when the original prediction was correct (left blue bar) or incorrect (right red bar).
Figure 3: Percentage of examples where the models’ predictions changed when the gold label is entailed (blue left bar) or not-entailed (right red bar).
Figure 2: Percentage of examples with one paraphrased sentence (left blue bar) or two paraphrased sentences (right red bar) where models’ predictions change.
RTE models perform on examples that contain different aspects of _open domain inference_ necessary for the task MacCartney (2009). Figure 5 reports the changes in models' predictions across the different sources of examples. We do not see consistent trends across the original data sources.
## 6 Conclusion
We introduced \(\hat{P}aRTE\), a high-quality evaluation set of RTE examples paired with paraphrased RTE examples. We use our evaluation set to determine whether RTE models are robust to paraphrased examples. Our experiments indicate that while these models' predictions are usually consistent when RTE examples are paraphrased, there is still room for improvement as models remain sensitive to changes in input Jia and Liang (2017); Belinkov and Bisk (2018); Iyyer et al. (2018). We hope that researchers will use \(\hat{P}aRTE\) to evaluate how well their NLU systems perform on paraphrased data.
## Limitations
Neither our results nor our evaluation set can be used to indicate whether RTE models trained for other languages are robust to paraphrases. However, researchers can apply the methods we used to develop \(\hat{P}aRTE\) to build evaluation sets in other languages to test whether non-English NLU systems are robust to paraphrases.
## Ethics Statement
In conducting our research on RTE model robustness to paraphrasing, we take great care to ensure the ethical and responsible use of any data and models involved. We adhere to the principles of fairness, transparency, and non-discrimination in our experimentation and analysis. Furthermore, we take measures to protect the privacy and confidentiality of any individual crowdsource workers. We also strive to make our evaluation set and methods openly available to the research community to promote further study and advancement in the field of Natural Language Processing.
|
2305.14465 | Arithmetic Fundamental Lemma for the spherical Hecke algebra | We define Hecke correspondences and Hecke operators on unitary RZ spaces and
study their basic geometric properties, including a commutativity conjecture on
Hecke operators. Then we formulate the Arithmetic Fundamental Lemma conjecture
for the spherical Hecke algebra. We also formulate a conjecture on the
abundance of spherical Hecke functions with identically vanishing first
derivative of orbital integrals. We prove these conjectures for the case
$U(1)\times U(2)$. | Chao Li, Michael Rapoport, Wei Zhang | 2023-05-23T18:39:56Z | http://arxiv.org/abs/2305.14465v2 | # Arithmetic fundamental lemma for the spherical Hecke algebra
###### Abstract.
We define Hecke correspondences and Hecke operators on unitary RZ spaces and study their basic geometric properties, including a commutativity conjecture on Hecke operators. Then we formulate the Arithmetic Fundamental Lemma conjecture for the spherical Hecke algebra. We also formulate a conjecture on the abundance of spherical Hecke functions with identically vanishing first derivative of orbital integrals. We prove these conjectures for the case \(\operatorname{U}(1)\times\operatorname{U}(2)\).
###### Contents
* 1 Introduction
* 2 Notations
* 3 FL for the full spherical Hecke algebra
* 4 An alternative basis of \(\mathcal{H}_{K}\)
* 5 RZ spaces and their Hecke correspondences
* 6 AFL Conjectures for the Hecke algebra
* 7 The case \(n=1\)
* 8 Base-point freeness
* 9 Appendix: Correspondences for formal schemes and maps on K-groups
## 1. Introduction
In the relative trace formula approach of Jacquet and Rallis to the Gan-Gross-Prasad conjecture, the Jacquet-Rallis fundamental lemma (FL) conjecture plays a key role [15]. It states an identity of the following form. Let \(p\) be an odd prime number. Let \(F_{0}\) be a finite extension of \(\mathbb{Q}_{p}\) and let \(F/F_{0}\) be an unramified quadratic extension. Let \(W_{0}\) be a split \(F/F_{0}\)-hermitian space of dimension \(n+1\) and let \(W_{0}^{\flat}\) be the perp-space of a vector \(u_{0}\in W_{0}\) of unit length. Then the following identity holds for all _matching regular semi-simple_ elements \(\gamma\in\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n+1}(F)\) and \(g\in\operatorname{U}(W_{0}^{\flat})(F_{0})\times\operatorname{U}(W_{0})(F_{0})\),
\[\operatorname{Orb}(g,\mathbf{1}_{K^{\flat}\times K})=\omega(\gamma)\operatorname{Orb}(\gamma,\mathbf{1}_{K^{\prime\flat}\times K^{\prime}}). \tag{1.0.1}\]
Here on the RHS, there appears the weighted orbital integral of the characteristic function of the natural hyperspecial compact subgroup \(K^{\prime\flat}\times K^{\prime}\) of \(\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n+1}(F)\); on the LHS there appears the orbital integral of the characteristic function of the natural hyperspecial compact subgroup \(K^{\flat}\times K\) of \(\operatorname{U}(W_{0}^{\flat})(F_{0})\times\operatorname{U}(W_{0})(F_{0})\). The first factor on the RHS is the natural transfer factor, cf. [29]; both sides only depend on the orbits of \(\gamma\), resp. \(g\), under natural group actions.
The FL conjecture was proved for \(F\) with large residue characteristic by Yun and Gordon [35], and is now proved completely: the proof by R. Beuzart-Plessis [2] is local; the proof in [40] (for \(p\geq n+1\)) is global.
In fact, an identity of this form is true for the whole spherical Hecke algebra, and is due to S. Leslie [19]. Let \(\varphi^{\prime}\in\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\) be an arbitrary element in the spherical Hecke algebra for \(\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n+1}(F)\). Then the following identity holds for all matching regular semi-simple elements \(\gamma\in\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n+1}(F)\) and \(g\in\operatorname{U}(W_{0}^{\flat})(F_{0})\times\operatorname{U}(W_{0})(F_{ 0})\),
\[\operatorname{Orb}(g,\varphi)=\omega(\gamma)\operatorname{Orb}(\gamma, \varphi^{\prime}). \tag{1.0.2}\]
Here on the LHS appears the orbital integral of the image \(\varphi\) of \(\varphi^{\prime}\) under the _base change homomorphism_ from the spherical Hecke algebra of \(\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n+1}(F)\) to the spherical Hecke algebra of \(\operatorname{U}(W_{0}^{\flat})(F_{0})\times\operatorname{U}(W_{0})(F_{0})\). The second factor on the RHS is the weighted orbital integral of \(\varphi^{\prime}\). The method of proof of [19] is ultimately global.
The third author proposed a relative trace formula approach to the _arithmetic_ Gan-Gross-Prasad conjecture. In this context, he formulated the arithmetic fundamental lemma (AFL) conjecture [38]. The AFL relates the special value of the derivative of an orbital integral to an arithmetic intersection number on a Rapoport-Zink formal moduli space (RZ space) of \(p\)-divisible groups attached to a unitary group. The AFL conjecture is an identity of the following form. Let \(W_{1}\) be a non-split \(F/F_{0}\)-hermitian space of dimension \(n+1\) and let \(W_{1}^{\flat}\) be the perp-space of a vector \(u_{1}\in W_{1}\) of unit length. Then the following identity holds for all matching regular semi-simple elements \(\gamma\in\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n+1}(F)\) and \(g\in\operatorname{U}(W_{1}^{\flat})(F_{0})\times\operatorname{U}(W_{1})(F_{0})\),
\[2\langle g\Delta,\Delta\rangle_{\mathcal{N}_{n,n+1}}\cdot\log q=-\omega( \gamma)\,\partial\operatorname{Orb}(\gamma,\mathbf{1}). \tag{1.0.3}\]
Here the second factor on the RHS is the special value of the derivative of the weighted orbital integral of the unit element in the spherical Hecke algebra \(\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\). On the LHS appears the intersection number of the diagonal cycle \(\Delta\) of the product RZ-space \(\mathcal{N}_{n,n+1}=\mathcal{N}_{n}\times\mathcal{N}_{n+1}\) with its translate under the automorphism of \(\mathcal{N}_{n,n+1}\) induced by \(g\). Here, for any \(n\), \(\mathcal{N}_{n}\) is the moduli space of framed _basic_ principally polarized \(p\)-divisible groups with action of \(O_{F}\) of signature \((1,n-1)\).
The AFL conjecture is now known to hold for any odd prime \(p\), cf. W. Zhang [40], Mihatsch-Zhang [24], Z. Zhang [41]. These proofs are global in nature. Local proofs of the AFL are known for \(n=1,2\) (W. Zhang [38]), and for minuscule elements (He-Li-Zhu [14]).
The aim of the present paper is to propose a variant of the AFL conjecture in the spirit of Leslie's result on the FL, where the unit element in the spherical Hecke algebra is replaced by an arbitrary element \(\varphi^{\prime}\in\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\). The proposed formula takes the following form,
\[2\langle g\Delta,\mathbb{T}_{\varphi}(\Delta)\rangle_{\mathcal{N}_{n,n+1}}\cdot \log q=-\omega(\gamma)\,\partial\mathrm{Orb}(\gamma,\varphi^{\prime}). \tag{1.0.4}\]
The new feature compared to the AFL conjecture (1.0.3) for the unit element is the appearance of the Hecke operator \(\mathbb{T}_{\varphi}\) on the LHS, and the definition of such Hecke operators is one of the main issues of the present paper, see below.
The AFL conjecture comes, as usual, in a homogeneous version (as stated above) and an inhomogeneous version. However, in contrast to the case of the unit element, these two versions are not equivalent: the homogeneous version implies the inhomogeneous version but not conversely. It is conceivable that the inhomogeneous version is easier to prove in some cases.
As evidence for this conjecture, we prove it in the case \(n=1\) (in this case, the homogeneous version and the inhomogeneous version are easily seen to be equivalent).
**Theorem 1.0.1**.: _The AFL formula (1.0.4) holds for \(n=1\)._
The proof is local, by explicit calculation of both sides of the formula and resembles the proof of the AFL in cases of low rank in [38]. On the geometric side we exploit the fact that, in the particular case \(n=1\), the Hecke operators are induced by explicit geometric correspondences which are finite and flat. Another ingredient is the theory of quasi-canonical divisors on \(\mathcal{N}_{2}\) in the sense of [17]. On the analytic side, we also give a purely local proof of the FL for the whole Hecke algebra (Leslie's theorem) in this case.
Our definition of Hecke operators (in K-theory) is based on the fact that there is a presentation of the spherical Hecke algebra of the unitary group as a polynomial algebra. The basis elements of this presentation can be chosen to be decomposed into a product (i.e., convolution) of _intertwining Hecke functions_. Here an intertwining Hecke function in the Iwahori Hecke algebra for a fixed Iwahori subgroup is a function of the form \(\mathbf{1}_{KK^{\prime}}\) for parahoric subgroups \(K,K^{\prime}\) stabilizing a facet in the alcove in the Bruhat-Tits building corresponding to the Iwahori subgroup1. For these elements it is possible to define integral models of Hecke correspondences. Indeed, we can naturally define a geometric correspondence between two RZ spaces for parahorics \(K,K^{\prime}\) as above, a diagram of RZ spaces at parahoric levels \(K\), \(K^{\prime}\), \(K\cap K^{\prime}\),
Footnote 1: The terminology “intertwining Hecke function” is borrowed from [22, Definition B.2.3], where a special case is considered.
\[\mathcal{N}_{K}\xleftarrow{\ \pi_{1}\ }\mathcal{N}_{K\cap K^{\prime}}\xrightarrow{\ \pi_{2}\ }\mathcal{N}_{K^{\prime}}. \tag{1.0.5}\]
(1.0.6)
In fact, for our purposes, it suffices to consider intertwining Hecke correspondences of the form \({\bf 1}_{KK^{\prime}}\) and \({\bf 1}_{K^{\prime}K}\), where \(K^{\prime}\) is a maximal parahoric and \(K\) is the hyperspecial vertex in the fixed alcove defining the spherical Hecke algebra. Hecke operators for general elements in the spherical Hecke algebra, in the sense of maps on K-groups, are then defined in three steps. The basis elements, also called _atomic elements_, can be written in the form \(\phi_{K}={\bf 1}_{KK^{\prime}}*{\bf 1}_{K^{\prime}K}\), where \(K^{\prime}\) is a maximal parahoric subgroup corresponding to a vertex in the fixed alcove, and this defines the corresponding Hecke operator. For a monomial in the basis elements, the corresponding Hecke operator is defined as a product of basic Hecke operators. The general case is obtained by linear combinations. However, at this point arises a highly non-trivial problem: the definition of monomial Hecke operators presupposes that the Hecke operators corresponding to different \(\phi_{K}\) commute. We conjecture that this is indeed true but at the moment our formulation of the AFL formula (1.0.4) is contingent on the solution of this conjecture. More precisely, without this conjecture, the formulation of the AFL conjecture becomes somewhat awkward (cf. Remark 6.1.5), unless \(\varphi\) is a power of a minuscule element (but even this instance of the AFL formula may be interesting to prove).
The idea of defining Hecke operators as linear combinations of products of certain distinguished Hecke operators also appears in the work of Li-Mihatsch on the linear ATC [21]. In their case, the projection maps \(\pi_{1}\) and \(\pi_{2}\) are finite and flat, and the same is true for compositions of distinguished Hecke correspondences. This implies that their Hecke operators are induced by explicit geometric Hecke correspondences. This also allows them to pass to the generic fiber to prove the necessary commutativity statement in their context.
In our case the projection maps \(\pi_{1}\) and \(\pi_{2}\) are usually not flat and the composition of such correspondences, in the sense of maps on K-groups, is not induced by the composition of geometric correspondences. This discrepancy between compositions of geometric correspondences and K-group correspondences would disappear if instead of usual (_classical_) formal schemes we had used _derived_ formal schemes in the definition of geometric correspondences. In this sense, our definition of Hecke operators is a "shadow" of a more sophisticated definition (which remains to be developed). By avoiding derived schemes, we forgo the possibility of defining our Hecke operators in terms of geometric Hecke correspondences2. However, it is unclear to us whether such a more sophisticated definition can be helpful in resolving the commutativity conjecture mentioned above. Relatedly, it seems that the more sophisticated definition of Hecke correspondences transfers to the global context of integral models of Shimura varieties for \(\operatorname{GU}(1,n-1)\) but the relation to the classical Hecke correspondences in the generic fiber is unclear.
Footnote 2: It may be possible, at least as far as defining and calculating intersection multiplicities is concerned, to replace derived formal schemes by their underlying classical formal scheme, equipped with a suitable element in the derived category of coherent sheaves.
Let us compare our construction of Hecke operators with variants in the literature; indeed, the construction of integral Hecke correspondences and their induced Hecke operators on cohomology, resp. cycle groups, resp. K-groups is a well-known problem in various contexts. An example, in the context of integral models of Shimura varieties, occurs in the proof of the Eichler-Shimura congruence relation, comp. Faltings-Chai [8] and Bultel-Wedhorn [4], Koskivirta [16], Lee [18], Wedhorn [33]. Specifically, in the case of the Siegel moduli space with hyperspecial level at \(p\), one considers simply the space of all isogenies of \(p\)-power degree and then isolates inside it a subspace that can be analyzed for the purpose at hand (note that the space of all isogenies of \(p\)-power degree is an unwieldy object that is hard to control). Let us also mention the recent paper by Fakhrudin-Pilloni [7], in which they aim to define Hecke operators for automorphic vector bundles on \(p\)-integral models of Shimura varieties with hyperspecial level at \(p\). Translated to our language of RZ spaces, they consider correspondences given by diagrams (1.0.5), where \(K\) and \(K^{\prime}\) are hyperspecial and conjugate under an auxiliary group (in their case, a group of unitary or symplectic similitudes). It should be pointed out that such diagrams exist only rarely: in the case of the symplectic group, there is precisely one such diagram (\(K\) is the stabilizer of a selfdual lattice and \(K^{\prime}\) is the stabilizer of a lattice selfdual up to a scalar), and similarly in the case of unitary groups considered here in the even rank case when \(K\) is the stabilizer of a selfdual lattice and \(K^{\prime}\) is the stabilizer of a lattice selfdual up to a scalar; in the case of the general linear group, all pairs \(K,K^{\prime}\) of hyperspecial subgroups corresponding to vertices in a fixed alcove give such diagrams. On the other hand, in [26, SS7] Pilloni defines more general automorphic vector bundle Hecke operators for \(\operatorname{GSp}_{4}\) in a way somewhat similar to ours, via intertwining Hecke operators. It is interesting to note that in the context of [7], there is also a commutativity conjecture of Hecke operators [7, Rem. 7.6]; however, there seems to be no direct relation to our conjecture above (but maybe a solution to one of the problems can give indications for a solution to the other problem). We also note that a function field analog has been considered by Yun and the third author (cf. [36, Prop. 5.10] and [37, Prop. 3.14]), where they consider the moduli space of \(\operatorname{GL}_{2}\)-shtukas (with an arbitrary number of legs) and construct Hecke correspondences for a natural basis (as a vector space) of the spherical Hecke algebra. They show a commutativity statement using crucially an equidimensionality result (cf. [36, Lem. 5.9] and [37, Lem. 3.13]), which is in turn proved via constructions closely related to the Geometric Satake isomorphism. Another attempt at defining integral Hecke correspondences occurs for RZ spaces in [30, Chap. 4]. That definition suffers from several drawbacks, the most serious being that the projection morphisms may not be proper and not surjective, cf. [30, remark after Prop. 4.44].
Why is it of interest to extend the AFL conjecture from the unit element to all elements in the spherical Hecke algebra? The reason that in the proof of the global Gan-Gross-Prasad conjecture (e.g., [39, 3]) one only considers the FL for the unit element is the density theorem of Ramakrishnan [27]. It allows one to avoid the Jacquet-Rallis fundamental lemma for the
full spherical Hecke algebra at inert places. However, such a density result is not available for the orthogonal group, in which case we need necessarily to consider the full Hecke algebra. It should be pointed out, however, that at present we do not have a formulation of an FL conjecture or an AFL conjecture in the case of the orthogonal group. Another motivation comes from the consideration of the \(p\)-adic height pairing of arithmetic diagonal cycles, as in on-going work of Disegni and the third author [6]. Here it is necessary to consider all Hecke correspondences at \(p\)-adic places. This is one of the reasons, why in [6] it is assumed that all \(p\)-adic places are split in the quadratic extension of global fields \(F/F_{0}\). When there are inert \(p\)-adic places, it will be necessary to consider Hecke correspondences at inert places and the situation of the present paper becomes relevant. One may even need to consider the more complicated case of the Iwahori level Hecke algebra. In fact, in Disegni's work on the \(p\)-adic Gross-Zagier formula for Shimura curves in the inert case [5], a crucial ingredient are Hecke correspondences for arbitrarily deep level.
One spin-off of the consideration of the AFL conjecture for the spherical Hecke algebra is that it naturally leads to the following question, also partly motivated by Disegni's work. Namely, one may ask whether a function in the spherical Hecke algebra is determined by its first derivatives of orbital integrals over regular semi-simple elements. To put this into context, it should be pointed out that a function in the spherical Hecke algebra is determined by its orbital integrals over regular semi-simple elements, cf. Proposition 8.1.1. Experimental evidence points to the fact that these two questions have quite distinct answers. Indeed, we conjecture that there is an abundance of functions with vanishing first derivatives of orbital integrals, in the following precise form.
Let \(G^{\prime}_{\mathrm{rs},W_{1}}\) denote the open subset of \(\mathrm{GL}_{n}(F)\times\mathrm{GL}_{n+1}(F)\) consisting of regular semisimple elements matching with elements in the non-quasi-split unitary group \(\mathrm{U}(W_{1}^{\flat})(F_{0})\times\mathrm{U}(W_{1})(F_{0})\).
**Conjecture 1.0.2**.: _The map_
\[\partial\mathrm{Orb}:\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\longrightarrow C ^{\infty}(G^{\prime}_{\mathrm{rs},W_{1}})\]
_has a large kernel, in the sense that the kernel generates the whole ring \(\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\) as an ideal (note that this kernel is only a vector subspace rather than an ideal). Similarly, the map defined by the intersection numbers, \(\mathrm{Int}:\mathcal{H}_{K^{\flat}\times K}\to C^{\infty}(G^{\prime}_{\mathrm{rs},W_{1}})\), has a large kernel._
The conjecture is somewhat speculative and we give several weaker variants of it. We confirm this conjecture in the case \(n=1\).
**Theorem 1.0.3**.: _Conjecture 1.0.2 holds when \(n=1\)._
However, even without spin-offs, the arithmetic fundamental lemma for the entire spherical Hecke algebra is an interesting problem of its own, which may turn out to be quite difficult. Its solution might yield additional insight into the nature of Rapoport-Zink spaces and their special cycles. It might also be a good testing ground for applying derived algebraic
geometry in an unequal characteristic situation, after its success in the equal characteristic counterpart, comp. [9]. It would also be interesting to consider Hecke correspondences for other RZ spaces.
Since the arithmetic fundamental lemma for the entire spherical Hecke algebra seems so difficult, it may be instructive to prove it in special cases. We already mentioned its inhomogeneous version. Another possible simplification may occur for Hecke functions of the form \(\varphi^{\prime}={\bf 1}_{K^{\prime\flat}}\otimes f^{\prime}\), where \(f^{\prime}\in{\cal H}_{K^{\prime}}\). Yet another simplification may occur for atomic elements.
We note that our procedure is based on integral models of Hecke correspondences for certain elements that are not contained in the spherical Hecke algebra (the function \({\bf 1}_{KK^{\prime}}\) is usually not spherical). Now \({\bf 1}_{KK^{\prime}}\) is contained in the Iwahori Hecke algebra (corresponding to a fixed chamber containing the facets corresponding to \(K\) and \(K^{\prime}\)), and it would be interesting to see how large a subalgebra they generate, in order to define the LHS of (1.0.4) for a wider class of functions \(\varphi\). On the other hand, at this moment we do not know natural candidates of smooth transfers (in the sense of Jacquet-Rallis) for functions not in the spherical Hecke algebra, so we have no immediate use for Arithmetic Transfer conjectures for all integral Hecke correspondences that we construct in this way.
In view of the global nature of the proof of Leslie's theorem and of the various FL statements for full Hecke algebras in the Langlands program, it is natural to speculate that a proof of our AFL conjecture would necessarily require a global input. It seems that the most promising approach is to study the global \(p\)-adic height pairing of Nekovar, which satisfies a "modularity" condition, in the sense that the action of the Hecke algebra on this height pairing factors through an action on automorphic forms. One may leverage this modularity to deduce a version of the (global) relative trace formula identities for the _full Hecke algebra_, from the a priori weaker result for a _partial Hecke algebra_ (i.e., locally the unit element at inert place and arbitrary at split places). One may then hope to deduce from such global identities the AFL identity (1.0.4). The aforementioned work of Disegni and the third author on \(p\)-adic height pairings [6] involves the projection to the _ordinary part_; this is an obstacle to pushing through this proof strategy.
The layout of the paper is as follows. After a notation section, we recall in SS3 the set-up and review the formulation of the Jacquet-Rallis transfer and Fundamental Lemma for the full spherical Hecke algebra. In SS4 we define atomic Hecke functions and exhibit the spherical Hecke algebra as the polynomial algebra of the atomic elements. In SS5 we define various Rapoport-Zink spaces with certain parahoric levels, and use them to define Hecke correspondences. We then formulate the commutativity conjecture. In SS6 we state the AFL conjecture for the spherical Hecke algebra, and in SS7 we prove it in the case \({\rm U}(1)\times{\rm U}(2)\). In SS8 we formulate a conjecture on the abundance of spherical Hecke functions with identically vanishing first derivative of orbital integrals. In SS9 we collect a few facts on the general theory of correspondences.
We thank Tony Feng, Benjamin Howard and Andreas Mihatsch for their help. We thank SLMath for its hospitality to all three of us during the Spring 2023 semester on "Algebraic cycles, L-values, and Euler systems" when part of this work was done. CL was supported by the NSF grant DMS #2101157. MR was supported by the Simons foundation. WZ was supported by the NSF grant DMS #1901642 and the Simons Foundation.
## 2. Notations
Let \(p>2\) be a prime. Let \(F_{0}\) be a finite extension of \(\mathbb{Q}_{p}\), with ring of integers \(O_{F_{0}}\), residue field \(k=\mathbb{F}_{q}\) of size \(q\), and uniformizer \(\varpi\). Let \(F\) be the unramified quadratic extension of \(F_{0}\), with ring of integers \(O_{F}\) and residue field \(k_{F}\). Let \(\sigma\) be the nontrivial Galois automorphism of \(F/F_{0}\). Fix \(\delta\in O_{F}^{\times}\) such that \(\sigma(\delta)=-\delta\). Let \(\operatorname{val}:F\to\mathbb{Z}\cup\{\infty\}\) be the valuation on \(F\). Let \(|\cdot|_{F}:F\to\mathbb{R}_{\geq 0}\) (resp. \(|\cdot|:F_{0}\to\mathbb{R}_{\geq 0}\)) be the normalized absolute value on \(F\) (resp. \(F_{0}\)). Let \(\eta=\eta_{F/F_{0}}:F_{0}^{\times}\to\{\pm 1\}\) be the quadratic character associated to \(F/F_{0}\). We let \(\widetilde{\eta}:F^{\times}\to\{\pm 1\}\) be the unique unramified quadratic character extending \(\eta\). Let \(\breve{F}\) be the completion of the maximal unramified extension of \(F\), and \(O_{\breve{F}}\) its ring of integers, and \(\bar{k}\) its residue field.
For a linear algebraic group \(G\) over \(F_{0}\), we use the notation \(C_{0}^{\infty}(G)\) for \(C_{0}^{\infty}(G(F_{0}))\).
## 3. FL for the full spherical Hecke algebra
In this section we review the formulation of the Jacquet-Rallis transfer and Fundamental Lemma for the full spherical Hecke algebra.
### Groups
We recall the group-theoretic setup of [28, SS2] in both homogeneous and inhomogeneous settings. Let \(n\geq 1\). In the homogeneous setting, set
\[G^{\prime}:=\operatorname{Res}_{F/F_{0}}(\operatorname{GL}_{n}\times \operatorname{GL}_{n+1}), \tag{3.1.1}\]
a reductive algebraic group over \(F_{0}\). Let \(W\) be a \(F/F_{0}\)-hermitian space of dimension \(n+1\). Fix \(u\in W\) a non-isotropic vector (the _special vector_), and let \(W^{\flat}=\langle u\rangle^{\perp}\). Set
\[G_{W}=\operatorname{U}(W^{\flat})\times\operatorname{U}(W), \tag{3.1.2}\]
a reductive algebraic group over \(F_{0}\). We have the notion of a _regular semi-simple element_, for \(\gamma\in G^{\prime}(F_{0})\) and for \(g\in G_{W}(F_{0})\). The notions of regular semi-simple elements are with respect to the action of the reductive algebraic group over \(F_{0}\),
\[H^{\prime}_{1,2}=H^{\prime}_{1}\times H^{\prime}_{2}:=\operatorname{Res}_{F/F_ {0}}(\operatorname{GL}_{n})\times(\operatorname{GL}_{n}\times\operatorname{ GL}_{n+1})\]
on \(G^{\prime}\), resp., of \(\operatorname{U}(W^{\flat})\times\operatorname{U}(W^{\flat})\) on \(G_{W}\). The sets of regular semi-simple elements are denoted by \(G^{\prime}(F_{0})_{\operatorname{rs}}\) and \(G_{W}(F_{0})_{\operatorname{rs}}\) respectively. We choose a basis of \(W\) by first choosing a basis of \(W^{\flat}\) and then adding the special vector as the last basis vector. This then gives an identification of \(G_{W}\otimes_{F_{0}}F\) with \(G^{\prime}\otimes_{F_{0}}F\), and defines the notion of _matching_\(\gamma\leftrightarrow g\) between regular semi-simple elements of \(G_{W}(F_{0})\) and \(G^{\prime}(F_{0})\), cf. [28, SS2].
In the inhomogeneous setting, recall the symmetric space
\[S=S_{n+1}:=\{g\in\operatorname{Res}_{F/F_{0}}\operatorname{GL}_{n+1}\mid g \overline{g}=1_{n+1}\} \tag{3.1.3}\]
and the map \(r:\operatorname{Res}_{F/F_{0}}\operatorname{GL}_{n+1}\to S\) given by \(g\mapsto\gamma=g\overline{g}^{-1}\), which induces an isomorphism
\[(\operatorname{Res}_{F/F_{0}}\operatorname{GL}_{n+1})/\operatorname{GL}_{n+1} \simeq S.\]
We have the notion of a _regular semi-simple element_, for \(\gamma\in S(F_{0})\) and for \(g\in\operatorname{U}(W)(F_{0})\) and, after the choice of a basis of \(W\) as above, the notion of _matching_\(\gamma\leftrightarrow g\). The notions of regular semi-simple elements are with respect to the conjugation actions of \(H^{\prime}:=\operatorname{GL}_{n}\) on \(S\), resp., of \(H:=\operatorname{U}(W^{\flat})\) on \(\operatorname{U}(W)\). The sets of regular semi-simple elements are denoted by \(S(F_{0})_{\operatorname{rs}}\) and \(\operatorname{U}(W)(F_{0})_{\operatorname{rs}}\) respectively.
### Orbital integrals
We recall the orbital integrals in both homogeneous and inhomogeneous settings, following [28, SS5]. In the homogeneous setting, for \(\gamma\in G^{\prime}(F_{0})_{\operatorname{rs}}\), a function \(\varphi\in C_{0}^{\infty}(G^{\prime})\) and a complex parameter \(s\in\mathbb{C}\), we define
\[\operatorname{Orb}(\gamma,\varphi,s):=\int_{H^{\prime}_{1,2}(F_{0})}\varphi(h _{1}^{-1}\gamma h_{2})|\mathrm{det}\,h_{1}|_{F}^{s}\eta(\mathrm{det}\,h_{2}) \,\mathrm{d}h_{1}\,\mathrm{d}h_{2}, \tag{3.2.1}\]
where we use fixed Haar measures on \(H^{\prime}_{1}(F_{0})\) and \(H^{\prime}_{2}(F_{0})\) and the product Haar measure on \(H^{\prime}_{1,2}(F_{0})=H^{\prime}_{1}(F_{0})\times H^{\prime}_{2}(F_{0})\). We further define the value and derivative at \(s=0\),
\[\operatorname{Orb}(\gamma,\varphi):=\operatorname{Orb}(\gamma,\varphi,0) \quad\text{and}\quad\partial\operatorname{Orb}(\gamma,\varphi):=\frac{ \mathrm{d}}{\mathrm{d}s}\Big{|}_{s=0}\operatorname{Orb}(\gamma,\varphi,s). \tag{3.2.2}\]
The integral defining \(\operatorname{Orb}(\gamma,\varphi,s)\) is absolutely convergent, and depends only on the orbit of \(\gamma\).
Now we turn to the inhomogeneous setting. For \(\gamma\in S(F_{0})_{\operatorname{rs}}\), a function \(\phi\in C_{c}^{\infty}(S)\), and a complex parameter \(s\in\mathbb{C}\), we introduce the _weighted orbital integral_
\[\operatorname{Orb}(\gamma,\phi,s):=\int_{H^{\prime}(F_{0})}\phi(h^{-1}\gamma h )|\mathrm{det}\,h|^{s}\eta(\mathrm{det}\,h)\,\mathrm{d}h, \tag{3.2.3}\]
as well as the value and derivative at \(s=0\),
\[\operatorname{Orb}(\gamma,\phi):=\operatorname{Orb}(\gamma,\phi,0)\quad\text{ and}\quad\partial\operatorname{Orb}(\gamma,\phi):=\frac{\mathrm{d}}{\mathrm{d}s} \Big{|}_{s=0}\operatorname{Orb}(\gamma,\phi,s).\]
As in the homogeneous setting, the integral defining \(\operatorname{Orb}(\gamma,\phi,s)\) is absolutely convergent, and depends only on the orbit of \(\gamma\).
### Matching and transfer
Let \(W_{0}\), \(W_{1}\) be representatives of the two isomorphism classes of \(F/F_{0}\)-hermitian spaces of dimension \(n+1\). We assume \(W_{0}\) to be split. Take the special vectors \(u_{0}\in W_{0}\) and \(u_{1}\in W_{1}\) to have the same norm (not necessarily a unit). We also choose bases of \(W_{0}\) and \(W_{1}\) as above. Then matching defines bijections of regular semisimple orbits
\[\big{[}G_{W_{0}}(F_{0})\big{]}_{\operatorname{rs}}\coprod\big{[}G_{W_{1}}(F_{0 })\big{]}_{\operatorname{rs}}\xrightarrow{\sim}\big{[}G^{\prime}(F_{0})\big{]} _{\operatorname{rs}} \tag{3.3.1}\]
in the homogeneous setting, and
\[\left[\mathrm{U}(W_{0})(F_{0})\right]_{\mathrm{rs}}\coprod\left[\mathrm{U}(W_{1}) (F_{0})\right]_{\mathrm{rs}}\stackrel{{\sim}}{{\longrightarrow}} \left[S(F_{0})\right]_{\mathrm{rs}} \tag{3.3.2}\]
in the inhomogeneous setting, cf. [28, SSSS2.1, 2.2].
Then associated to a transfer factor \(\omega_{G^{\prime}}:G^{\prime}(F_{0})_{\mathrm{rs}}\to\mathbb{C}^{\times}\) we have the notion of _transfer_ between functions \(\varphi\in C_{c}^{\infty}(G^{\prime})\) and pairs of functions \((f_{0},f_{1})\in C_{c}^{\infty}(G_{W_{0}})\times C_{c}^{\infty}(G_{W_{1}})\) ([29, Definition 2.2]). We will always use the transfer factor given by [29, (5.2)] (extrapolated in the obvious way from odd \(n\) to even \(n\)). Similarly, for a transfer factor \(\omega_{S}:S(F_{0})_{\mathrm{rs}}\to\mathbb{C}^{\times}\) we have the notion of _transfer_ between functions \(\phi\in C_{c}^{\infty}(S)\) and pairs of functions \((f_{0},f_{1})\in C_{c}^{\infty}(\mathrm{U}(W_{0}))\times C_{c}^{\infty}( \mathrm{U}(W_{1}))\) ([29, Definition 2.4]) We will always use the transfer factor given by [29, (5.5)] (again extrapolated to all \(n\)).
### Satake isomorphism and base change
Let \(K^{\prime}=K^{\prime}_{n}=\mathrm{GL}_{n}(O_{F})\). We denote by \(\mathcal{H}_{K^{\prime}}=\mathbb{Q}[K^{\prime}_{n}\backslash\mathrm{GL}_{n}(F )/K^{\prime}_{n}]\) the Hecke algebra with coefficients in \(\mathbb{Q}\) of \(\mathrm{GL}_{n}\).
We first recall the Satake isomorphism for \(\mathrm{GL}_{n}\) over \(F\). Let \(T\subseteq\mathrm{GL}_{n}\) be the diagonal torus. The cocharacter group \(X_{*}(T)\) is a free abelian group generated by \(\{\mu_{1},\dots,\mu_{n}\}\), where \(\mu_{i}\) is the injection in the \(i\)-th factor. For \(\mu\in X_{*}(T)\), denote by \([\mu]\) the corresponding element in the group algebra \(\mathbb{C}[X_{*}(T)]\). For \(1\leq i\leq n\), define
\[x_{i}:=[\mu_{i}]\in\mathbb{C}[X_{*}(T)].\]
Let \(\sigma_{i}\) be the degree \(i\) elementary symmetric polynomial in \(\{x_{1},\dots,x_{n}\}\). Then the Satake transform gives an isomorphism of algebras
\[\mathrm{Sat}:\mathcal{H}_{K^{\prime}}\otimes_{\mathbb{Q}}\mathbb{C} \stackrel{{\sim}}{{\longrightarrow}}\mathbb{C}[\sigma_{1},\dots, \sigma_{n-1},\sigma_{n}^{\pm}]=\mathbb{C}[T(F)/T(O_{F})]^{S_{n}}. \tag{3.4.1}\]
It sends the minuscule function \(\mathbf{1}_{K^{\prime}\varpi^{(1^{i},0^{n-i})}K^{\prime}}\) to \(q_{F}^{i(n-i)/2}\sigma_{i}=q^{i(n-i)}\sigma_{i}\), which corresponds to the sum of the elements in the \(S_{n}\)-orbit of \(\varpi^{(1^{i},0^{n-i})}T(O_{F})\). Note that since the modulus function \(\delta_{B}^{\frac{1}{2}}\) takes values in \(\mathbb{Q}\), the homomorphism (3.4.1) is defined over \(\mathbb{Q}\).
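For instance, for \(n=2\) this reads
\[\mathrm{Sat}(\mathbf{1}_{K^{\prime}\varpi^{(1,0)}K^{\prime}})=q\,(x_{1}+x_{2})=q\,\sigma_{1},\qquad\mathrm{Sat}(\mathbf{1}_{K^{\prime}\varpi^{(1,1)}K^{\prime}})=x_{1}x_{2}=\sigma_{2},\]
the second equality because \(\varpi^{(1,1)}\) is central, so that \(\mathbf{1}_{K^{\prime}\varpi^{(1,1)}K^{\prime}}=\mathbf{1}_{\varpi K^{\prime}}\) and the normalizing power of \(q_{F}\) is trivial.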
Next we recall the Satake isomorphism for the unramified unitary group. Let now \(W_{0}\) be a split \(F/F_{0}\)-hermitian space of dimension \(n\). We choose a basis of \(W\) such that the hermitian form is given by the antidiagonal unit matrix. Let \(\Xi\subset W_{0}\) be the standard lattice, which is self-dual, and let \(K\subset\mathrm{U}(W_{0})(F_{0})\) be its stabilizer. Let \(\mathcal{H}_{K}=\mathbb{Q}[K\backslash\mathrm{U}(W_{0})(F_{0})/K]\) be the Hecke algebra with coefficients in \(\mathbb{Q}\) of \(\mathrm{U}(W_{0})\). We recall the Satake isomorphism for \(\mathrm{U}(W_{0})\). Let \(A\) be the maximal split diagonal torus in \(\mathrm{U}(W_{0})\). Let \(m=\lfloor n/2\rfloor\). For \(1\leq s\leq m\), define
\[y_{s}:=[\mu_{s}-\mu_{n+1-s}]+[\mu_{n+1-s}-\mu_{s}]=\frac{x_{s}}{x_{n+1-s}}+ \frac{x_{n+1-s}}{x_{s}}\in\mathbb{C}[X_{*}(A)].\]
Let \(\mathfrak{s}_{s}\) be the degree \(s\) elementary symmetric polynomial in \(\{y_{1},\dots,y_{m}\}\). Then the Satake transform gives an isomorphism of algebras
\[\mathrm{Sat}:\mathcal{H}_{K}\otimes_{\mathbb{Q}}\mathbb{C}\stackrel{{ \sim}}{{\longrightarrow}}\mathbb{C}[\mathfrak{s}_{1},\dots,\mathfrak{s}_{m} ]=\mathbb{C}[A(F_{0})/A(O_{F_{0}})]^{W_{n}}, \tag{3.4.2}\]
where \(W_{n}\simeq(\mathbb{Z}/2\mathbb{Z})^{m}\rtimes S_{m}\) is the Weyl group of \(A(F_{0})\) in \(\mathrm{U}(W_{0})(F_{0})\). In particular, \(\mathcal{H}_{K}\) is a polynomial algebra. Analogous to the case of \(\mathrm{GL}_{n}\) (over \(F\)), the modulus function \(\delta_{\bar{B}}^{\frac{1}{2}}\) takes values in \(\mathbb{Q}\), hence the homomorphism (3.4.2) is defined over \(\mathbb{Q}\).
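For orientation, in the smallest case \(n=2\) (so \(m=1\)) the above specializes as follows: \(A=\{\operatorname{diag}(t,t^{-1})\}\), the Weyl group \(W_{2}\simeq\mathbb{Z}/2\mathbb{Z}\) acts by \(t\mapsto t^{-1}\), and
\[\mathcal{H}_{K}\otimes_{\mathbb{Q}}\mathbb{C}\ \stackrel{{\sim}}{{\longrightarrow}}\ \mathbb{C}[\mathfrak{s}_{1}],\qquad\mathfrak{s}_{1}=y_{1}=\frac{x_{1}}{x_{2}}+\frac{x_{2}}{x_{1}},\]
so the spherical Hecke algebra of \(\operatorname{U}(W_{0})\) is a polynomial algebra in a single variable.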
We have an algebra homomorphism, called the base change homomorphism,
\[\mathrm{BC}:\mathcal{H}_{K^{\prime}}\to\mathcal{H}_{K},\quad\varphi\longmapsto \mathrm{BC}(\varphi). \tag{3.4.3}\]
The homomorphism \(\mathrm{BC}\) is characterized by the identity
\[\mathrm{Sat}(\varphi)(\mathrm{bc}(\alpha))=\mathrm{Sat}(\mathrm{BC}(\varphi) )(\alpha).\]
Here on the LHS appears the natural inclusion,
\[\mathrm{bc}\colon(\mathbb{C}^{n})^{\mathrm{unit}}\hookrightarrow(\mathbb{C}^{ \times})^{n}.\]
Here \((\mathbb{C}^{n})^{\mathrm{unit}}\) denotes the space of unitary parameters in \(\mathbb{C}^{n}\),
\[\begin{split}(\mathbb{C}^{n})^{\mathrm{unit}}=\{\alpha=(\alpha_{1 },\dots,\alpha_{n})\in\mathbb{C}^{n}\mid\alpha_{i}\alpha_{n+1-i}=1\ \forall i,\\ \text{and}\ \alpha_{m+1}=1\ \text{if}\ n=2m+1\}.\end{split} \tag{3.4.4}\]
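Concretely, in the two smallest cases,
\[(\mathbb{C}^{2})^{\mathrm{unit}}=\{(\alpha,\alpha^{-1})\mid\alpha\in\mathbb{C}^{\times}\},\qquad(\mathbb{C}^{3})^{\mathrm{unit}}=\{(\alpha,1,\alpha^{-1})\mid\alpha\in\mathbb{C}^{\times}\},\]
so a unitary parameter is determined by its first \(m\) coordinates.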
We denote by the same symbol the induced homomorphism,
\[\mathrm{BC}\colon\mathbb{Q}[\sigma_{1},\sigma_{2},\dots,\sigma_{n-1},\sigma_{ n}^{\pm}]\longrightarrow\mathbb{Q}[\mathfrak{s}_{1},\dots,\mathfrak{s}_{m}],\]
which induces \(\mathrm{bc}\) on the corresponding spectra. We obtain a commutative diagram
\[\begin{array}{ccc}\mathcal{H}_{K^{\prime}}\otimes_{\mathbb{Q}}\mathbb{C}&\xrightarrow{\ \mathrm{Sat}\ }&\mathbb{C}[\sigma_{1},\dots,\sigma_{n-1},\sigma_{n}^{\pm}]\\ \downarrow{\scriptstyle\mathrm{BC}}&&\downarrow{\scriptstyle\mathrm{BC}}\\ \mathcal{H}_{K}\otimes_{\mathbb{Q}}\mathbb{C}&\xrightarrow{\ \mathrm{Sat}\ }&\mathbb{C}[\mathfrak{s}_{1},\dots,\mathfrak{s}_{m}]\end{array}\tag{3.4.5}\]
For example, we have
\[\mathrm{BC}(\sigma_{1})=\begin{cases}\mathfrak{s}_{1}&\text{if $n$ is even}\\ \mathfrak{s}_{1}+1&\text{if $n$ is odd}.\end{cases}\]
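This identity can be verified directly on unitary parameters, in the same way as the computation for \(\sigma_{2}\) in the next subsection: for \(\alpha\in(\mathbb{C}^{n})^{\mathrm{unit}}\),
\[\sigma_{1}(\alpha_{1},\dots,\alpha_{n})=\sum_{i=1}^{n}\alpha_{i}=\sum_{1\leq i\leq m}(\alpha_{i}+\alpha_{i}^{-1})\ \big(+\,1\ \text{ if $n$ is odd}\big),\]
using \(\alpha_{i}\alpha_{n+1-i}=1\) and, when \(n=2m+1\), \(\alpha_{m+1}=1\); reading the right-hand side as a polynomial in the \(\alpha_{i}+\alpha_{i}^{-1}\) gives \(\mathfrak{s}_{1}\) for \(n\) even and \(\mathfrak{s}_{1}+1\) for \(n\) odd.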
### Some explicit examples
In this subsection we make a digression to give some examples of Satake transforms and the base change homomorphism. Most of it will not be used later. We use for \(W_{0}\) the Hermitian form on the standard vector space given by an anti-diagonal matrix.
For even \(t\) with \(0\leq t\leq n\), we introduce the functions
\[f^{[t]}:=\mathbf{1}_{K\varpi^{(1^{t/2},0^{n-t},(-1)^{t/2})}K}\in\mathcal{H}_{K}. \tag{3.5.1}\]
Note that these functions form a polynomial basis of \(\mathcal{H}_{K}\). We wish to determine their Satake transforms.
We recall some results from [22]. Let \(\hat{T}\subset\mathrm{GL}_{n}(\mathbb{C})\) be the diagonal torus in the dual group of \(\mathrm{GL}_{n}\). Let \(\chi(\rho_{n,s})\) be the restriction of the character of \(\wedge^{s}Std\otimes\wedge^{s}Std^{\vee}\) to \(\hat{T}\times\{\sigma\}\subset\mathrm{GL}_{n}(\mathbb{C})\rtimes\mathrm{Gal}(F/ F_{0})\), viewed as an element in \(\mathbb{Z}[X^{*}(\hat{T})]=\mathbb{Z}[X_{*}(T)]\) (in fact it lies in
\(\mathbb{Z}[X_{*}(T)]^{W_{n}}=\mathbb{C}[\mathfrak{s}_{1},\cdots,\mathfrak{s}_{m}]\)). Here we refer to loc. cit. for the definition of the semi-direct product (defining the L-group of \(\mathrm{U}(W_{0})\)) and of the action of \(\sigma\) on the representation space. Then [22, Lem. B.2] (the latter is also [34, Lem. 9.2.4])
\[\chi(\rho_{n,s})=\begin{cases}\sum_{j=0}^{[s/2]}\begin{pmatrix}m-(s-2j)\\ j\end{pmatrix}\mathfrak{s}_{s-2j},&n\text{ even}\\ \sum_{i=0}^{s}\begin{pmatrix}m-(s-i)\\ [i/2]\end{pmatrix}\mathfrak{s}_{s-i},&n\text{ odd}.\end{cases}\]
Set
\[[n]_{q}=\frac{q^{n}-1}{q-1},\quad[n]_{q}!=[n]_{q}[n-1]_{q}\cdots[1]_{q},\quad \begin{bmatrix}n\\ m\end{bmatrix}_{q}=\frac{[n]_{q}!}{[m]_{q}![n-m]_{q}!}\]
The Satake transforms of \(f^{[2s]}\), \(1\leq s\leq m\), are determined by the following identity in \(\mathbb{C}[\mathfrak{s}_{1},\dots,\mathfrak{s}_{m}]\), cf. [22, Lem. 2.6],
\[q^{s(n-s)}\chi(\rho_{n,s})=\sum_{i=0}^{s}\begin{bmatrix}n-2i\\ s-i\end{bmatrix}_{-q}\text{Sat}(f^{[2i]}),\quad 1\leq s\leq m.\]
For completeness we also recall [22, Lem. B.1.3, B.1.4]:
\[\prod_{t=1}^{m}(\lambda+\lambda^{-1}+y_{t})=\begin{cases}\chi(\rho_{n,m})+\sum _{i=1}^{m}\chi(\rho_{n,m-i})(\lambda^{i}+\lambda^{-i}),&n\text{ even}\\ \sum_{i=0}^{m}\chi(\rho_{n,m-i})\frac{(\lambda^{i+1}+\lambda^{-i})}{\lambda+1},&n\text{ odd},\end{cases}\]
as an identity of finite Laurent series in \(\lambda\).
**Example 3.5.1**.: Taking \(s=1\) we obtain
\[\begin{bmatrix}n\\ 1\end{bmatrix}_{-q}\text{Sat}(f^{[0]})+\begin{bmatrix}n-2\\ 0\end{bmatrix}_{-q}\text{Sat}(f^{[2]})=q^{n-1}\begin{cases}\mathfrak{s}_{1},&n \text{ even},\\ \mathfrak{s}_{1}+1,&n\text{ odd}.\end{cases}\]
Therefore, using \(\mathrm{Sat}(f^{[0]})=\mathrm{Sat}(\mathbf{1}_{K})=1\) and \(\begin{bmatrix}n-2\\ 0\end{bmatrix}_{-q}=1\),
\[\text{Sat}(f^{[2]})=-[n]_{-q}+q^{n-1}\begin{cases}\mathfrak{s}_{1},&n\text{ even},\\ \mathfrak{s}_{1}+1,&n\text{ odd}.\end{cases}\]
Taking \(s=2\) we obtain
\[\begin{bmatrix}n\\ 2\end{bmatrix}_{-q}\text{Sat}(f^{[0]})+\begin{bmatrix}n-2\\ 1\end{bmatrix}_{-q}\text{Sat}(f^{[2]})+\begin{bmatrix}n-4\\ 0\end{bmatrix}_{-q}\text{Sat}(f^{[4]})=q^{2(n-2)}\begin{cases}\mathfrak{s}_{2}+ m,&n\text{ even},\\ \mathfrak{s}_{2}+\mathfrak{s}_{1}+m,&n\text{ odd}.\end{cases}\]
We also describe some explicit functions \(\varphi^{\prime[t]}\) such that \(\mathrm{BC}(\varphi^{\prime[t]})=f^{[t]}\) for even \(t\). For \(\varphi^{\prime}\in\mathcal{H}_{K^{\prime}}\), we view \(\varphi^{\prime}(\alpha_{1},\dots,\alpha_{n})\), where \((\alpha_{1},\dots,\alpha_{n})\in(\mathbb{C}^{n})^{\mathrm{unit}}\), as a symmetric polynomial
in \(\alpha_{i}+\alpha_{i}^{-1}\) for \(1\leq i\leq m\), then the resulting polynomial is nothing but \(\operatorname{BC}(\varphi^{\prime})\). For example,
\[\sigma_{2}(\alpha_{1},\dots,\alpha_{n})=\sum_{1\leq i<j\leq n} \alpha_{i}\alpha_{j}= \sum_{1\leq i<j\leq m}(\alpha_{i}+\alpha_{i}^{-1})(\alpha_{j}+ \alpha_{j}^{-1})\] \[+\sum_{1\leq i\leq m}\alpha_{i}\alpha_{i}^{-1}(+\sum_{1\leq i\leq m }(\alpha_{i}+\alpha_{i}^{-1})\text{ if $n$ is odd}).\]
Thus
\[\operatorname{BC}(\sigma_{2})=\begin{cases}\mathfrak{s}_{2}+m,& \text{$n$ even},\\ \mathfrak{s}_{2}+\mathfrak{s}_{1}+m,&\text{$n$ odd}.\end{cases}\]
In general, for \(1\leq s\leq m\), we have
\[\operatorname{BC}(\sigma_{s})=\chi(\rho_{n,s}).\]
Thus \(\{\operatorname{Sat}(f^{[2s]})\}\) can be written as linear combination of \(\{\operatorname{BC}(\sigma_{s})\}\) given by
\[\begin{pmatrix}\operatorname{Sat}(f^{[0]})\\ \operatorname{Sat}(f^{[2]})\\ \vdots\\ \operatorname{Sat}(f^{[2m]})\end{pmatrix}=\begin{pmatrix}\left[\begin{smallmatrix} n\\ 0\end{smallmatrix}\right]_{-q}&0&\cdots&0\\ \left[\begin{smallmatrix}n\\ 1\end{smallmatrix}\right]_{-q}&\left[\begin{smallmatrix}n-2\\ 0\end{smallmatrix}\right]_{-q}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \left[\begin{smallmatrix}n\\ m\end{smallmatrix}\right]_{-q}&\left[\begin{smallmatrix}n-2\\ m-1\end{smallmatrix}\right]_{-q}&\cdots&\left[\begin{smallmatrix}n-2m\\ 0\end{smallmatrix}\right]_{-q}\end{pmatrix}^{-1}\cdot\begin{pmatrix}1\\ q^{n-1}\cdot\operatorname{BC}(\sigma_{1})\\ \vdots\\ q^{m(n-m)}\cdot\operatorname{BC}(\sigma_{m})\end{pmatrix}\]
### The fundamental lemma, homogeneous version
We apply the preceding considerations to a split space \(W_{0}\) of dimension \(n+1\) and to \(W_{0}^{\flat}=\langle u_{0}\rangle^{\perp}\), where the special vector \(u_{0}\in\Xi\) has unit length. We denote by \(K^{\flat}\) the stabilizer of the selfdual lattice \(\Xi^{\flat}:=W_{0}^{\flat}\cap\Xi\). The Hecke algebra \(\mathcal{H}_{K^{\flat}\times K}\) for \(\operatorname{U}(W_{0}^{\flat})\times\operatorname{U}(W_{0})\) can be identified with the tensor product of algebras \(\mathcal{H}_{K^{\flat}}\otimes_{\mathbb{Q}}\mathcal{H}_{K}\). The analogous facts hold for the triple \((\operatorname{GL}_{n},\operatorname{GL}_{n+1},\operatorname{GL}_{n}\times\operatorname{GL}_{n+1})\) and the standard open compact subgroups \((K^{\prime\flat},K^{\prime},K^{\prime\flat}\times K^{\prime})\). We use the same symbol \(\operatorname{BC}\) for the analogous algebra homomorphisms. More precisely, we have
\[\operatorname{BC}_{n}:\mathcal{H}_{K^{\prime\flat}}\to\mathcal{H}_{K^{\flat}},\quad\operatorname{BC}_{n+1}:\mathcal{H}_{K^{\prime}}\to\mathcal{H}_{K}, \quad\operatorname{BC}=\operatorname{BC}_{n}\otimes\operatorname{BC}_{n+1}: \mathcal{H}_{K^{\prime\flat}}\otimes_{\mathbb{Q}}\mathcal{H}_{K^{\prime}}\to \mathcal{H}_{K^{\flat}}\otimes_{\mathbb{Q}}\mathcal{H}_{K}.\]
We have the following fundamental lemma for the spherical Hecke algebra in the Jacquet-Rallis case. In it, we let \(W_{1}\) be the non-split space of dimension \(n+1\) and \(W_{1}^{\flat}=\langle u_{1}\rangle^{\perp}\), where the special vector \(u_{1}\) has unit length. Also, we have chosen bases of \(W_{0}\) and \(W_{1}\) as usual.
**Theorem 3.6.1** (FL for the spherical Hecke algebra (homogeneous version) (Leslie [19])).: _Let \(\varphi^{\prime}\in\mathcal{H}_{K^{\prime\flat}}\otimes\mathcal{H}_{K^{\prime}}\). Then_
\[(\operatorname{BC}(\varphi^{\prime}),0)\in C_{c}^{\infty}(G_{W_{0}})\times C_{ c}^{\infty}(G_{W_{1}})\]
_is a transfer of \(\varphi^{\prime}\)._
### The fundamental lemma, inhomogeneous version
Recall the action of \(\operatorname{GL}_{n}\) on \(S=S_{n}\), cf. (3.1.3) (except that here \(n\) replaces \(n+1\)). Let
\[\mathcal{H}_{K^{\prime}_{S_{n}}}=\mathcal{C}_{c}^{\infty}(S_{n}(F_{0}))^{K^{ \prime}_{n}}.\]
Note that this is a module over the Hecke algebra \(\mathcal{H}_{K^{\prime}_{n}}=\mathcal{H}_{K^{\prime}_{n}}(\operatorname{GL}_{ n})\). We obtain a map
\[\operatorname{act}:\mathcal{H}_{K^{\prime}_{n}}\longrightarrow\mathcal{H}_{K ^{\prime}_{S_{n}}},\quad f^{\prime}\longmapsto f^{\prime}*\mathbf{1}_{K^{ \prime}_{S_{n}}}. \tag{3.7.1}\]
Here \(\mathbf{1}_{K^{\prime}_{S_{n}}}\) denotes the characteristic function of \(K^{\prime}_{S_{n}}=K^{\prime}_{n}\cdot 1\).
There is an alternative description. Let \(r:G^{\prime}\to S_{n}\) be the map \(g\mapsto g\overline{g}^{-1}\). We also have the map induced by integration on the fibers
\[r_{*}:\mathcal{H}_{K^{\prime}_{n}}\longrightarrow\mathcal{H}_{K^{\prime}_{S_{ n}}}, \tag{3.7.2}\]
which sends \(f^{\prime}\) to \(f^{{}^{\prime}\natural}\) defined by
\[f^{{}^{\prime}\natural}(g\overline{g}^{-1})=\int_{\operatorname{GL}_{n}(F_{0} )}f^{\prime}(gh)\,dh.\]
Here we choose the Haar measure on \(\operatorname{GL}_{n}(F_{0})\) such that \(\operatorname{vol}(\operatorname{GL}_{n}(O_{F_{0}}))=1\). Then it is easy to check that the two maps (3.7.1) and (3.7.2) coincide, cf. [19, Lem. 3.3].
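As a simple consistency check (assuming the Haar measure on \(\operatorname{GL}_{n}(F)\) is normalized so that \(\operatorname{vol}(K^{\prime}_{n})=1\)), both maps send the unit element to the characteristic function of \(K^{\prime}_{S_{n}}\),
\[\operatorname{act}(\mathbf{1}_{K^{\prime}_{n}})=r_{*}(\mathbf{1}_{K^{\prime}_{n}})=\mathbf{1}_{K^{\prime}_{S_{n}}}.\]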
Using a theorem of Offen [25], Leslie shows that both maps factor through the base change homomorphism \(\operatorname{BC}=\operatorname{BC}_{n}:\mathcal{H}_{K^{\prime}_{n}}( \operatorname{GL}_{n})\rightarrow\mathcal{H}_{K}\) and induce an isomorphism with the Hecke algebra for the quasi-split unitary group,
\[\operatorname{BC}_{S_{n}}:\mathcal{H}_{K^{\prime}_{S}}\stackrel{{ \sim}}{{\longrightarrow}}\mathcal{H}_{K}, \tag{3.7.3}\]
cf. [19, Cor. 3.5]. We thus have the following commutative diagram,
(3.7.4)
For the inhomogeneous \(\operatorname{FL}\) we need a twist by the algebra automorphism \(\eta_{\mathcal{H}}:\mathcal{H}_{K^{\prime}_{n}}\rightarrow\mathcal{H}_{K^{ \prime}_{n}}\) defined by \(f\mapsto f\widetilde{\eta}\) where \(\widetilde{\eta}(g)=(-1)^{v(\det(g))}\) is our fixed extension of the character \(\eta\). In terms of the Satake isomorphism (3.4.1), this automorphism translates to the map \(x_{i}\mapsto-x_{i}\). In particular, this shows that the map \(\eta_{\mathcal{H}}\) is an algebra homomorphism. Via the Satake isomorphism we see that the involution \(\eta_{\mathcal{H}}\) descends to \(\mathcal{H}_{K}\), which we also denote by \(\eta_{\mathcal{H}}\). We have a commutative diagram
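\[\begin{array}{ccc}\mathcal{H}_{K^{\prime}}&\stackrel{\eta_{\mathcal{H}}}{\longrightarrow}&\mathcal{H}_{K^{\prime}}\\ {\scriptstyle\operatorname{BC}}\big\downarrow&&\big\downarrow{\scriptstyle\operatorname{BC}}\\ \mathcal{H}_{K}&\stackrel{\eta_{\mathcal{H}}}{\longrightarrow}&\mathcal{H}_{K}\end{array}\]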
Let \(i\geq 0\). We introduce the \(\eta^{i}\)-twisted version of (3.7.3) by
\[\operatorname{BC}^{\eta^{i}}_{S_{n}}=\eta^{i}_{\mathcal{H}}\circ\operatorname{ BC}_{S_{n}}:\mathcal{H}_{K^{\prime}_{S}}\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{H}_{K}, \tag{3.7.5}\]
where \(\eta^{i}_{\mathcal{H}}\) is the \(i\)-fold iterate of the automorphism \(\eta_{\mathcal{H}}\). To have a diagram similar to (3.7.4), we introduce the \(\eta^{i}\)-twist of \(r_{*}\):
\[r_{*}^{\eta^{i}}=r_{*}\circ\eta^{i}_{\mathcal{H}}. \tag{3.7.6}\]
Explicitly, we have
\[r_{*}^{\eta^{i}}(f^{\prime})(g\overline{g}^{-1})=\int_{\operatorname{GL}_{n}( F_{0})}f^{\prime}(gh)\widetilde{\eta}^{i}(gh)\,dh. \tag{3.7.7}\]
Then we have a commutative diagram
(3.7.8)
There is then the following inhomogeneous version of the fundamental lemma for the spherical Hecke algebra in the Jacquet-Rallis case. Here we recall that \(W_{0}\) denotes the split hermitian space of dimension \(n+1\) and \(W_{1}\) the non-split space (therefore there is a shift of dimensions compared to above).
**Theorem 3.7.1** (FL for the spherical Hecke algebra (inhomogeneous version) (Leslie [19])).: _Let \(\varphi^{\prime}\in\mathcal{H}_{K^{\prime}_{S_{n+1}}}\). Then\({}^{3}\)_
Footnote 3: Note that in [19], the twist \(\eta^{n}\) is erroneously omitted.
\[(\operatorname{BC}^{\eta^{n}}_{S_{n+1}}(\varphi^{\prime}),0)\in C^{\infty}_{c }(\operatorname{U}(W_{0}))\times C^{\infty}_{c}(\operatorname{U}(W_{1}))\]
_is a transfer of \(\varphi^{\prime}\). _
The inhomogeneous version is equivalent to the special case of the homogeneous version when the factor in \(\mathcal{H}_{K^{\prime}_{n}}\) is the unit element \(\mathbf{1}_{K^{\prime}_{n}}\). To see this, we compare orbital integrals: the homogeneous version defined by (3.2.1) and the inhomogeneous one defined by (3.2.3). The following easy lemma is a combination of [28, Lem. 5.7] and [29, Lem. 14.7 (iii)].
**Lemma 3.7.2**.: _Let \(\varphi^{\prime}\in\mathcal{H}_{K^{\prime\flat}}\otimes\mathcal{H}_{K^{\prime}}\) be of the form \(\mathbf{1}_{K^{\prime\flat}}\otimes f^{\prime}\) with \(f^{\prime}\in\mathcal{H}_{K^{\prime}}\). Then we have, for \(\gamma\in S_{n+1}(F_{0})_{\mathrm{rs}}\) with \(\gamma=r(g)=g\overline{g}^{-1}\) with \(g\in\operatorname{GL}_{n+1}(F)\),_
\[\operatorname{Orb}\bigl{(}(1,g),\mathbf{1}_{K^{\prime\flat}}\otimes f^{ \prime},s\bigr{)}=\widetilde{\eta}^{-n}(g)\operatorname{Orb}(\gamma,r_{*}^{ \eta^{n}}(f^{\prime}),2s).\]
_Moreover, for \(g=(g_{1},g_{2})\in G^{\prime}(F_{0})_{\mathrm{rs}}\) and \(\gamma=r(g_{1}^{-1}g_{2})\in S(F_{0})_{\mathrm{rs}}\), we have_
\[\omega_{G^{\prime}}(g)\operatorname{Orb}\bigl{(}g,\mathbf{1}_{K^{\prime\flat }}\otimes f^{\prime}\bigr{)}=\omega_{S}(\gamma)\operatorname{Orb}(\gamma,r_{*} ^{\eta^{n}}(f^{\prime})),\]
_and when \(\operatorname{Orb}\bigl{(}g,\mathbf{1}_{K^{\prime\flat}}\otimes f^{\prime}\bigr{)}=0\), we have_
\[\omega_{G^{\prime}}(g)\,\partial\operatorname{Orb}\bigl{(}g,\mathbf{1}_{K^{ \prime\flat}}\otimes f^{\prime}\bigr{)}=2\omega_{S}(\gamma)\,\partial \operatorname{Orb}(\gamma,r_{*}^{\eta^{n}}(f^{\prime})).\]
Proof.: By definition of the orbital integral (3.2.1), we have
\[\operatorname{Orb}\bigl{(}(1,g),\varphi^{\prime},s\bigr{)}=\int_{H^{\prime}_{ 1,2}(F_{0})}\varphi^{\prime}(h_{1}^{-1}h_{2}^{\prime},h_{1}^{-1}gh_{2}^{\prime \prime})|\mathrm{det}\,h_{1}|_{F}^{s}\eta(h_{2})\,dh_{1}\,dh_{2}^{\prime}\,dh _{2}^{\prime\prime},\]
where \(h_{1}\in H^{\prime}_{1}(F_{0})=\operatorname{GL}_{n}(F)\) and \(h_{2}=(h_{2}^{\prime},h_{2}^{\prime\prime})\in H^{\prime}_{2}(F_{0})= \operatorname{GL}_{n}(F_{0})\times\operatorname{GL}_{n+1}(F_{0})\). Here \(|\cdot|_{F}\) is the normalized absolute value on \(F\). Also \(\eta(h_{2})=\eta^{n-1}(\mathrm{det}\,h_{2}^{\prime})\eta^{n}(\mathrm{det}\,h_ {2}^{\prime\prime})\). Replacing \(h_{1}\) by \(h_{2}^{\prime}h_{1}\), we have
\[\operatorname{Orb}\bigl{(}(1,g),\varphi^{\prime},s\bigr{)}=\int_{H^{\prime}_ {1,2}(F_{0})}\varphi^{\prime}\bigl{(}h_{1}^{-1},h_{1}^{-1}(h_{2}^{\prime})^{- 1}gh_{2}^{\prime\prime}\bigr{)}|\mathrm{det}(h_{2}^{\prime}h_{1})|_{F}^{s}\eta (h_{2})\,dh_{1}\,dh_{2}^{\prime}\,dh_{2}^{\prime\prime}.\]
Now we specialize to \(\varphi^{\prime}=\mathbf{1}_{K^{\prime\flat}}\otimes f^{\prime}\). Then the above equation simplifies to
\[\operatorname{Orb}\bigl{(}(1,g),\varphi^{\prime},s\bigr{)}=\int_{H^{\prime}_ {2}(F_{0})}f^{\prime}\bigl{(}(h_{2}^{\prime})^{-1}gh_{2}^{\prime\prime}\bigr{)} |\mathrm{det}(h_{2}^{\prime})|_{F_{0}}^{2s}\eta(h_{2})\,dh_{2}^{\prime}\,dh_{2 }^{\prime\prime}.\]
Here we have used \(|a|_{F}=|a|_{F_{0}}^{2}\) for \(a\in F_{0}^{\times}\) and this results in the extra factor \(2\) in the exponent. To apply the definition of \(r_{*}^{\eta^{n}}\), we rewrite it as
\[\operatorname{Orb}\bigl{(}(1,g),\varphi^{\prime},s\bigr{)}=\widetilde{\eta}^{ -n}(g)\int_{H^{\prime}_{2}(F_{0})}f^{\prime}\bigl{(}(h_{2}^{\prime})^{-1}gh_{2} ^{\prime\prime}\bigr{)}|\mathrm{det}(h_{2}^{\prime})|_{F_{0}}^{2s}\eta(h_{2}^ {\prime})\widetilde{\eta}^{n}((h_{2}^{\prime})^{-1}gh_{2}^{\prime\prime})\,dh _{2}^{\prime}\,dh_{2}^{\prime\prime}.\]
We integrate over \(h_{2}^{\prime\prime}\in\operatorname{GL}_{n+1}(F_{0})\) to obtain
\[\operatorname{Orb}\bigl{(}(1,g),\varphi^{\prime},s\bigr{)}=\widetilde{\eta}^ {-n}(g)\int_{\operatorname{GL}_{n}(F_{0})}r_{*}^{\eta^{n}}(f^{\prime})\bigl{(}(h_{2}^ {\prime})^{-1}(g\overline{g}^{-1})h_{2}^{\prime}\bigr{)}|\mathrm{det}(h_{2}^{ \prime})|_{F_{0}}^{2s}\eta(h_{2}^{\prime})\,dh_{2}^{\prime}.\]
A direct comparison with (3.2.3) completes the proof of the first identity.
For the other identities, we compare the transfer factors on \(G^{\prime}\) and \(S\). In fact, by their definitions in [29, §2.4] and noting that \(\widetilde{\eta}\) is of order two by our choice, we see that
\[\omega_{G^{\prime}}(g)=\widetilde{\eta}(g_{1}^{-1}g_{2})^{n}\omega_{S}\bigl{(} r(g_{1}^{-1}g_{2})\bigr{)}.\]
Note that our \(n+1\) corresponds to \(n\) in loc. cit. The desired identities follow immediately.
## 4. An alternative basis of \(\mathcal{H}_{K}\)
### An alternative basis of \(\mathcal{H}_{K}\)
We again denote by \(W_{0}\) a split Hermitian space of dimension \(n\). Let \(m=[n/2]\). Fix a self-dual lattice \(\Xi_{0}\) in \(W_{0}\) and let \(K=K_{0}\) be the stabilizer of \(\Xi_{0}\). Recall that a vertex lattice in \(W_{0}\) is a lattice \(\Lambda\) such that \(\Lambda\subset\Lambda^{\vee}\subset\varpi^{-1}\Lambda\). Here \(\Lambda^{\vee}\) is the dual lattice of \(\Lambda\). The dimension \(t=\dim(\Lambda^{\vee}/\Lambda)\) is called the type of \(\Lambda\). It is an even integer with \(0\leq t\leq n\). We fix a maximal chain of vertex lattices
\[\Xi_{0}\supset\Xi_{2}\supset\cdots\supset\Xi_{2m},\]
where \(\Xi_{t}\) is a type \(t\) vertex lattice, for every even integer \(0\leq t\leq n\). Let \(K_{t}\subset\mathrm{U}(W_{0})\) be the stabilizer of \(\Xi_{t}\), a special maximal parahoric subgroup. Set
\[\varphi_{[t,t^{\prime}]}=\mathbf{1}_{K_{t}K_{t^{\prime}}}\in\mathcal{H}(K_{t} \backslash\mathrm{U}(W_{0})/K_{t^{\prime}}), \tag{4.1.1}\]
which will be called the _intertwining function_ for the level \(K_{t},K_{t^{\prime}}\). (See [22, Definition B.2.3] for the special case \(t=t^{\prime}=2m\).) Note that \(\mathbf{1}_{K_{t}K_{t^{\prime}}}\) is the characteristic function of a subset of \(\mathrm{U}(W_{0})\) that is not a subgroup, unless \(K_{t}=K_{t^{\prime}}\).
Note that a function \(\varphi\in C_{c}^{\infty}(K\backslash G/K^{\prime})\) induces a natural map \(C_{c}^{\infty}(G/K)\to C_{c}^{\infty}(G/K^{\prime})\) which determines the function uniquely. Explicitly, a given \(\varphi\) sends \(f\in C_{c}^{\infty}(G/K)\) to \(f*\varphi\), which lies in \(C_{c}^{\infty}(G/K^{\prime})\). Similarly, a correspondence between the two sets \(G/K\) and \(G/K^{\prime}\) also induces a natural map \(C_{c}^{\infty}(G/K)\to C_{c}^{\infty}(G/K^{\prime})\). In this way we will freely switch between functions and correspondences. We can identify the function \(\varphi_{[t,t^{\prime}]}\) in terms of "Hecke correspondences of moduli spaces of vertex lattices", as follows. Let
\[\mathbb{N}^{[t]}=\mathbb{N}^{[t]}_{W}=\{\Lambda\subset W_{0}\mid\Lambda\text { is a vertex lattice of type }t\}.\]
Note that there is the diagonal action of \(\mathrm{U}(W_{0})\) on \(\mathbb{N}^{[t]}\times\mathbb{N}^{[t^{\prime}]}\). Then \(\mathcal{H}(K_{t}\backslash\mathrm{U}(W_{0})/K_{t^{\prime}})\) can be identified with the space of functions on \(\mathbb{N}^{[t]}\times\mathbb{N}^{[t^{\prime}]}\) which are invariant under \(\mathrm{U}(W_{0})\) and have compact support modulo this action.
We also introduce
\[\mathbb{N}^{[t,t^{\prime}]}=\begin{cases}\{(\Lambda_{t},\Lambda_{t^{\prime}}) \in\mathbb{N}^{[t]}\times\mathbb{N}^{[t^{\prime}]}\mid\Lambda_{t}\subset \Lambda_{t^{\prime}}\}&\text{if }t^{\prime}\leq t\\ \{(\Lambda_{t},\Lambda_{t^{\prime}})\in\mathbb{N}^{[t]}\times\mathbb{N}^{[t^{ \prime}]}\mid\Lambda_{t^{\prime}}\subset\Lambda_{t}\}&\text{if }t\leq t^{\prime}.\end{cases}\]
We obtain a diagram,
(4.1.2)
or, equivalently, the map
\[\pi\colon\mathbb{N}^{[t,t^{\prime}]}\longrightarrow\mathbb{N}^{[t]}\times \mathbb{N}^{[t^{\prime}]}.\]
Then
\[\varphi_{[t,t^{\prime}]}=\pi_{*}(\mathrm{char}_{\mathbb{N}^{[t,t^{\prime}]}}). \tag{4.1.3}\]
Note that \(\varphi_{[t,t^{\prime}]}\) does not lie in the spherical Hecke algebra. To obtain functions in \(\mathcal{H}_{K}\), we use convolution.
**Definition 4.1.1**.: The atomic function associated to an even integer \(t\) with \(0\leq t\leq n\) is the following element in the spherical Hecke algebra,
\[\varphi_{t}=\varphi_{[0,t]}*\varphi_{[t,0]}=\mathbf{1}_{K_{0}K_{t}}*\mathbf{1}_ {K_{t}K_{0}}\in\mathcal{H}_{K}. \tag{4.1.4}\]
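Note that for \(t=0\) the atomic function is the unit element: \(\varphi_{0}=\mathbf{1}_{K_{0}K_{0}}*\mathbf{1}_{K_{0}K_{0}}=\mathbf{1}_{K}\) (with the usual normalization \(\operatorname{vol}(K)=1\) of the Haar measure).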
Consider the composite of correspondences,
\[\mathbb{T}_{\leq t}:=\mathbb{N}^{[0,t]}\circ\mathbb{N}^{[t,0]}=\{(\Lambda_{0}, \Lambda_{t},\Lambda_{0}^{\prime})\in\mathbb{N}^{[0]}\times\mathbb{N}^{[t]} \times\mathbb{N}^{[0]}\mid\Lambda_{t}\subset\Lambda_{0}\cap\Lambda_{0}^{\prime}\}.\]
We obtain a diagram with a cartesian square,
(4.1.5)
Consider
\[\pi:\mathbb{T}_{\leq t}\longrightarrow\mathbb{N}^{[0]}\times\mathbb{N}^{[0]}.\]
Then, just as in (4.1.3),
\[\varphi_{t}=\pi_{*}(\operatorname{char}_{\mathbb{T}_{\leq t}}). \tag{4.1.6}\]
**Remark 4.1.2**.: As is well-known, the spherical Hecke algebra is commutative. In particular, \(\varphi_{t}*\varphi_{t^{\prime}}=\varphi_{t^{\prime}}*\varphi_{t}\). Comparing the supports of these two functions, we get the equality of the following two subsets of \(\mathbb{N}^{[0]}\times\mathbb{N}^{[0]}\).
We say that two selfdual lattices \(\Lambda_{1}\) and \(\Lambda_{2}\) are related by a correspondence of type \(t\), denoted \(\Lambda_{1}\Longleftrightarrow^{t}\Lambda_{2}\), if there exists a vertex lattice \(M\) of type \(t\) contained in \(\Lambda_{1}\cap\Lambda_{2}\). In other words, \((\Lambda_{1},\Lambda_{2})\in\pi(\mathbb{T}_{\leq t})\). The two sets in question are
\[\begin{split}&\{(\Lambda_{1},\Lambda_{2})\mid\exists\Lambda \in\mathbb{N}^{[0]}\text{ with }\Lambda_{1}\Longleftrightarrow^{t}\Lambda\text{ and }\Lambda \Longleftrightarrow^{t^{\prime}}\Lambda_{2}\}.\\ &\{(\Lambda_{1},\Lambda_{2})\mid\exists\Lambda\in\mathbb{N}^{[0]} \text{ with }\Lambda_{1}\Longleftrightarrow^{t^{\prime}}\Lambda\text{ and }\Lambda \Longleftrightarrow^{t}\Lambda_{2}\}.\end{split} \tag{4.1.7}\]
Here is an elementary proof of the equality of these two sets that was indicated to us by B. Howard. It is based on Gelfand's trick of proving the commutativity of the Hecke algebra by constructing an anti-automorphism of \(\operatorname{U}(W_{0})\) inducing the identity on \(K\backslash\operatorname{U}(W_{0})/K\). Fix a basis of \(W_{0}\) such that the hermitian form is given by the antidiagonal unit matrix \(J\), and let \(\Xi_{0}\) be the standard lattice. Define the anti-automorphism \(\tau\) of \(\operatorname{U}(W_{0})\) by \(\tau(g)=J\sigma(g^{-1})J\). Then \(\tau\) induces the identity on \(A(F_{0})\) and hence induces the identity on \(\mathcal{H}_{K}\) (Cartan decomposition). On the other hand, one easily checks that under the identification \(K\backslash\operatorname{U}(W_{0})/K=\operatorname{U}(W_{0})\backslash( \mathbb{N}^{[0]}\times\mathbb{N}^{[0]})\), the map \(\tau\) is given by \((\Lambda_{0},\Lambda_{0}^{\prime})\mapsto(\sigma(\Lambda_{0}^{\prime}),\sigma (\Lambda_{0}))\). It follows that for any \((\Lambda_{0},\Lambda_{0}^{\prime})\in\mathbb{N}^{[0]}\times\mathbb{N}^{[0]}\) there exists \(\gamma\in\operatorname{U}(W_{0})(F_{0})\) such that \((\sigma(\Lambda_{0}^{\prime}),\sigma(\Lambda_{0}))=\gamma(\Lambda_{0}, \Lambda_{0}^{\prime})\). Let us now check the claim. By symmetry, it suffices to show that the first of the two sets in (4.1.7) is contained in the second. Let \((\Lambda_{1},\Lambda_{2})\) be in the first set, i.e., there is \(\Lambda\in\mathbb{N}^{[0]}\) with \(\Lambda_{1}\Longleftrightarrow^{t}\Lambda\) and \(\Lambda\Longleftrightarrow^{t^{\prime}}\Lambda_{2}\). Let \((\sigma(\Lambda_{2}),\sigma(\Lambda_{1}))=\gamma(\Lambda_{1},\Lambda_{2})\). Then \(\sigma(\Lambda_{2})\Longleftrightarrow^{t}\gamma\Lambda\) and \(\gamma\Lambda\Longleftrightarrow^{t^{\prime}}\sigma(\Lambda_{1})\). It follows that \((\Lambda_{1},\Lambda_{2})\) is in the second set, with intermediary lattice \(\Lambda^{\prime}=\sigma(\gamma\Lambda)\).
Besides \(\mathbb{T}_{\leq t}\), we also consider
\[\mathbb{T}_{t} =\{(\Lambda_{0},\Lambda_{0}^{\prime})\in\mathbb{N}^{[0]}\times \mathbb{N}^{[0]}\mid\Lambda_{0}\cap\Lambda_{0}^{\prime}\text{ is a vertex lattice of type }t\}\] \[=\{(\Lambda_{0},\Lambda_{t},\Lambda_{0}^{\prime})\in\mathbb{N}^{[0 ]}\times\mathbb{N}^{[t]}\times\mathbb{N}^{[0]}\mid\Lambda_{t}=\Lambda_{0}\cap \Lambda_{0}^{\prime}\},\]
with its natural map
\[\pi:\mathbb{T}_{t}\longrightarrow\mathbb{N}^{[0]}\times\mathbb{N}^{[0]}.\]
Recall \(f^{[t]}:=\mathbf{1}_{K\varpi^{(1^{t/2},\,0^{n-t},\,(-1)^{t/2})}K}\), cf. (3.5.1). Then
\[f^{[t]}=\pi_{*}(\mathrm{char}_{\mathbb{T}_{t}}). \tag{4.1.8}\]
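In particular, \(f^{[0]}=\mathbf{1}_{K}\) and \(f^{[2]}=\mathbf{1}_{K\varpi^{(1,0^{n-2},-1)}K}\).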
**Proposition 4.1.3**.: _The spherical Hecke algebra \(\mathcal{H}_{K}\) is a polynomial algebra in the atomic functions \(\varphi_{t}\), as \(t\) runs through all even integers \(0\leq t\leq n\),_
\[\mathcal{H}_{K}=\mathbb{Q}[\varphi_{2},\varphi_{4},\cdots,\varphi_{2m}].\]
_More precisely, the elements \(\{\varphi_{t}\}_{0<t\leq n,t\equiv 0\bmod 2}\) are expressed as a linear combination of \(\{f^{[t]}\}_{0<t\leq n,t\equiv 0\bmod 2}\) by an upper-triangular matrix with all diagonal entries equal to one._
Proof.: From the "moduli interpretation", we can partition \(\mathbb{T}_{\leq t}\) into a disjoint union
\[\mathbb{T}_{\leq t}=\coprod_{\begin{subarray}{c}t^{\prime}\leq t,\\ t^{\prime}\equiv 0\mod 2\end{subarray}}\mathbb{T}_{\leq t}^{[t^{\prime}]},\]
where
\[\mathbb{T}_{\leq t}^{[t^{\prime}]}=\{(\Lambda_{0},\Lambda_{t},\Lambda_{0}^{ \prime})\in\mathbb{T}_{\leq t}\mid\ \Lambda_{0}\cap\Lambda_{0}^{\prime}\text{ has type }t^{\prime}\}.\]
It follows that
\[\varphi_{t}=\sum_{\begin{subarray}{c}t^{\prime}\leq t,\\ t^{\prime}\equiv 0\mod 2\end{subarray}}m(t^{\prime},t)f^{[t^{\prime}]}, \tag{4.1.9}\]
where \(m(t^{\prime},t)\) is the number of vertex lattices of type \(t\) contained in a given vertex lattice of type \(t^{\prime}\),
\[m(t^{\prime},t)=\#\{\Lambda_{t}\in\mathbb{N}^{[t]}\mid\Lambda_{t}\subset \Lambda_{t^{\prime}}\}.\]
That is, \(m(t^{\prime},t)\) equals the (constant) degree of the fiber of the natural projection map \(\mathbb{N}^{[t,t^{\prime}]}\rightarrow\mathbb{N}^{[t^{\prime}]}\). Clearly
\[m(t,t)=1.\]
This shows that the basis \(\{\varphi_{t}\}_{0<t\leq n,t\equiv 0\bmod 2}\) differs from \(\{f^{[t]}\}_{0<t\leq n,t\equiv 0\bmod 2}\) by an upper-triangular matrix with all diagonal entries equal to one. This implies the desired assertion.
**Remark 4.1.4**.: One can determine the coefficients \(m(t^{\prime},t)\) explicitly. See [22, Lem. B.2.4] for the formula expressing \(\varphi_{2m}\) in terms of the \(f^{[t]}\).
**Definition 4.1.5**.: We call an element \(\varphi\in\mathcal{H}_{K}\) monomial if it can be expressed as a monomial in the atomic functions \(\{\varphi_{2},\varphi_{4},\cdots,\varphi_{2m}\}\) (in a necessarily unique way).
It is clear that an arbitrary element \(\varphi\in\mathcal{H}_{K}\) can be expressed as a linear combination of monomial elements (in a unique way).
### The product situation
We also apply the preceding considerations to a product situation. Let \(W_{0}\) be a split space of dimension \(n+1\), and let \(W_{0}^{\flat}=\langle u\rangle^{\perp}\), where the special vector \(u\in W_{0}\) has unit length. Then \(W_{0}^{\flat}\oplus W_{0}\) is also a split space. The Hecke algebra \(\mathcal{H}_{K^{\flat}\times K}\) for \(\operatorname{U}(W_{0}^{\flat})\times\operatorname{U}(W_{0})\) can be identified with the tensor product of algebras \(\mathcal{H}_{K^{\flat}}\otimes\mathcal{H}_{K}\). By an atomic, resp. a monomial, element in \(\mathcal{H}_{K^{\flat}\times K}\) we mean a pure tensor \(\widetilde{\varphi}=\varphi^{\flat}\otimes\varphi\), where either \(\varphi^{\flat}\) is atomic, resp. monomial, and \(\varphi\) is the unit element, or \(\varphi\) is atomic, resp. monomial, and \(\varphi^{\flat}\) is the unit element. A monomial element in \(\mathcal{H}_{K^{\flat}\times K}\) is a product of atomic elements, and this in a unique way. Any element in \(\mathcal{H}_{K^{\flat}\times K}\) is a linear combination of monomial elements in a unique way.
Let \(\varphi_{t,t^{\prime}}=\varphi_{t}\otimes\varphi_{t^{\prime}}\) be an atomic function. Hence either \(t=0\) or \(t^{\prime}=0\). Then
\[\varphi_{t,t^{\prime}}=\pi_{*}(\operatorname{char}_{\mathbb{N}_{W^{\flat},\leq t}\times\mathbb{N}_{W,\leq t^{\prime}}}). \tag{4.2.1}\]
Here we use the diagram
(4.2.2)
and the corresponding map
\[\pi\colon\mathbb{N}_{W^{\flat},\leq t}\times\mathbb{N}_{W,\leq t^{\prime}} \longrightarrow(\mathbb{N}_{W^{\flat}}^{[0]}\times\mathbb{N}_{W}^{[0]})\times (\mathbb{N}_{W^{\flat}}^{[0]}\times\mathbb{N}_{W}^{[0]}).\]
The following lemma holds for all functions \(\varphi\in C_{c}^{\infty}(G_{W_{0}})\) (and is used earlier in the definition of transfer). We include a proof for the special case of functions in the Hecke algebra, in order to illustrate the idea that will be used in the proof of Proposition 6.1.1.
**Lemma 4.2.1**.: _Let \(\varphi\in\mathcal{H}_{K^{\flat}}\otimes\mathcal{H}_{K}\). Then \(\operatorname{Orb}(g,\varphi)\) is finite for every \(g\in G_{W_{0}}(F_{0})_{\mathrm{rs}}\)._
Proof.: It suffices to consider monomial functions and elements of the form \((1,g)\) with \(g\in\operatorname{U}(W_{0})_{\mathrm{rs}}\). We can bound \(\varphi\) by a constant multiple of a function of the following form
\[\Phi_{N}=\mathbf{1}_{\operatorname{U}(W_{0}^{\flat})\cap\varpi^{-N} \operatorname{End}_{O_{F}}(\Xi_{0}^{\flat})}\otimes\mathbf{1}_{\operatorname{ U}(W_{0})\cap\varpi^{-N}\operatorname{End}_{O_{F}}(\Xi_{0})}\]
for some large integer \(N\), and hence it suffices to consider such functions \(\Phi_{N}\). In terms of lattice counting, we need to show the finiteness of pairs of self-dual lattices
\[(\Lambda^{\flat},\Lambda),\quad(\Lambda^{\prime\flat},\Lambda^{\prime})\]
such that \(\Lambda=\Lambda^{\flat}\oplus\langle u\rangle,\Lambda^{\prime}=\Lambda^{ \prime\flat}\oplus\langle u\rangle\), and \(\Lambda^{\flat}\subset\varpi^{-N}\Lambda^{\prime\flat}\), and \(\Lambda\subset\varpi^{-N}g\Lambda^{\prime}\). Note that \(\Lambda^{\flat}\) (resp. \(\Lambda^{\prime\flat}\)) is determined by \(\Lambda\) (resp. \(\Lambda^{\prime}\)). The self-duality also implies \(\Lambda^{\flat}\supset\varpi^{N}\Lambda^{\prime\flat}\) and \(\Lambda\supset\varpi^{N}g\Lambda^{\prime}\).
We claim that \(\varpi^{(2i-2)N}g^{i-1}u\in\Lambda^{\prime}\) and \(\varpi^{(2i-1)N}g^{i}u\in\Lambda\) for every \(i\geq 1\). To show the claim it suffices to show
1. \(u\in\Lambda^{\prime}\),
2. \(\varpi^{(2i-2)N}g^{i-1}u\in\Lambda^{\prime}\Longrightarrow\varpi^{(2i-1)N}g^{i}u \in\Lambda\) for every \(i\geq 1\),
3. \(\varpi^{(2i-1)N}g^{i}u\in\Lambda\Longrightarrow\varpi^{2iN}g^{i}u\in\Lambda^{ \prime}\) for every \(i\geq 1\).
Part (1): clear.
Part (2): if \(\varpi^{(2i-2)N}g^{i-1}u\in\Lambda^{\prime}\) then, from \(\Lambda\supset\varpi^{N}g\Lambda^{\prime}\), it follows that \(\varpi^{N}g(\varpi^{(2i-2)N}g^{i-1}u)=\varpi^{(2i-1)N}g^{i}u\in\Lambda\).
Part (3): if \(\varpi^{(2i-1)N}g^{i}u\in\Lambda\) then, from \(\Lambda^{\flat}\subset\varpi^{-N}\Lambda^{\prime\flat}\), it follows that \(\Lambda^{\prime}\supset\varpi^{N}\Lambda\) and hence \(\varpi^{N}(\varpi^{(2i-1)N}g^{i}u)=\varpi^{2iN}g^{i}u\in\Lambda^{\prime}\).
From the claim it follows that \(\Lambda\) contains the lattice \(\varpi^{(2n-1)N}\langle u,gu,\ldots,g^{n}u\rangle\), which has full rank by the regular semisimplicity of \(g\), cf. [28, §2.4]. This shows the finiteness of possible \(\Lambda\), and hence of \(\Lambda^{\prime}\). This completes the proof.
## 5. RZ spaces and their Hecke correspondences
In this section we define various Rapoport-Zink spaces (RZ spaces) with certain parahoric levels, and use them to define Hecke correspondences.
### Rapoport-Zink spaces \(\mathcal{N}_{n}\) of self-dual level
Let \(S\) be a \(\operatorname{Spf}O_{\bar{F}}\)-scheme. Consider a triple \((X,\iota,\lambda)\) where
(i) \(X\) is a formal \(\varpi\)-divisible \(O_{F_{0}}\)-module over \(S\) of relative height \(2n\) and dimension \(n\),
(ii) \(\iota:O_{F}\to\operatorname{End}(X)\) is an action of \(O_{F}\) extending the \(O_{F_{0}}\)-action and satisfying the Kottwitz condition of signature \((1,n-1)\): for all \(a\in O_{F}\), the characteristic polynomial of \(\iota(a)\) on \(\operatorname{Lie}X\) is equal to \((T-a)(T-\sigma(a))^{n-1}\in\mathcal{O}_{S}[T]\),
(iii) \(\lambda:X\to X^{\vee}\) is a principal polarization on \(X\) whose Rosati involution induces the automorphism \(\sigma\) on \(O_{F}\) via \(\iota\).
Up to \(O_{F}\)-linear quasi-isogeny compatible with polarizations, there is a unique such triple \((\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})\) over \(S=\operatorname{Spec}\bar{k}\). Let \(\mathcal{N}_{n}=\mathcal{N}_{F/F_{0},n}\) be the (relative) _unitary Rapoport-Zink space of self-dual level_, which is a formal scheme over \(\operatorname{Spf}O_{\bar{F}}\) representing the functor sending each \(S\) to the set of isomorphism classes of tuples \((X,\iota,\lambda,\rho)\), where the _framing_\(\rho:X\times_{S}\bar{S}\to\mathbb{X}\times_{\operatorname{Spec}\bar{k}}\bar{S}\) is an \(O_{F}\)-linear quasi-isogeny of height \(0\) such that \(\rho^{*}((\lambda_{\mathbb{X}})_{\bar{S}})=\lambda_{\bar{S}}\). Here \(\bar{S}\coloneqq S_{\bar{k}}\) is the special fiber.
The Rapoport-Zink space \(\mathcal{N}_{n}\) is formally locally of finite type and formally smooth of relative dimension \(n-1\) over \(\operatorname{Spf}O_{\bar{F}}\) ([30], [23, Prop. 1.3]).
### The hermitian space \(\mathbb{V}_{n}\)
Let \(n\geq 1\) be an integer. Let \(\mathbb{E}\) be the formal \(O_{F_{0}}\)-module of relative height \(2\) and dimension \(1\) over \(\operatorname{Spec}\bar{k}\). Then \(D\coloneqq\operatorname{End}^{\circ}_{O_{F_{0}}}(\mathbb{E}):=\operatorname{ End}_{O_{F_{0}}}(\mathbb{E})\otimes\mathbb{Q}\) is the quaternion division algebra over \(F_{0}\). We fix an \(F_{0}\)-embedding \(\iota_{\mathbb{E}}:F\to D\), which makes \(\mathbb{E}\) into a formal \(O_{F}\)-module of relative height \(1\). We fix an \(O_{F_{0}}\)-linear principal polarization \(\lambda_{\mathbb{E}}:\mathbb{E}\stackrel{{\sim}}{{\to}}\mathbb{E} ^{\vee}\). Then \((\mathbb{E},\iota_{\mathbb{E}},\lambda_{\mathbb{E}})\) is a hermitian \(O_{F}\)-module of signature \((1,0)\). We have \(\mathcal{N}_{1}\simeq\operatorname{Spf}O_{\bar{F}}\) and there is a unique lifting (_the canonical lifting_) \(\mathcal{E}\) of the formal
\(O_{F}\)-module \(\mathbb{E}\) over \(\operatorname{Spf}O_{\bar{F}}\), equipped with its \(O_{F}\)-action \(\iota_{\mathcal{E}}\), its framing \(\rho_{\mathcal{E}}:\mathcal{E}_{\bar{k}}\xrightarrow{\sim}\mathbb{E}\), and its principal polarization \(\lambda_{\mathcal{E}}\) lifting \(\rho_{\mathcal{E}}^{*}(\lambda_{\mathbb{E}})\). Define \(\overline{\mathbb{E}}\) to be the same \(O_{F_{0}}\)-module as \(\mathbb{E}\) but with \(O_{F}\)-action given by \(\iota_{\overline{\mathbb{E}}}\coloneqq\iota_{\mathbb{E}}\circ\sigma\), and \(\lambda_{\overline{\mathbb{E}}}\coloneqq\lambda_{\mathbb{E}}\), and similarly define \(\bar{\mathcal{E}}\) and \(\lambda_{\bar{\mathcal{E}}}\).
Denote by \(\mathbb{V}=\mathbb{V}_{n}:=\operatorname{Hom}_{O_{F}}^{\circ}(\overline{ \mathbb{E}},\mathbb{X})\) the space of special quasi-homomorphisms. Then \(\mathbb{V}\) carries an \(F/F_{0}\)-hermitian form: for \(x,y\in\mathbb{V}\), the pairing \((x,y)\in F\) is given by the composition
\[(\overline{\mathbb{E}}\xrightarrow{x}\mathbb{X}\xrightarrow{\lambda_{\mathbb{ X}}}\mathbb{X}^{\vee}\xrightarrow{y^{\vee}}\overline{\mathbb{E}}^{\vee} \xrightarrow{\lambda_{\mathbb{E}}^{-1}}\overline{\mathbb{E}})\in\operatorname {End}_{O_{F}}^{\circ}(\overline{\mathbb{E}})=\iota_{\overline{\mathbb{E}}}(F) \simeq F.\]
The hermitian space \(\mathbb{V}\) is the unique (up to isomorphism) non-degenerate non-split \(F/F_{0}\)-hermitian space of dimension \(n\). The unitary group \(\operatorname{U}(\mathbb{V})(F_{0})\) acts on the framing hermitian \(O_{F}\)-module \((\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})\) (via the identification in [17, Lem. 3.9]) and hence acts on the Rapoport-Zink space \(\mathcal{N}_{n}\) via \(g(X,\iota,\lambda,\rho)=(X,\iota,\lambda,g\circ\rho)\) for \(g\in\operatorname{U}(\mathbb{V})(F_{0})\).
To a non-zero \(u\in\mathbb{V}\), there is the associated special divisor \(\mathcal{Z}(u)\) on \(\mathcal{N}_{n}\). It is the closed formal subscheme of \(\mathcal{N}_{n}\) of points \((X,\iota,\lambda,\rho)\) where the quasi-homomorphism \(u:\overline{\mathbb{E}}\to\mathbb{X}\) lifts to a homomorphism \(\bar{\mathcal{E}}\to X\), cf. [17]. More generally, for any finitely generated subgroup \(L\subset\mathbb{V}\), there is the associated _special cycle_\(\mathcal{Z}(L)\), the locus where the quasi-homomorphisms \(u:\overline{\mathbb{E}}\to\mathbb{X}\) lift to homomorphisms \(\bar{\mathcal{E}}\to X\), for all \(u\in L\).
### RZ spaces \(\mathcal{N}_{n}^{[t]}\)
Let \(t\) be an even integer with \(0\leq t\leq n\). Now we consider triples \((Y,\iota_{Y},\lambda_{Y})\) as in subsection 5.1 (with \(Y\) instead of \(X\)), except that we replace the condition on the polarization to be principal by the condition that the polarization is of degree \(q^{2t}\) and satisfies \(\ker(\lambda_{Y})\subset Y[\varpi]\). Again, we fix a framing object \((\mathbb{X}_{t},\iota_{\mathbb{X}_{t}},\lambda_{\mathbb{X}_{t}})\) over \(\operatorname{Spec}\bar{k}\) (which is again unique up to \(O_{F}\)-linear quasi-isogeny compatible with polarizations). We define the RZ-space \(\mathcal{N}_{n}^{[t]}\) as the space of tuples \((Y,\iota_{Y},\lambda_{Y},\rho_{Y})\), where \(\rho_{Y}\) is a framing with \((\mathbb{X}_{t},\iota_{\mathbb{X}_{t}},\lambda_{\mathbb{X}_{t}})\). It is an RZ space of level equal to the stabilizer of a vertex lattice of type \(t\). Note that \(\mathcal{N}_{n}^{[0]}=\mathcal{N}_{n}\). The Rapoport-Zink space \(\mathcal{N}_{n}^{[t]}\) is formally locally of finite type and regular of dimension \(n\) with semi-stable reduction over \(\operatorname{Spf}O_{\bar{F}}\) ([30], [12]).
### RZ spaces \(\mathcal{N}_{n}^{[t,t^{\prime}]}\)
Let \(t,t^{\prime}\) be even integers. We introduce the RZ-space \(\mathcal{N}_{n}^{[t,t^{\prime}]}\). Suppose first that \(t^{\prime}\leq t\). We fix an \(O_{F_{0}}\)-linear isogeny of degree \(q^{t-t^{\prime}}\) with kernel killed by \(\varpi\),
\[\varphi_{t,t^{\prime}}:\mathbb{X}_{t}\longrightarrow\mathbb{X}_{t^{\prime}}, \tag{5.4.1}\]
such that \(\varphi^{*}(\lambda_{\mathbb{X}_{t^{\prime}}})=\lambda_{\mathbb{X}_{t}}\). Then \(\mathcal{N}_{n}^{[t,t^{\prime}]}\) classifies triples
\[\big{(}(X_{1},\iota_{1},\lambda_{1},\rho_{1}),(X_{2},\iota_{2},\lambda_{2},\rho _{2}),\varphi:X_{1}\longrightarrow X_{2}\big{)},\]
where \((X_{1},\iota_{1},\lambda_{1},\rho_{1})\in\mathcal{N}^{[t]}\) and \((X_{2},\iota_{2},\lambda_{2},\rho_{2})\in\mathcal{N}^{[t^{\prime}]}\), and where \(\varphi\) is an isogeny lifting \(\varphi_{t,t^{\prime}}:\mathbb{X}_{t}\to\mathbb{X}_{t^{\prime}}\). Then \(\varphi\) is uniquely determined, is of degree \(q^{t-t^{\prime}}\) and satisfies \(\ker\varphi\subset X_{1}[\varpi]\); also, \(\varphi\) preserves the polarizations.
When \(t\leq t^{\prime}\), we define \(\mathcal{N}_{n}^{[t,t^{\prime}]}\) to be the _transpose_\({}^{t}\mathcal{N}_{n}^{[t^{\prime},t]}\) of \(\mathcal{N}_{n}^{[t^{\prime},t]}\), i.e., \(\mathcal{N}_{n}^{[t,t^{\prime}]}\) classifies triples
\[\big{(}(X_{1},\iota_{1},\lambda_{1},\rho_{1}),(X_{2},\iota_{2},\lambda_{2},\rho _{2}),\varphi:X_{2}\longrightarrow X_{1}\big{)},\]
where \((X_{1},\iota_{1},\lambda_{1},\rho_{1})\in\mathcal{N}^{[t]}\) and \((X_{2},\iota_{2},\lambda_{2},\rho_{2})\in\mathcal{N}^{[t^{\prime}]}\), and where \(\varphi\) is an isogeny lifting \(\varphi_{t^{\prime},t}\colon\mathbb{X}_{t^{\prime}}\to\mathbb{X}_{t}\). Only the cases \((t,t^{\prime})=(0,t^{\prime})\) and \((t,t^{\prime})=(t,0)\) will be effectively used in this paper. We view \(\mathcal{N}^{[t,t^{\prime}]}_{n}\) as a correspondence via the two natural projections, which is analogous to the corresponding diagram (4.1.2) of lattice correspondences,
(5.4.2)
The two projection maps are both proper (representable by a projective morphism). Note that \(\mathcal{N}^{[t,t^{\prime}]}_{n}\) is an RZ space of level equal to the joint stabilizer of two vertex lattices of type \(t,t^{\prime}\). It is formally locally of finite type and regular of dimension \(n\) with semi-stable reduction over \(\operatorname{Spf}O_{\bar{F}}\) ([30], [12]).
### The Hecke correspondence \(\mathcal{T}^{\leq t}_{n}\)
Define \(\mathcal{T}^{\leq t}_{n}\) to be the formal scheme classifying tuples
\[\big{(}(Y,\iota_{Y},\lambda_{Y},\rho_{Y}),(X_{1},\iota_{1},\lambda_{1},\rho_{ 1}),(X_{2},\iota_{2},\lambda_{2},\rho_{2}),\varphi_{i}:Y\longrightarrow X_{i},i=1,2\big{)}\]
such that \((Y,X_{i},\varphi_{i})\in\mathcal{N}^{[t,0]}_{n}\). In other words, \(\mathcal{T}^{\leq t}_{n}\) is the composition of correspondences,
\[\mathcal{T}^{\leq t}_{n}=\mathcal{N}^{[0,t]}_{n}\circ\mathcal{N}^{[t,0]}_{n},\]
cf. subsection 9.2. Explicitly, this means that we obtain the following diagram with a cartesian square in the middle,
(5.5.1)
Note that all formal schemes in the lower two rows are regular and the maps are relatively representable by morphisms of projective schemes. Using the calculus in the appendix, we may therefore define for closed formal subschemes \(A\subset\mathcal{N}^{[0]}_{n}\), resp. \(B\subset\mathcal{N}^{[t]}_{n}\),
\[\mathbb{T}^{A,+}_{t}=(\mathcal{N}^{[0,t]}_{n})_{*}\colon K^{A}(\mathcal{N}^{[ 0]}_{n})\longrightarrow K^{\mathcal{N}^{[0,t]}_{n}(A)}(\mathcal{N}^{[t]}_{n} ),\quad\mathbb{T}^{B,-}_{t}=(\mathcal{N}^{[t,0]}_{n})_{*}\colon K^{B}( \mathcal{N}^{[t]}_{n})\longrightarrow K^{\mathcal{N}^{[t,0]}_{n}(B)}( \mathcal{N}^{[0]}_{n}).\]
This defines the map
\[\mathbb{T}^{A}_{t}=\mathbb{T}^{\mathcal{N}^{[0,t]}_{n}(A),-}_{t}\circ\mathbb{T }^{A,+}_{t}\colon K^{A}(\mathcal{N}_{n})\longrightarrow K^{\mathcal{T}^{\leq t }(A)}(\mathcal{N}_{n}). \tag{5.5.2}\]
Recall here that \(\mathcal{N}_{n}=\mathcal{N}^{[0]}_{n}\).
We state the following conjecture. Note that this conjecture is empty for \(n=2\) and \(n=3\).
**Conjecture 5.5.1**.: _Let \(t\neq t^{\prime}\). Then for any closed formal subscheme \(A\) of \(\mathcal{N}_{n}\) we have an equality \(|\mathcal{T}^{\leq t}(\mathcal{T}^{\leq t^{\prime}}(A))|=|\mathcal{T}^{\leq t^{ \prime}}(\mathcal{T}^{\leq t}(A))|\), where the notation means that the two closed formal subschemes \(\mathcal{T}^{\leq t}(\mathcal{T}^{\leq t^{\prime}}(A))\) and \(\mathcal{T}^{\leq t^{\prime}}(\mathcal{T}^{\leq t}(A))\) of \(\mathcal{N}_{n}\) agree up to a locally nilpotent ideal sheaf. Furthermore, the two maps_
\[\mathbb{T}_{t}^{\mathcal{T}^{\leq t^{\prime}}(A)}\circ\mathbb{T}_{t^{\prime}}^ {A}\colon K^{A}(\mathcal{N}_{n})\longrightarrow K^{\mathcal{T}^{\leq t}( \mathcal{T}^{\leq t^{\prime}}(A))}(\mathcal{N}_{n}),\quad\mathbb{T}_{t^{ \prime}}^{\mathcal{T}^{\leq t}(A)}\circ\mathbb{T}_{t}^{A}\colon K^{A}( \mathcal{N}_{n})\longrightarrow K^{\mathcal{T}^{\leq t^{\prime}}(\mathcal{T}^{\leq t}(A))}(\mathcal{N}_{n})\]
_are identical modulo torsion._
**Remark 5.5.2**.: The conjectured equality \(|\mathcal{T}^{\leq t}(\mathcal{T}^{\leq t^{\prime}}(A))|=|\mathcal{T}^{\leq t ^{\prime}}(\mathcal{T}^{\leq t}(A))|\) is analogous to the identity of the two subsets (4.1.7) of \(\mathbb{N}^{[0]}\times\mathbb{N}^{[0]}\) in Remark 4.1.2.
For an atomic function \(\varphi_{t}\) (cf. Definition 4.1.1) with \(0\leq t\leq n\), we define the corresponding Hecke operator as
\[\mathbb{T}_{\varphi_{t}}^{A}:=\mathbb{T}_{t}^{A}\colon K^{A}(\mathcal{N}_{n}) \longrightarrow K^{\mathcal{T}^{\leq t}(A)}(\mathcal{N}_{n}).\]
The homomorphisms obtained in this way will be called _atomic Hecke operators_. By Proposition 4.1.3, the atomic functions generate the spherical Hecke algebra as a polynomial algebra. Following [11, Lem. 1.4], we introduce
\[K^{\sigma}(\mathcal{N}_{n})_{\mathbb{Q}}=\oplus_{A\subset\mathcal{N}_{n}}K^{A}( \mathcal{N}_{n})\otimes\mathbb{Q},\]
where \(A\) runs through all closed formal subschemes defined by radical ideal sheaves. Then we obtain for every \(t\) an endomorphism \(\mathbb{T}_{\varphi_{t}}\) of \(K^{\sigma}(\mathcal{N}_{n})_{\mathbb{Q}}\) by mapping an element \((\gamma_{A})_{A}\) to \((\beta_{B})_{B}\), where
\[\beta_{B}=\begin{cases}\mathbb{T}_{\varphi_{t}}^{A}(\gamma_{A})&\text{ if }\,B=\mathcal{T}^{\leq t}(A)\\ 0&\text{ if }B\text{ is not of this form.}\end{cases}\]
Denoting by \(\widetilde{\mathcal{H}}_{K}\) the free \(\mathbb{Q}\)-algebra with generators \(X_{1},\ldots,X_{m}\), we therefore obtain by taking compositions a homomorphism of algebras
\[\mathbb{T}\colon\widetilde{\mathcal{H}}_{K}\longrightarrow\operatorname{End}( K^{\sigma}(\mathcal{N}_{n})_{\mathbb{Q}}). \tag{5.5.3}\]
Explicitly, if \(\varphi=\varphi_{1}*\varphi_{2}*\cdots*\varphi_{r}\) is a monomial in the atomic Hecke functions \(\varphi_{1}=\varphi_{t_{1}},\varphi_{2}=\varphi_{t_{2}},\ldots,\varphi_{r}= \varphi_{t_{r}}\), then \(\mathbb{T}_{\varphi}^{A}=\mathbb{T}_{t_{r}}^{\mathcal{T}^{\leq t_{r-1}}\circ\mathcal{T}^{\leq t_{r-2}}\circ\cdots\circ\mathcal{T}^{\leq t_{1}}(A)}\circ\cdots\circ\mathbb{T}_{t_{2}}^{\mathcal{T}^{\leq t_{1}}(A)}\circ\mathbb{T}_{t_{1}}^{A}\). These endomorphisms of \(K^{\sigma}(\mathcal{N}_{n})_{\mathbb{Q}}\) will be called _monomial_ Hecke operators.
Conjecture 5.5.1 implies that this homomorphism factors through a homomorphism of the Hecke algebra,
\[\mathbb{T}\colon\mathcal{H}_{K}\longrightarrow\operatorname{End}(K^{\sigma}( \mathcal{N}_{n})_{\mathbb{Q}}). \tag{5.5.4}\]
**Remark 5.5.3**.: The homomorphism property of (5.5.3) would not hold for \(n>2\) if we had defined \(\mathbb{T}_{\varphi_{1}*\varphi_{2}*\cdots*\varphi_{r}}\) as the map induced by the composed geometric correspondence \(\mathcal{T}^{\leq t_{1}}\circ\mathcal{T}^{\leq t_{2}}\circ\cdots\circ\mathcal{T }^{\leq t_{r}}\). Indeed, let \(n>2\) and consider the \(r\)-fold iterate \(\mathbb{T}_{t}^{r}\) for \(t>0\) and for variable \(r\). By [13, Thm. 1.2], the fiber dimension of the map \(\mathcal{N}_{n}^{[0,t]}\rightarrow\mathcal{N}_{n}^{[0]}\) is positive, and hence so is the fiber dimension \(d\) of \(\mathcal{T}_{n}^{\leq t}\rightarrow\mathcal{N}_{n}^{[0]}\). But then the fiber dimension of the
\(r\)-fold composition of geometric correspondences \(\mathcal{T}^{\leq t}\circ\mathcal{T}^{\leq t}\circ\cdots\circ\mathcal{T}^{\leq t }\to\mathcal{N}_{n}^{[0]}\) is equal to \(rd\) and hence, for \(rd>n\), has irreducible components contained in the special fiber which have too high dimension. Hence the resulting formal scheme is not flat over \(O_{\bar{F}}\) and the induced map on K-groups is not equal to the \(r\)-fold composition of the endomorphism \((\mathcal{T}^{\leq t})_{*}\).
This problem would disappear, if we had defined the Hecke correspondences as derived formal schemes. Indeed, the map \(\mathbb{T}_{\varphi_{1}*\varphi_{2}*\cdots*\varphi_{r}}\) is induced by the composition of derived geometric correspondences \(\mathcal{T}_{\mathrm{der}}^{\leq t_{1}}\circ\mathcal{T}_{\mathrm{der}}^{\leq t _{2}}\circ\cdots\circ\mathcal{T}_{\mathrm{der}}^{\leq t_{r}}\), cf. Remark 9.2.2. As the argument above shows, this composition of derived geometric correspondences is not in general a classical formal scheme in our case. Note that, even if we had defined our Hecke operators in terms of derived formal schemes, this does not seem to help in proving Conjecture 5.5.1. Indeed, the only potential argument we see to prove this commutativity is to relate our Hecke operators to classical Hecke operators in the generic fiber, where the desired commutativity holds, and use some density argument to extend the commutativity integrally. However, derived schemes seem unsuitable for such density arguments.
**Remark 5.5.4**.: It is conceivable that \((\mathcal{T}^{\leq t})_{*}=\mathbb{T}_{t}\), even when the fiber dimension is positive. Even so, we prefer to write \(\mathbb{T}_{t}\) as a product of intertwining Hecke operators. Our definition is tentative and only a proof of the AFL for the full Hecke algebra in higher dimension will decide which definition is "the right one".
**Remark 5.5.5**.: The argument above uses the fiber dimension of the natural maps \(\pi\colon\mathcal{N}_{n}^{[t,0]}\to\mathcal{N}_{n}^{[0]}\), resp. \(\pi^{\prime}\colon\mathcal{N}_{n}^{[0,t]}\to\mathcal{N}_{n}^{[0]}\). In [13], the natural maps \(\pi\colon\mathcal{N}_{n}^{[t^{\prime},t]}\to\mathcal{N}_{n}^{[t^{\prime}]}\) for arbitrary \(t^{\prime}\neq t\) are considered (in loc. cit. arbitrary affine Deligne-Lusztig varieties are considered at the point set level). More precisely, consider the fiber of \(\pi\) over an arbitrary point in \(\mathcal{N}_{n}^{[t^{\prime}]}(\bar{k})\) (the latter is the set of \(\bar{k}\)-points of a projective scheme over \(\bar{k}\)). Then, for \(n>2\), such a fiber always has strictly positive dimension, unless \(t=0\) and \(t^{\prime}=2\), in which case the fibers are finite, cf. [13, Thm. 1.2]. Hence the morphism \(\pi\) is non-flat in all cases outside the case (\(t=0,t^{\prime}=2\)). Indeed, both morphisms \(\pi\) and \(\pi^{\prime}\) are relatively representable by projective morphisms which are finite in the generic fiber. If flatness held, then all fibers in the special fiber would be finite as well, a contradiction.
**Example 5.5.6**.: Let \(n=2\). In this case, \(\mathcal{N}_{2}\) can be identified with the Lubin-Tate space \(\mathcal{M}_{2}\) for \(n=2\) via the Serre construction, cf. [17]. Here \(\mathcal{M}_{2}\) parametrizes pairs \((Y,\rho)\), where \(Y\) is a strict formal \(O_{F_{0}}\)-module of dimension one and height \(2\), and where \(\rho\) is a framing with a fixed Lubin-Tate module \(\mathbb{Y}\) over \(\bar{k}\). Also, there is a natural identification of \(\mathcal{N}_{2}^{[2]}\) with \(\mathcal{N}_{2}\). Indeed, let \((Y,\iota_{Y},\lambda_{Y},\rho_{Y})\) be an object of \(\mathcal{N}_{2}^{[2]}\). Then the inclusion \(\ker\lambda_{Y}\subset Y[\varpi]\) is an equality and hence \(\lambda_{Y}=\varpi\lambda\) for a unique principal polarization \(\lambda\); associating to \((Y,\iota_{Y},\lambda_{Y},\rho_{Y})\) the object \((Y,\iota_{Y},\lambda,\rho_{Y})\) of \(\mathcal{N}_{2}\) defines the desired isomorphism. Under this identification, \(\mathcal{N}_{2}^{[2,0]}\) is isomorphic to \(\mathcal{M}_{2,\Gamma_{0}}\), the \(\Gamma_{0}(p)\)-level covering of \(\mathcal{M}_{2}\) (generalized from \(\mathbb{Q}_{p}\) to arbitrary \(F_{0}\)). Here \(\mathcal{M}_{2,\Gamma_{0}}\) parametrizes isogenies \(\alpha:Y\to Y^{\prime}\) of degree \(q\) with \(\ker\alpha\subset Y[\varpi]\) which lift a given isogeny \(\mathbb{Y}\to\mathbb{Y}\). Note that \(\mathcal{M}_{2,\Gamma_{0}}(\bar{k})\) consists of a single
point. Indeed, any two isogenies \(\mathbb{Y}\to\mathbb{Y}\) as above differ by a quasi-isogeny of degree \(0\). But such a quasi-isogeny is automatically an automorphism: denoting by \(D\) the quaternion division algebra over \(F_{0}\), we have
\[\operatorname{Aut}(\mathbb{Y})=O_{D}^{\times}=\{x\in D\mid\operatorname{Nm}(x) \in O_{F_{0}}^{\times}\}=\{x\in\operatorname{End}^{o}(\mathbb{Y})\mid\deg(x)=0\}.\]
In this case, all maps to \(\mathcal{N}_{2}\) in the diagram (5.5.1) are finite and flat, and hence so are the corresponding maps for the iterated correspondences \((\mathcal{T}^{\leq 2})^{r}=\mathcal{T}^{\leq 2}\circ\mathcal{T}^{\leq 2}\circ \cdots\circ\mathcal{T}^{\leq 2}\). Here Conjecture 5.5.1 is empty; we have \((\mathcal{T}^{\leq 2})_{*}=(\mathcal{N}_{2}^{[2,0]})_{*}\circ(\mathcal{N}_{2}^{[0,2]})_{*}\) and the Hecke operators are induced by geometric Hecke correspondences, i.e., \(\mathbb{T}_{2}^{r}=((\mathcal{T}^{\leq 2})^{r})_{*}\).
**Remark 5.5.7**.: Li-Mihatsch [21] consider a situation related to the Linear ATC/AFL for Lubin-Tate space. They also define Hecke operators by first constructing these for distinguished generators of the Hecke algebra, and then by composition (instead of maps of K-groups, they consider maps of cycle groups). In their case both projection maps to \(\mathcal{N}_{n}\) are finite and flat, and the same is true for the composition of these distinguished geometric correspondences. Hence their compositions of distinguished Hecke operators are induced by compositions of geometric correspondences. In their case, the analogue of Conjecture 5.5.1 follows by a density argument for the Hecke correspondences from the classical definition of Hecke correspondences in the generic fiber, which implies the commutativity in the generic fiber.
**Remark 5.5.8**.: Fix an even \(t\) with \(0\leq t\leq n\). Since \(\mathcal{N}_{n}^{[t^{\prime}]}\) is regular for all \(t^{\prime}\), the diagram (5.4.2) defines a Hecke operator \(\mathbb{T}_{\mathbf{1}_{K^{[t]}K^{[t^{\prime}]}}\ast\mathbf{1}_{K^{[t^{\prime}]}K^{[t]}}}\) corresponding to the element \(\mathbf{1}_{K^{[t]}K^{[t^{\prime}]}}\ast\mathbf{1}_{K^{[t^{\prime}]}K^{[t]}}\) in the Hecke algebra \(\mathcal{H}_{K^{[t]}}\). When \(t>0\), it is not clear what subalgebra these elements generate. It is also not clear what the relations are among these elements.
### Hecke correspondences for the product
Let
\[\mathcal{N}_{n,n+1}=\mathcal{N}_{n}\times_{\operatorname{Spf}O_{\check{F}}} \mathcal{N}_{n+1}.\]
We replace the diagram (5.4.2) by the following diagram, in which the top is defined by the fact that the square is cartesian,
We use this diagram to define the Hecke operator \(\mathbb{T}_{\varphi}\) when \(\varphi\in\mathcal{H}_{K^{\flat}\times K}\) is atomic. In this case \(t=0\) or \(t^{\prime}=0\) and hence the lower oblique arrows are the identity in one factor; we
define \(\mathbb{T}_{\varphi}=\mathbb{T}_{\varphi}^{-}\circ\mathbb{T}_{\varphi}^{+}\) as in (5.5.2). More precisely, for closed formal subschemes \(A\) of \(\mathcal{N}_{n,n+1}\), resp. \(B\) of \(\mathcal{N}_{n}^{[t]}\times\mathcal{N}_{n+1}^{[t^{\prime}]}\), we obtain
\[\begin{split}&\mathbb{T}_{\varphi}^{A,+}\colon K^{A}(\mathcal{N}_{n,n+1})\longrightarrow K^{(\mathcal{N}_{n}^{[0,t]}\times\mathcal{N}_{n+1}^{[0,t^ {\prime}]})(A)}(\mathcal{N}_{n}^{[t]}\times\mathcal{N}_{n+1}^{[t^{\prime}]}), \\ &\mathbb{T}_{\varphi}^{B,-}\colon K^{B}(\mathcal{N}_{n}^{[t]} \times\mathcal{N}_{n+1}^{[t^{\prime}]})\longrightarrow K^{(\mathcal{N}_{n}^{[t,0]}\times\mathcal{N}_{n+1}^{[t^{\prime},0]})(B)}(\mathcal{N}_{n,n+1}),\\ &\mathbb{T}_{\varphi}^{A}\colon K^{A}(\mathcal{N}_{n,n+1}) \longrightarrow K^{\mathcal{T}^{\leq t,\leq t^{\prime}}(A)}(\mathcal{N}_{n,n+ 1}).\end{split} \tag{5.6.1}\]
After this, assuming the obvious analogue of Conjecture 5.5.1 for \(\mathcal{N}_{n,n+1}\), we define \(\mathbb{T}_{\varphi}^{A}\) for monomial elements by iterated compositions, and for general elements \(\varphi\in\mathcal{H}_{K^{\flat}\times K}\) as linear combinations.
## 6. AFL Conjectures for the Hecke algebra
In this section, we formulate the AFL conjecture in its homogeneous version and in its inhomogeneous version. We consider the arithmetic diagonal cycle \(\Delta\subseteq\mathcal{N}_{n,n+1}\).
### Statement of the AFL for Hecke correspondences (homogeneous version)
Let \(W_{0}\) be a split hermitian space of dimension \(n+1\) and \(W_{1}\) its nonsplit form. Also, let \(u_{i}\in W_{i}\) be of unit norm for \(i=0,1\). We identify \(W_{1}\) with \(\mathbb{V}_{n+1}\) defined in §5.2 in such a way that \(u_{1}\in W_{1}\) is mapped to the element \(u_{0}\in\mathbb{V}_{n+1}\) which corresponds to the map \(\bar{\mathbb{E}}\to\mathbb{X}\) defined by the product decomposition \(\mathbb{X}=\mathbb{X}^{\flat}\times\bar{\mathbb{E}}\). Then we may identify \(\mathrm{U}(W_{1}^{\flat})\) (resp. \(\mathrm{U}(W_{1})\)) with \(\mathrm{U}(\mathbb{V}_{n})\) (resp. \(\mathrm{U}(\mathbb{V}_{n+1})\)). Thus \(G_{W_{1}}(F_{0})\) acts on \(\mathcal{N}_{n}\times\mathcal{N}_{n+1}\) via this identification.
For each monomial element \(\varphi\in\mathcal{H}_{K^{\flat}}\otimes\mathcal{H}_{K}\) (cf. Definition 4.1.5), we consider the corresponding geometric correspondence \(\mathcal{T}_{\varphi}\) with its two projections \(\pi_{1},\pi_{2}\colon\mathcal{T}_{\varphi}\to\mathcal{N}_{n,n+1}\) (recall that in general the Hecke operator corresponding to \(\varphi\) is not induced by \(\mathcal{T}_{\varphi}\)). More precisely, \(\mathcal{T}_{\varphi}\) depends on the order of the product decomposition of \(\varphi\) into atomic elements. Let \(g\in G_{W_{1}}(F_{0})\), and consider the image \(g\Delta\) under the induced automorphism of \(\mathcal{N}_{n,n+1}\).
**Proposition 6.1.1**.: _Let \(\varphi\in\mathcal{H}_{K^{\flat}}\otimes\mathcal{H}_{K}\) be a monomial element. Let \(g\in G_{W_{1}}(F_{0})_{\mathrm{rs}}\) be regular semisimple. Then the intersection_
\[g\Delta\cap\mathrm{supp}(\mathcal{T}_{\varphi}(\Delta))=\pi_{2}\big{(}\pi_{2}^ {-1}(g\Delta)\times_{\mathcal{T}_{\varphi}}\pi_{1}^{-1}(\Delta)\big{)}\]
_is a proper scheme, i.e., its ideal of definition is nilpotent and the underlying scheme is proper over \(\mathrm{Spec}\,O_{\bar{F}}\)._
Proof.: The proof is based on the same idea as that of Lemma 4.2.1. It suffices to consider elements of the form \((1,g)\) with \(g\in\mathrm{U}(W_{1})_{\mathrm{rs}}\).
The two projection maps from the formal scheme \(\mathcal{T}=\mathcal{T}_{\varphi}\) to \(\mathcal{N}_{n,n+1}\) are both proper. Therefore it suffices to show that the image of the intersection under the first projection map is a proper scheme. Let \((X^{\flat},X)\), resp. \((X^{\prime\flat},X^{\prime})\), be in \(\Delta\), yielding identical images under \(\mathcal{T}_{\varphi}\), resp. \((1,g)\) (we have suppressed the additional structures in the notation). Then \(X=X^{\flat}\times\mathcal{E}\)
and \(X^{\prime}=X^{\prime\flat}\times\mathcal{E}\), and there are quasi-isogenies \(f_{1}^{\flat}:X^{\flat}\to X^{\prime\flat}\) and \(f_{2}:X\to gX^{\prime}\). Let \(\varpi^{N}f_{1}^{\flat},\varpi^{N}f_{2}\) be isogenies, for some large integer \(N\), depending only on \(\mathcal{T}\). Consider the quasi-isogeny \(f_{1}=(f_{1}^{\flat},\mathrm{id}_{\mathcal{E}}):X\to X^{\prime}\). Then \(\varpi^{N}f_{1}\) is an isogeny.
Recall the element \(u_{0}\in\mathbb{V}\) corresponding to \(u_{1}\in W_{1}\), i.e., the natural element corresponding to the product decomposition \(\mathbb{X}=\mathbb{X}^{\flat}\times\bar{\mathbb{E}}\). Let \(\mathcal{Z}(u_{0})\) be the associated special divisor on \(\mathcal{N}_{n+1}\), cf. subsection 5.2. From \(X^{\prime}\in\mathcal{Z}(u_{0})\) and the isogeny \(\varpi^{N}f_{2}\), it follows that \(X\in\mathcal{Z}(\varpi^{N}gu_{0})\). From the isogeny \(\varpi^{N}f_{1}\) it follows that \(X^{\prime}\in\mathcal{Z}(\varpi^{2N}gu_{0})\). Then inductively we see that \(X\) lies on the intersection of the special divisors \(\mathcal{Z}(u_{0}),\mathcal{Z}(\varpi^{N}gu_{0}),\mathcal{Z}(\varpi^{3N}g^{2 }u_{0}),\cdots\). In particular, \(X\) lies on the special cycle
\[\mathcal{Z}(\varpi^{(2n-1)N}\langle u_{0},gu_{0},\ldots,g^{n}u_{0}\rangle).\]
It is known that this special cycle is a proper scheme, cf. [23, Proof of Lem. 6.1]. This shows that the image of the intersection under the first projection map is a proper scheme. The proof is complete.
We introduce the _intersection number_
\[\mathrm{Int}(g,\varphi):=\langle g\Delta,\mathbb{T}_{\varphi}^{\Delta}(\Delta )\rangle_{\mathcal{N}_{n,n+1}},\quad g\in G_{W_{1}}(F_{0})_{\mathrm{rs}},\quad \varphi\in\mathcal{H}_{K^{\flat}\times K}. \tag{6.1.1}\]
Here the RHS is defined by linearity from the case of monomial \(\varphi\). For monomial \(\varphi\), it is defined as follows. Consider the cup product
\[K^{g\Delta}(\mathcal{N}_{n,n+1})\times K^{\mathcal{T}_{\varphi}(\Delta)}( \mathcal{N}_{n,n+1})\longrightarrow K^{g\Delta\cap\mathcal{T}_{\varphi}( \Delta)}(\mathcal{N}_{n,n+1}).\]
On the other hand, there is the composition of maps
\[K^{g\Delta\cap\mathcal{T}_{\varphi}(\Delta)}(\mathcal{N}_{n,n+1})\stackrel{{ \mathrm{nat}}}{{\longrightarrow}}K^{\prime}(g\Delta\cap\mathcal{T}_{\varphi}( \Delta))\stackrel{{\chi}}{{\rightarrow}}\mathbb{Z}.\]
Here the first map is the natural map from \(K\)-theory to \(K^{\prime}\)-theory (sending a complex \(C\) which is acyclic outside a closed subset to the alternating sum of the classes in \(K^{\prime}\) of the cohomology sheaves of \(C\)); the second map is given by the Euler–Poincaré characteristic (defined since \(g\Delta\cap\mathcal{T}_{\varphi}(\Delta)\) is a proper scheme).
Combining these two maps defines the pairing
\[\langle\,,\,\rangle_{\mathcal{N}_{n,n+1}}\colon K^{g\Delta}(\mathcal{N}_{n,n+ 1})\times K^{\mathcal{T}_{\varphi}(\Delta)}(\mathcal{N}_{n,n+1})\longrightarrow \mathbb{Z}.\]
Tensoring with \(\mathbb{Q}\), we obtain the pairing
\[\langle\,,\,\rangle_{\mathcal{N}_{n,n+1}}\colon K^{g\Delta}(\mathcal{N}_{n,n+ 1})_{\mathbb{Q}}\times K^{\mathcal{T}_{\varphi}(\Delta)}(\mathcal{N}_{n,n+1})_ {\mathbb{Q}}\longrightarrow\mathbb{Q}. \tag{6.1.2}\]
Consider the element \([g\Delta]\in K^{g\Delta}(\mathcal{N}_{n,n+1})\), namely the structure sheaf \(\mathcal{O}_{g\Delta}\), considered as an element in \(K^{\prime}(g\Delta)=K^{g\Delta}(\mathcal{N}_{n,n+1})\). Similarly, consider the element \([\Delta]\in K^{\Delta}(\mathcal{N}_{n,n+1})\) and its image \(\mathbb{T}_{\varphi}^{\Delta}([\Delta])\in K^{\mathcal{T}_{\varphi}(\Delta)}( \mathcal{N}_{n,n+1})_{\mathbb{Q}}\) under the map \(\mathbb{T}_{\varphi}^{\Delta}\colon K^{\Delta}(\mathcal{N}_{n,n+1})\to K^{ \mathcal{T}_{\varphi}(\Delta)}(\mathcal{N}_{n,n+1})_{\mathbb{Q}}\). Then the RHS of (6.1.1) is defined as \(\langle[g\Delta],\mathbb{T}_{\varphi}^{\Delta}([\Delta])\rangle_{\mathcal{N}_{n,n+1}}\).
**Remark 6.1.2**.: Let \(\varphi\) be a monomial element. If Conjecture 5.5.1 holds true, then the quantity \(\operatorname{Int}(g,\varphi)\) does not depend on the order in which the monomial element \(\varphi\) is written as a product of atomic elements. This independence is all that matters in the formulation of the AFL conjecture. Then \(\operatorname{Int}(g,\varphi)\) defines a linear functional \(\mathcal{H}_{K^{\flat}\times K}\to\mathbb{Q}\).
**Remark 6.1.3**.: Let \(h\in\operatorname{U}(W_{1}^{\flat})(F_{0})\times\operatorname{U}(W_{1})(F_{0})\). Denote by
\[h_{*}\colon K^{\Delta}(\mathcal{N}_{n,n+1})\longrightarrow K^{h(\Delta)}( \mathcal{N}_{n,n+1}),\text{ resp. }h_{*}\colon K^{\mathbb{T}_{\varphi}(\Delta)}( \mathcal{N}_{n,n+1})\longrightarrow K^{h(\mathbb{T}_{\varphi}(\Delta))}( \mathcal{N}_{n,n+1})\]
its action on the K-group. Then \(h_{*}\circ\mathbb{T}_{\varphi}^{\Delta}=\mathbb{T}_{\varphi}^{h(\Delta)} \circ h_{*}\). If \(h\) is a diagonal element induced by an element in \(\operatorname{U}(W_{1}^{\flat})(F_{0})\), then \(h\Delta=\Delta\). It follows that for \(h_{1},h_{2}\in\operatorname{U}(W_{1}^{\flat})(F_{0})\),
\[\operatorname{Int}(g,\varphi) =\langle g\Delta,\mathbb{T}_{\varphi}^{\Delta}(\Delta)\rangle= \langle gh_{1}\Delta,\mathbb{T}_{\varphi}^{\Delta}(h_{2}\Delta)\rangle\] \[=\langle gh_{1}\Delta,h_{2}\mathbb{T}_{\varphi}^{\Delta}(\Delta)\rangle\] \[=\langle h_{2}^{-1}gh_{1}\Delta,\mathbb{T}_{\varphi}^{\Delta}( \Delta)\rangle\] \[=\operatorname{Int}(h_{2}^{-1}gh_{1},\varphi).\]
Therefore the function \(g\in G_{W_{1}}(F_{0})_{\operatorname{rs}}\mapsto\operatorname{Int}(g,\varphi)\) depends only on the orbit of \(g\).
**Conjecture 6.1.4**.: _(AFL for the spherical Hecke algebra, homogeneous version.) Let \(\varphi^{\prime}\in\mathcal{H}_{K^{\flat}}\otimes\mathcal{H}_{K^{\prime}}\), and let \(\varphi=\operatorname{BC}(\varphi^{\prime})\in\mathcal{H}_{K^{\flat}}\otimes \mathcal{H}_{K}\). Then_
\[2\operatorname{Int}(g,\varphi)\cdot\log q=-\omega(\gamma)\,\partial \mathrm{Orb}\,\big{(}\gamma,\varphi^{\prime}\big{)},\]
_whenever \(\gamma\in G^{\prime}(F_{0})_{\operatorname{rs}}\) is matched with \(g\in G_{W_{1}}(F_{0})_{\operatorname{rs}}\)._
In §7 we prove the conjecture when \(n=1\), cf. Theorem 8.2.3.
**Remark 6.1.5**.: To ensure that the LHS is well-defined, we are implicitly using the commutativity conjecture, Conjecture 5.5.1 (for the product \(\mathcal{N}_{n,n+1}\), cf. SS5.6) or its weakened version, cf. Remark 6.1.2. However, one may bypass this by interpreting Conjecture 6.1.4 as saying that the AFL identity holds for any representative \(\widetilde{\varphi}\in\widetilde{\mathcal{H}}_{K}\) of \(\varphi\) in the free algebra corresponding to the polynomial basis given by the atomic generators, cf. (5.5.3).
### The inhomogeneous version of the AFL
This is the special case when \(\varphi=\mathbf{1}_{K^{\flat}}\otimes f\). Moreover, we choose \(\varphi^{\prime}=\mathbf{1}_{K^{\prime\flat}}\otimes f^{\prime}\) for \(f^{\prime}\in\mathcal{H}_{K^{\prime}}\) with \(\operatorname{BC}_{n+1}(f^{\prime})=f\). In this case it is more elegant to formulate the analytic side in terms of the symmetric space \(S_{n+1}\). In fact, by Lemma 3.7.2 we have for \(\gamma=(\gamma_{1},\gamma_{2})\in G^{\prime}(F_{0})_{\operatorname{rs}}\),
\[\omega_{G^{\prime}}(\gamma)\,\partial\mathrm{Orb}\big{(}\gamma,\mathbf{1}_{K^ {\prime\flat}}\otimes f^{\prime}\big{)}=2\omega_{S}(r(\gamma_{1}^{-1}\gamma_{2 }))\,\partial\mathrm{Orb}(r(\gamma_{1}^{-1}\gamma_{2}),r_{*}^{\eta^{n}}(f^{ \prime})).\]
By Remark 6.1.3, it suffices to consider the regular semi-simple elements of the form \((1,g)\), with \(g\in\operatorname{U}(W_{1})(F_{0})_{\operatorname{rs}}\). For notational simplicity, we write \(\operatorname{Int}(g,f)=\operatorname{Int}((1,g),\mathbf{1}_{K^{\flat}} \otimes f)\). Recall from (3.7.3) and (3.7.5) the isomorphism
\[\operatorname{BC}_{S_{n+1}}^{\eta^{n}}\colon\mathcal{H}_{K^{\prime}_{S}}\overset{\sim}{\longrightarrow}\mathcal{H}_{K}.\]
Then the inhomogeneous version of the AFL is as follows. We emphasize that this conjecture is only a special case of Conjecture 6.1.4.
**Conjecture 6.2.1**.: _(AFL for the spherical Hecke algebra \(\mathcal{H}_{K}\), inhomogeneous version.) Let \(f\in\mathcal{H}_{K}\). Then_
\[\operatorname{Int}(g,f)\cdot\log q=-\omega(\gamma)\,\partial\mathrm{Orb}\, \big{(}\gamma,(\mathrm{BC}_{S_{n+1}}^{\eta^{n}})^{-1}(f)\big{)},\]
_whenever \(g\in U(\mathbb{V}_{n+1})_{\mathrm{rs}}\) is matched with \(\gamma\in S_{n+1}(F_{0})_{\mathrm{rs}}\)._
**Remark 6.2.2**.: There is also the special case of the AFL conjecture when the second factor of \(\varphi\in\mathcal{H}_{K^{\flat}}\otimes\mathcal{H}_{K}\) is the unit element. It is unclear to us whether this case is simpler than the general case.
## 7. The case \(n=1\)
In this section we provide evidence for Conjecture 6.1.4 by proving the case \(n=1\). We also give a direct (local) proof of the FL for the full spherical Hecke algebra in this case (cf. §3.6). We use the following notation. Let \(G^{\prime}=\mathrm{GL}_{2}(F)\) and \(K^{\prime}=\mathrm{GL}_{2}(O_{F})\). We write \(\varpi^{(m,m^{\prime})}\) for the diagonal matrix in \(G^{\prime}\) with entries \(\varpi^{m}\) and \(\varpi^{m^{\prime}}\). Also, \(G=\mathrm{U}(W_{0})\), where we use the Hermitian form on the standard \(2\)-dimensional vector space given by the Hermitian matrix
\[\begin{pmatrix}&\sqrt{\epsilon}\\ -\sqrt{\epsilon}&\end{pmatrix},\quad\epsilon\in O_{F_{0}}^{\times}\setminus O_{ F_{0}}^{\times,2}. \tag{7.0.1}\]
We let \(K=G\cap\mathrm{M}_{2}(O_{F})\) be the natural hyperspecial maximal compact subgroup.
### Base change homomorphism
In this subsection we explicate the base change homomorphism
\[\mathrm{BC}:\mathcal{H}_{K^{\prime}}\longrightarrow\mathcal{H}_{K},\]
as well as an auxiliary map and its factorization (3.7.8), which we record here for later use,
\[\mathrm{BC}\colon\ \mathcal{H}_{K^{\prime}}\xrightarrow{\ r_{*}^{\eta}\ }\mathcal{H}_{K^{\prime}_{S}}\xrightarrow{\ \mathrm{BC}_{S}^{\eta}\ }\mathcal{H}_{K}. \tag{7.1.1}\]
Our goal is to compute the isomorphism \(\mathrm{BC}_{S}^{\eta}\) explicitly in terms of a certain basis of \(\mathcal{H}_{K}\). We will freely use statements from §3.
Set
\[f^{\prime}_{m}=\mathbf{1}_{M_{2}(O_{F})_{v\circ\det=m}},\quad m\geq 0.\]
The Satake isomorphism \(\mathcal{H}_{K^{\prime}}\simeq\mathbb{C}[X,Y]^{W}\) for \(\mathrm{GL}_{2}(F)\) is explicitly given by
\[\mathrm{Sat}(f^{\prime}_{m})=q^{m}\frac{X^{m+1}-Y^{m+1}}{X-Y}. \tag{7.1.2}\]
Here and below we use \(X=x_{1},Y=x_{2}\) for \(x_{1},x_{2}\) in §3.
Similarly, setting
\[f_{m}=\mathbf{1}_{\varpi^{-m}M_{2}(O_{F})\cap G},\]
the Satake isomorphism for the unitary group is explicitly given by
\[\operatorname{Sat}(f_{m})=q^{m}\frac{X^{(2m+1)/2}-Y^{(2m+1)/2}}{X^{1/2}-Y^{1/2}}, \tag{7.1.3}\]
where \(XY=1\). Note that, despite the fractional exponents, the last expression is a polynomial of \(X\) and \(Y=X^{-1}\):
\[\operatorname{Sat}(f_{m})=q^{m}\sum_{i=-m}^{m}X^{i}. \tag{7.1.4}\]
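The following small symbolic computation (a Python/SymPy sketch included purely for illustration; it is not part of the argument) confirms the expansion of (7.1.2) into a complete homogeneous polynomial and the equality of (7.1.3) and (7.1.4) for small \(m\); the substitution \(X=t^{2}\) is only a device to avoid fractional exponents.

```python
import sympy as sp

q, X, Y, t = sp.symbols('q X Y t', positive=True)

for m in range(6):
    # (7.1.2): q^m (X^{m+1} - Y^{m+1})/(X - Y) is q^m times the complete homogeneous polynomial
    lhs = q**m * (X**(m + 1) - Y**(m + 1)) / (X - Y)
    rhs = q**m * sum(X**i * Y**(m - i) for i in range(m + 1))
    assert sp.cancel(lhs - rhs) == 0

    # (7.1.3) versus (7.1.4): substitute X = t^2 and Y = X^{-1} = t^{-2}, so that X^{1/2} = t
    lhs = q**m * (t**(2*m + 1) - t**(-(2*m + 1))) / (t - 1/t)
    rhs = q**m * sum((t**2)**i for i in range(-m, m + 1))
    assert sp.cancel(lhs - rhs) == 0

print("Satake formulas (7.1.2)-(7.1.4) verified for m <= 5")
```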
Set
\[\phi_{m}:=\mathbf{1}_{K\varpi^{(m,-m)}K}=f_{m}-f_{m-1}. \tag{7.1.5}\]
Here, and in the sequel, we set \(f_{m}^{\prime}=0\) for \(m<0\), and similarly for \(f_{m}\). Then we have a basis of \(\mathcal{H}_{K}\) given by \(\phi_{m},m\geq 0\). In terms of the functions introduced in §3 and §4, we have \(\phi_{1}=f^{[2]}\), cf. (3.5.1), hence \(\phi_{1}=\varphi_{2}-(q+1)\), cf. (4.1.9).
The base change homomorphism is then the natural quotient map defined by setting \(XY=1\):
\[\operatorname{BC}\colon\mathbb{C}[X,Y]^{W}\longrightarrow\mathbb{C}[X,X^{-1} ]^{W}\simeq\mathbb{C}[X+X^{-1}]. \tag{7.1.6}\]
Next we describe explicitly the other two maps in the diagram (7.1.1). Consider the Cartan decomposition for the symmetric space \(S_{2}\),
\[S_{2}(F_{0})=\coprod_{m\geq 0}K^{\prime}\cdot\begin{pmatrix}&\varpi^{m}\\ \varpi^{-m}&\end{pmatrix}.\]
The subset indexed by \(m=0\) is \(K_{S}^{\prime}=K^{\prime}\cdot 1_{2}=K^{\prime}\cap S_{2}(F_{0})\). We denote the complete set of representatives of \(K^{\prime}\)-orbits on \(S_{2}(F_{0})\),
\[t_{m}:=\begin{pmatrix}&\varpi^{m}\\ \varpi^{-m}&\end{pmatrix},\quad m\geq 0. \tag{7.1.7}\]
Therefore, we have a basis of \(\mathcal{H}_{K_{S}^{\prime}}\) given by
\[\varphi_{m}^{\prime}=\mathbf{1}_{K^{\prime}\cdot t_{m}},\quad m\geq 0. \tag{7.1.8}\]
Recall the map \(r_{*}^{\eta}\) from (3.7.7). Note that a basis of \(\mathcal{H}_{K^{\prime}}\) is given by \(\mathbf{1}_{\varpi^{j}K^{\prime}\varpi^{(m,0)}K^{\prime}},j\in\mathbb{Z},m\geq 0\).
**Lemma 7.1.1**.:
1. _The map_ \(r_{*}^{\eta}\colon\mathcal{H}_{K^{\prime}}\to\mathcal{H}_{K_{S}^{\prime}}\) _sends_ \(\mathbf{1}_{\varpi^{j}K^{\prime}\varpi^{(m,0)}K^{\prime}}\) _to_ \((-1)^{m}\sum_{i=0}^{m}e_{m-i}\varphi_{i}^{\prime}\)_, where_ \[e_{i}=\begin{cases}1,&i=0,\\ q^{i}(1+q^{-1}),&i>0.\end{cases}\]
2. _The map_ \(r_{*}^{\eta}\) _sends_ \(f_{m}^{\prime}\) _to_ \((-1)^{m}\sum_{i=0}^{m}(\sum_{j=0}^{m-i}q^{j})\varphi_{i}^{\prime}\)_._
3. _We have_ \(\operatorname{BC}_{S}^{\eta}(\tilde{\varphi}^{\prime}_{m})=\phi_{m}\)_, where_ (7.1.9) \[\tilde{\varphi}^{\prime}_{m}:=(-1)^{m}(\varphi^{\prime}_{m}+2\varphi^{\prime}_{m-1}+\cdots+2\varphi^{\prime}_{0})\in\mathcal{H}_{K^{\prime}_{S}}.\]
Proof.: We first show part (ii). We need to compute the integral
\[r^{\eta}_{*}(f^{\prime}_{m})(g\overline{g}^{-1})=\int_{\operatorname{GL}_{2}( F_{0})}f^{\prime}_{m}(gh)\widetilde{\eta}(gh)\,dh.\]
Since the determinant of any element in the support of \(f^{\prime}_{m}\) has valuation \(m\), the integral is equal to
\[r^{\eta}_{*}(f^{\prime}_{m})(g\overline{g}^{-1})=(-1)^{m}\int_{\operatorname{ GL}_{2}(F_{0})}f^{\prime}_{m}(gh)\,dh.\]
It suffices to determine its value at elements of the form \(t_{i}\), cf. (7.1.7). By Iwasawa decomposition, every element in \(\operatorname{GL}_{2}(F)\) lies in some \(K^{\prime}\begin{pmatrix}1&u\\ &1\end{pmatrix}\begin{pmatrix}\varpi^{m-i}&\\ &\varpi^{i}\end{pmatrix}\). All elements in this \(K^{\prime}\)-coset are mapped into a single \(K^{\prime}\)-orbit in \(S_{2}(F_{0})\) with representative
\[\begin{pmatrix}1&u\\ &1\end{pmatrix}\begin{pmatrix}1&\overline{u}\\ &1\end{pmatrix}^{-1}=\begin{pmatrix}1&u-\overline{u}\\ &1\end{pmatrix}.\]
This last element is \(K^{\prime}\)-equivalent (in \(S_{2}(F_{0})\)) to \(t_{\max\{-v(u-\overline{u}),0\}}\) (recall that we are assuming that the residue characteristic is odd). Therefore it suffices to compute the integral for \(g=\begin{pmatrix}1&u\\ &1\end{pmatrix}\), where \(u\in F^{-}\) lies in the purely imaginary part and has valuation \(v(u)\leq 0\).
We use the Iwasawa decomposition \(\operatorname{GL}_{2}(F_{0})=ANK_{0}\), where \(A\) denotes the diagonal torus, \(N\) the subgroup of upper triangular unipotent matrices, and \(K_{0}=\operatorname{GL}_{2}(O_{F_{0}})\). Accordingly we write an element in \(\operatorname{GL}_{2}(F_{0})\) as \(h=\begin{pmatrix}x&\\ &y\end{pmatrix}\begin{pmatrix}1&z\\ &1\end{pmatrix}k\). The Haar measure on \(\operatorname{GL}_{2}(F_{0})\) is then given by \(d^{\times}x\,d^{\times}y\,dz\), where the multiplicative (resp. additive) Haar measure on \(F_{0}^{\times}\) (resp. \(F_{0}\)) is normalized such that \(\operatorname{vol}(O_{F_{0}}^{\times})=1\) (resp. \(\operatorname{vol}(O_{F_{0}})=1\)). Then the condition \(gh\in M_{2}(O_{F}),v(\det(gh))=m\) is equivalent to
\[x,y\in O_{F_{0}},\quad v(xy)=m,\quad xz\in O_{F_{0}},\quad yu\in O_{F}\]
(note that \(x,y,z\in F_{0}\) and \(u\in F^{-}\)). It follows that
\[\int_{\operatorname{GL}_{2}(F_{0})}f^{\prime}_{m}(gh)\,dh=\int_{-v(u)\leq v(y )\leq m}\int_{v(x)=m-v(y)}\int_{z\in\frac{1}{x}O_{F_{0}}}dz\,d^{\times}x\,d^{ \times}y.\]
This triple integral is equal to
\[\sum_{0\leq i\leq m+v(u)}q^{i}.\]
This shows
\[r^{\eta}_{*}(f^{\prime}_{m})=(-1)^{m}\sum_{i=0}^{m}(\sum_{j=0}^{m-i}q^{j}) \varphi^{\prime}_{i}.\]
This shows part (ii) and part (i) follows: by \(\mathbf{1}_{K^{\prime}\varpi^{(m,0)}K^{\prime}}=f^{\prime}_{m}-f^{\prime}_{m-2}\), we deduce from part (ii) that
\[r_{*}^{\eta}(\mathbf{1}_{K^{\prime}\varpi^{(m,0)}K^{\prime}})=r_{*}^{\eta}(f^{\prime}_{m}-f^{\prime}_{m-2})=(-1)^{m}\sum_{i=0}^{m}e_{m-i}\varphi^{\prime}_{i}.\]
It remains to show part (iii). From the explicit formulas of Satake transforms (7.1.2) and (7.1.3), we see that the map \(\mathrm{BC}\) in (7.1.6) sends \(f^{\prime}_{m}+qf^{\prime}_{m-1}\) to \(f_{m}\). It follows that
\[\mathrm{BC}(f^{\prime}_{m}+(q-1)f^{\prime}_{m-1}-qf^{\prime}_{m-2})=\phi_{m},\quad m\geq 0. \tag{7.1.10}\]
From part (ii) we have
\[r_{*}^{\eta}(f^{\prime}_{m}+(q-1)f^{\prime}_{m-1}-qf^{\prime}_{m-2})=\tilde{\varphi}^{\prime}_{m}.\]
By the commutative diagram (7.1.1), we see that \(\mathrm{BC}^{\eta}_{S}\) sends \(\tilde{\varphi}^{\prime}_{m}\) to \(\phi_{m}\), as desired.
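The identities (7.1.10) and the passage from part (ii) to part (iii) can also be checked mechanically for small \(m\); the following Python/SymPy sketch is included only as an illustration and is not part of the argument.

```python
import sympy as sp

q, X = sp.symbols('q X', positive=True)

def bc_sat_gl2(m):
    """BC of the Satake transform (7.1.2) of f'_m, i.e. (7.1.2) with Y = X^{-1}."""
    return sp.Integer(0) if m < 0 else q**m * sum(X**(m - 2*i) for i in range(m + 1))

def sat_u(m):
    """Satake transform (7.1.4) of f_m."""
    return sp.Integer(0) if m < 0 else q**m * sum(X**i for i in range(-m, m + 1))

for m in range(6):
    # BC(f'_m + q f'_{m-1}) = f_m, and hence (7.1.10) for phi_m = f_m - f_{m-1}
    assert sp.expand(bc_sat_gl2(m) + q*bc_sat_gl2(m - 1) - sat_u(m)) == 0
    phi_m = sat_u(m) - sat_u(m - 1)
    assert sp.expand(bc_sat_gl2(m) + (q - 1)*bc_sat_gl2(m - 1) - q*bc_sat_gl2(m - 2) - phi_m) == 0

def r_eta_coeffs(m):
    """Coefficients of r_*^eta(f'_m) on phi'_0, ..., phi'_m, as in part (ii)."""
    return [] if m < 0 else [(-1)**m * sum(q**j for j in range(m - i + 1)) for i in range(m + 1)]

for m in range(1, 6):
    pad = lambda c: c + [sp.Integer(0)]*(m + 1 - len(c))
    lhs = [a + (q - 1)*b - q*c for a, b, c in
           zip(pad(r_eta_coeffs(m)), pad(r_eta_coeffs(m - 1)), pad(r_eta_coeffs(m - 2)))]
    tilde = [(-1)**m * (1 if i == m else 2) for i in range(m + 1)]  # coefficients in (7.1.9)
    assert all(sp.expand(u - v) == 0 for u, v in zip(lhs, tilde))

print("(7.1.10) and Lemma 7.1.1 (ii)-(iii) verified for m <= 5")
```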
**Remark 7.1.2**.: The explicit description above gives a direct proof of (3.7.3) in the case \(n=2\).
### Orbital integrals on the unitary group
We now consider the orbital integrals of elements in \(\mathcal{H}_{K^{\prime}_{S}}\). Recall from (7.1.5) that \(\phi_{m}\) denotes \(\mathbf{1}_{K\varpi^{(m,-m)}K}\), where, we recall, \(K\) denotes the hyperspecial subgroup of \(U(W_{0})\). Now we make a change of Hermitian form for \(W_{0}\) from (7.0.1) to the identity matrix. This is for the convenience of orbit comparison in (3.3.2). We define the invariants map on the orbits of \(G(F_{0})\) by
\[g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\longmapsto(a,d,bc), \tag{7.2.1}\]
(note that \((a,d,bc)\) are not independent).
**Proposition 7.2.1**.: _Let \(m\geq 1\). Then for \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in G(F_{0})\) regular semisimple, we have_
\[\mathrm{Orb}(g,\phi_{m},s)=\begin{cases}1,&v(1-a\overline{a})=-2m\\ 0,&v(1-a\overline{a})\neq-2m.\end{cases} \tag{7.2.2}\]
Proof.: We have, since we use the Hermitian form on \(W_{0}\) given by the identity matrix,
\[a\bar{a}=d\bar{d}=1-b\bar{b},\quad a\bar{c}=b\bar{d}\neq 0. \tag{7.2.3}\]
The invariants are \(a,d\) and \(bc=(1-a\overline{a})d/\overline{a}\). Since the group \(H=\mathrm{U}_{1}(F_{0})\) is compact and lies in \(K\), the orbital integral is either \(1\) or \(0\), depending on whether \(g\) lies in the support of \(\phi_{m}\) or not. We note that the support of \(\phi_{m}\) is the set of matrices \(g\in G(F_{0})\) such that \(\varpi^{m}g\in M_{2}(O_{F})\) and such that there exists at least one entry of \(g\) with valuation exactly \(-m\). With the help of the above identities, it is easy to see that the support condition amounts to \(a,b,c,d\) all having valuation equal to \(-m\).
### Orbital integrals on the symmetric space \(S_{2}\)
Recall that the inhomogeneous orbital integral is defined by (3.2.3). We also recall from [29, §15.1] the structure of regular semisimple sets on the symmetric space \(S(F_{0})=S_{2}(F_{0})\). We write an element as
\[\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in S(F_{0}). \tag{7.3.1}\]
Then \(\gamma\) is regular semi-simple if and only if \(bc\neq 0\), in which case we may write \(\gamma\) as
\[\gamma=\gamma(a,b) :=\begin{pmatrix}a&b\\ (1-\mathrm{N}a)/\overline{b}&-\overline{a}b/\overline{b}\end{pmatrix}\] \[=\begin{pmatrix}1&\\ &-b/\overline{b}\end{pmatrix}\begin{pmatrix}a&b\\ -(1-\mathrm{N}a)/b&\overline{a}\end{pmatrix}\in S(F_{0})_{\mathrm{rs}},\quad a \in F\smallsetminus F^{1},\ b\in F^{\times}. \tag{7.3.2}\]
Similarly to the unitary group case, we define the invariant map on the orbits of \(S(F_{0})\),
\[\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\longmapsto(a,d,bc), \tag{7.3.3}\]
(again, \((a,d,bc)\) are not independent, cf. (7.2.1)). Then an orbit of \(\gamma\in S(F_{0})_{\mathrm{rs}}\) matches an orbit of \(g\in G(F_{0})_{\mathrm{rs}}\) if and only if they have the same invariants. In particular, an element \(\gamma\in S(F_{0})_{\mathrm{rs}}\) matches an element in the quasi-split (resp. non-quasi-split) unitary group if and only if \(v(1-a\bar{a})\) is even (resp. \(v(1-a\bar{a})\) is odd).
We now consider the orbital integral of elements in \(\mathcal{H}_{K_{S}^{\prime}}\). Since \(F/F_{0}\) is unramified, we may assume, up to conjugation by \(H=\mathrm{GL}_{1}(F_{0})\), that the entry \(c\) of a regular semisimple \(\gamma\) as in (7.3.1) is a unit.
**Proposition 7.3.1**.: _Let \(\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in S_{2}(F_{0})\) be regular semisimple with \(v(c)=0\). When \(m\geq 1\), we have_
\[\mathrm{Orb}(\gamma,\varphi_{m}^{\prime},s)=\begin{cases}(-1)^{m}q^{ms}(1+ \eta(1-a\overline{a})q^{-(v(1-a\overline{a})+2m)s}),&v(1-a\overline{a})>-2m\\ (-1)^{m}q^{ms},&v(1-a\overline{a})=-2m\\ 0,&v(1-a\overline{a})<-2m.\end{cases} \tag{7.3.4}\]
_When \(m=0\), we have_
\[\mathrm{Orb}(\gamma,\varphi_{0}^{\prime},s)=\begin{cases}\sum_{i=0}^{v(1-a \overline{a})}(-1)^{i}q^{-is},&v(1-a\overline{a})\geq 0\\ 0,&v(1-a\overline{a})<0.\end{cases} \tag{7.3.5}\]
Proof.: Let \(\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in S_{2}(F_{0})\) be regular semisimple. Then we have
\[1-a\bar{a}=1-d\bar{d}=c\bar{b}=b\bar{c}\neq 0. \tag{7.3.6}\]
Assume \(m>0\). We first consider those orbits \(\gamma\) such that \(a\in O_{F}\). Then \(d\in O_{F}\) and \(b\bar{c}\in O_{F}\). Then the condition for \(h^{-1}\gamma h=\begin{pmatrix}a&h^{-1}b\\ ch&d\end{pmatrix}\) to be in \(\operatorname{supp}(\varphi^{\prime}_{m})\) is that either \(h^{-1}b\) or \(ch\) lies in \(\varpi^{-m}O_{F}^{\times}\). It follows that \(v(h)\in\{-m,v(bc)+m\}\) and
\[\operatorname{Orb}(\gamma,\varphi^{\prime}_{m},s)=(-1)^{m}q^{ms}(1+\eta(b\bar {c})q^{-(v(b\bar{c})+2m)s}). \tag{7.3.7}\]
Next we consider the orbits with \(a\notin O_{F}\). Then \(a\bar{a}\), \(d\bar{d}\) and \(c\bar{b}=b\bar{c}\) all have equal valuations. If \(v(a)=-m\), then \(v(b)=2v(a)=-2m\) (note that we have assumed \(v(c)=0\)). The condition for \(h\cdot\gamma\in K^{\prime}\cdot t_{m}\) is
\[v(h^{-1}b)\geq-m,\quad v(ch)=v(h)\geq-m,\]
or equivalently
\[-m\leq v(h)\leq m+v(b)=-m.\]
Therefore the orbital integral is equal to
\[\operatorname{Orb}(\gamma,\varphi^{\prime}_{m},s)=(-1)^{m}q^{ms}. \tag{7.3.8}\]
If \(-m<v(a)<0\), then \(v(b)=2v(a)\neq-2m\). A similar argument shows that \(v(h)\in\{-m,m+v(b)\}\) and
\[\operatorname{Orb}(\gamma,\varphi^{\prime}_{m},s)=(-1)^{m}q^{ms}(1+\eta(b \bar{c})q^{-(v(b\bar{c})+2m)s}). \tag{7.3.9}\]
We have thus proved (7.3.4).
Assume \(m=0\). If \(v(a)\geq 0\), then
\[\operatorname{Orb}(\gamma,\varphi^{\prime}_{0},s)=\sum_{i=0}^{v(bc)}(-1)^{i}q ^{-is}. \tag{7.3.10}\]
The case \(v(a)<0\) is similar to the \(m>0\) case.
Finally note that \(v(b)=v(bc)=v(1-a\overline{a})\). The proof is complete.
**Proposition 7.3.2**.: _(i) The functions \(\phi_{m}\in\mathcal{H}_{K}\) and \(\tilde{\varphi}^{\prime}_{m}\in\mathcal{H}_{K^{\prime}_{S}}\) are transfers of each other._
_(ii) Let_ \(\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in S_{2}(F_{0})_{\mathrm{rs}}\) _with_ \(v(c)=0\)_. Assume_ \(v(1-a\overline{a})\) _odd (so that_ \(\gamma\) _matches an element in_ \(\mathrm{U}(W_{1})(F_{0})_{\mathrm{rs}}\)_). When_ \(m\geq 1\)_, we have_
\[\partial\operatorname{Orb}(\gamma,\tilde{\varphi}^{\prime}_{m})=\log q\begin{cases} 1,&v(1-a\overline{a})>0\\ 0,&v(1-a\overline{a})<0.\end{cases}\]
_When_ \(m=0\)_, we have_
\[\partial\operatorname{Orb}(\gamma,\tilde{\varphi}^{\prime}_{0})=\log q \begin{cases}\frac{v(1-a\overline{a})+1}{2},&v(1-a\overline{a})\geq 0\\ 0,&v(1-a\overline{a})<0.\end{cases}\]
Proof.: When \(m>0\), we have the value at \(s=0\),
\[\operatorname{Orb}(\gamma,\varphi_{m}^{\prime})=(-1)^{m}\begin{cases}1+\eta(1-a \overline{a}),&v(1-a\overline{a})>-2m,\\ 1,&v(1-a\overline{a})=-2m,\\ 0,&v(1-a\overline{a})<-2m,\end{cases} \tag{7.3.11}\]
and, when \(v(1-a\overline{a})\) is odd, we have the first derivative at \(s=0\),
\[\partial\operatorname{Orb}(\gamma,\varphi_{m}^{\prime})=(-1)^{m}\log q \begin{cases}v(1-a\overline{a})+2m,&v(1-a\overline{a})>-2m\\ 0,&v(1-a\overline{a})<-2m.\end{cases}\]
When \(m=0\), we have the value at \(s=0\),
\[\operatorname{Orb}(\gamma,\varphi_{0}^{\prime})=(-1)^{m}\begin{cases}1,&v(1- a\overline{a})\geq 0\\ 0,&v(1-a\overline{a})<0,\end{cases} \tag{7.3.12}\]
and, when \(v(1-a\overline{a})\) is odd, we have the first derivative at \(s=0\),
\[\partial\operatorname{Orb}(\gamma,\varphi_{0}^{\prime})=(-1)^{m}\log q \begin{cases}\frac{v(1-a\overline{a})+1}{2},&v(1-a\overline{a})\geq 0\\ 0,&v(1-a\overline{a})<0.\end{cases}\]
Hence, when \(v(1-a\overline{a})=-2m\), we have \(\operatorname{Orb}(\gamma,\widetilde{\varphi}_{m}^{\prime})=\operatorname{ Orb}(\gamma,\varphi_{m}^{\prime})=1\) (cf. the definition of \(\widetilde{\varphi}_{m}^{\prime}\) in (7.1.9)). When \(v(1-a\overline{a})=-2(m-i)>-2m\) is even, we have from (7.3.11) and (7.3.12)
\[\operatorname{Orb}(\gamma,\tilde{\varphi}_{m}^{\prime})=2-4+4+\cdots+(-1)^{i- 1}4+(-1)^{i}2=0,\]
It follows from a comparison with Proposition 7.2.1 that \(\phi_{m}\in\mathcal{H}(U(V))\) and \(\tilde{\varphi}_{m}^{\prime}\in\mathcal{H}(S_{2})\) are transfers of each other. This proves part (i).
It remains to compute the first derivative \(\partial\operatorname{Orb}(\gamma,\widetilde{\varphi}_{m}^{\prime})\) when \(v(1-a\overline{a})>-2m\) is odd. It suffices to consider the case when \(v(1-a\overline{a})=v(bc)\) is positive. Set \(v(1-a\overline{a})=-2(m-i)+1\). Then \(\partial\operatorname{Orb}(\gamma,\tilde{\varphi}_{m}^{\prime})\) equals \(\log q\) times
\[(v(bc)+2m)-2(v(bc)+2(m-1))+\cdots+(-1)^{m-1}2(v(bc)+2)+(-1)^{m}(v (bc)+1)\] \[= (2m-2(m-1))-\cdots+(-1)^{i-1}(2(m-i+1)-2(m-i))+\cdots+(-1)^{m-1}( 2-1)\] \[= 2-2+2-\cdots+(-1)^{m-2}2+(-1)^{m-1}\] \[= 1.\]
The proof is complete.
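The alternating sums in the proof can be recomputed mechanically. The following Python/SymPy sketch (for illustration only) evaluates \(\operatorname{Orb}(\gamma,\tilde{\varphi}^{\prime}_{m},s)\) from (7.3.4), (7.3.5) and (7.1.9) as a function of \(v=v(1-a\overline{a})\), and checks the values and first derivatives at \(s=0\) used above for small \(m\).

```python
import sympy as sp

q, s = sp.symbols('q s', positive=True)

def orb(k, v):
    """Orb(gamma, phi'_k, s) as a function of v = v(1 - a*conj(a)), following (7.3.4)-(7.3.5)."""
    if k >= 1:
        if v > -2*k:
            eta = -1 if v % 2 else 1
            return (-1)**k * q**(k*s) * (1 + eta*q**(-(v + 2*k)*s))
        return (-1)**k * q**(k*s) if v == -2*k else sp.Integer(0)
    return sum((-1)**i * q**(-i*s) for i in range(v + 1)) if v >= 0 else sp.Integer(0)

def orb_tilde(m, v):
    """Orb(gamma, tilde phi'_m, s), with tilde phi'_m as in (7.1.9)."""
    return (-1)**m * (orb(m, v) + 2*sum(orb(k, v) for k in range(m)))

for m in range(1, 5):
    for v in range(-2*m, 8):
        f = orb_tilde(m, v)
        value = sp.simplify(f.subs(s, 0))
        deriv = sp.simplify(sp.diff(f, s).subs(s, 0) / sp.log(q))
        if v % 2 == 0:
            # even v: compare with the unitary side, Proposition 7.2.1
            assert value == (1 if v == -2*m else 0)
        else:
            # odd v: the first derivative at s = 0 equals log q, as in part (ii)
            assert deriv == 1

print("transfer and derivative identities verified for m = 1,...,4")
```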
**Remark 7.3.3**.: Part (i) gives a direct proof in the case \(n=1\) of the FL for the full spherical Hecke algebra, cf. §3.6 and §3.7.
### Intersection numbers
In this subsection, we often write \(\mathcal{N}^{[0]}\), or simply \(\mathcal{N}\), for \(\mathcal{N}^{[0]}_{2}\). Note that \(\mathcal{N}\simeq\operatorname{Spf}(W[[t]])\) has only one point. Recall the Hecke operator \(\mathbb{T}^{A}_{\varphi_{2}}\colon K^{A}(\mathcal{N})\to K^{\mathcal{T}_{\leq 2}(A)}(\mathcal{N})\). It is defined as \(\mathbb{T}^{A}_{\varphi_{2}}=\mathbb{T}^{A}_{\leq 2}=\mathbb{T}^{\mathcal{N}^{[0,2]}(A),-}\circ\mathbb{T}^{A,+}\), where
\[\begin{split}\mathbb{T}^{A,+}&=(\mathcal{N}^{[0,2] })_{*}\colon K^{A}(\mathcal{N}^{[0]}_{2})\longrightarrow K^{\mathcal{N}^{[0,2 ]}(A)}(\mathcal{N}^{[2]}_{2}),\\ \mathbb{T}^{B,-}&=(\mathcal{N}^{[2,0]})_{*}\colon K ^{B}(\mathcal{N}^{[2]}_{2})\longrightarrow K^{\mathcal{N}^{[2,0]}(B)}( \mathcal{N}^{[0]}_{2}),\end{split} \tag{7.4.1}\]
cf. (5.5.2). We may identify \(\mathcal{N}^{[2]}_{2}\) with \(\mathcal{N}\) (cf. Remark 5.5.6), and then both Hecke operators in (7.4.1) are induced by the same geometric correspondence \(\mathcal{T}_{\Gamma_{0}}\to\mathcal{N}\times\mathcal{N}\) and its transpose, i.e., we can write
\[\mathcal{T}_{\leq 2}=\mathcal{T}_{\Gamma_{0}}\circ{}^{t}\mathcal{T}_{\Gamma_{0}}. \tag{7.4.2}\]
The notation \(\mathcal{T}_{\Gamma_{0}}\) is a reminder of the fact that \(\mathcal{T}_{\Gamma_{0}}\) is the analogue of the \(\Gamma_{0}(p)\)-covering of the modular curve in the present context, cf. Remark 5.5.6.
Since the projection morphisms from the composition \(\mathcal{T}_{\leq 2}\) of the geometric correspondences \(\mathcal{N}^{[0,2]}\) and \(\mathcal{N}^{[2,0]}\) to \(\mathcal{N}\) are finite and flat, we have \(\mathbb{T}_{\leq 2}=(\mathcal{T}_{\leq 2})_{*}\), cf. Lemma 9.2.1 (we are leaving out the support \(A\) from the notation). A similar reasoning shows that \(\mathbb{T}_{\varphi_{2}^{m}}=((\mathcal{T}_{\leq 2})^{m})_{*}\), where \((\mathcal{T}_{\leq 2})^{m}=\mathcal{T}_{\leq 2}\circ\cdots\circ\mathcal{T}_{ \leq 2}\) is the \(m\)-fold iterated composition of \(\mathcal{T}_{\leq 2}\) with itself (which again maps by finite flat morphisms to \(\mathcal{N}^{[0]}\)).
At this point we use the local intersection calculus developed in [21, §5] (the role of the regular local formal scheme \(M\) in loc. cit. is played here by \(\mathcal{N}\)).
We consider the natural morphism \((\mathcal{T}_{\leq 2})^{m}\to\mathcal{N}\times\mathcal{N}\). Since \((\mathcal{T}_{\leq 2})^{m}\) is finite and flat over \(\mathcal{N}\), this morphism is finite with support of codimension one. We associate to it an element of the group \(Z^{1}(\mathcal{N}\times\mathcal{N})\) of cycles of codimension one (namely \(\sum_{Z\in(\mathcal{N}\times\mathcal{N})^{(1)}}\ell_{\mathcal{O}_{\eta_{Z}}}( \mathcal{O}_{(\mathcal{T}_{\leq 2})^{m}})[Z]\in Z^{1}(\mathcal{N}\times\mathcal{N})\), cf. [21, Def. 5.1]). We use the same notation \((\mathcal{T}_{\leq 2})^{m}\) for this one-cycle. Since \(\mathcal{N}\times\mathcal{N}\) is regular, we may regard \((\mathcal{T}_{\leq 2})^{m}\) as an effective Cartier divisor on \(\mathcal{N}\times\mathcal{N}\). By the flatness of \((\mathcal{T}_{\leq 2})^{m}\) over \(\mathcal{N}\), it is in fact a relative Cartier divisor, i.e., it is at every point defined by one equation in \(W[[t,t^{\prime}]]\) which is neither a unit nor divisible by \(p\).
Both projection maps to \(\mathcal{N}\) are finite, hence \((\mathcal{T}_{\leq 2})^{m}\) is an element of the ring of correspondences \(\operatorname{Corr}(\mathcal{N})\), and from the definition of the ring structure on \(\operatorname{Corr}(\mathcal{N})\), it follows that \((\mathcal{T}_{\leq 2})^{m}\) is the \(m\)-th power of the correspondence \(\mathcal{T}_{\leq 2}\), cf. [21, Def. 5.6]. We obtain a homomorphism of \(\mathbb{Q}\)-algebras,
\[\mathcal{H}_{K}\longrightarrow\operatorname{Corr}(\mathcal{N}\times\mathcal{N })_{\mathbb{Q}},\quad\sum_{m}a_{m}\varphi_{2}^{m}\longmapsto\sum_{m}a_{m}( \mathcal{T}_{\leq 2})^{m}.\]
We denote the image of \(\varphi\) under this map by \(\mathcal{T}_{\varphi}\). Note that the unit element \(\varphi=\mathbf{1}\) is mapped to \(\mathcal{T}_{\mathbf{1}}=\Delta_{\mathcal{N}\times\mathcal{N}}\) (the diagonal in \(\mathcal{N}\times\mathcal{N}\)), which we denote by \(\mathcal{T}_{0}\). To simplify the notation, we write \(\mathcal{T}_{1}\) for \(\mathcal{T}_{\varphi_{2}}\) (not to be confused with the divisor associated to the unit element). Also, for \(\phi_{m}=\mathbf{1}_{K\varpi^{(m,-m)}K}\), we use the notation \(\mathcal{T}_{m}^{\circ}\) for \(\mathcal{T}_{\phi_{m}}\).
**Lemma 7.4.1**.: _There is the relation of Cartier divisors on \(\mathcal{N}\times\mathcal{N}\),_
\[\mathcal{T}_{1}^{\circ}=\mathcal{T}_{1}-(q+1)\mathcal{T}_{0},\]
_and the recursive relation_
\[\mathcal{T}_{\varphi_{2}*\phi_{m}}=\mathcal{T}_{1}\circ\mathcal{T}_{m}^{\circ} =\mathcal{T}_{m+1}^{\circ}+2q\mathcal{T}_{m}^{\circ}+q^{2}\mathcal{T}_{m-1}^{ \circ},\quad m\geq 1.\]
Proof.: This follows from the relations in the Hecke algebra:
\[\phi_{1}=\varphi_{2}-(q+1),\quad\varphi_{2}\phi_{m}=\phi_{m+1}+2q\phi_{m}+q^{2 }\phi_{m-1},m\geq 1.\]
The first identity is mentioned below (7.1.5). The second identity follows from this, after expressing \(\phi_{m}\) in terms of \(f_{m}\) as in (7.1.5) and applying the Satake isomorphism, cf. (7.1.4).
Recall the action of \(\operatorname{Corr}(\mathcal{N})\) on \(Z^{1}(\mathcal{N})\), cf. [21, Def. 5.7]
\[\mathcal{Z}\longmapsto\mathcal{T}*\mathcal{Z}. \tag{7.4.3}\]
Hence we obtain an action of \(\mathcal{H}_{K}\) on \(Z^{1}(\mathcal{N})_{\mathbb{Q}}\).
Recall the special vector \(u_{0}\in\mathbb{V}_{2}\), cf. §6.1. It is of valuation zero (i.e., its length is a unit), and there is the natural identification \(\mathcal{Z}(u_{0})\simeq\mathcal{T}_{0}=\Delta\) for the associated special divisor on \(\mathcal{N}\), cf. [17]. For a non-zero \(u\in\mathbb{V}_{2}\) we let \(\mathcal{Z}(u)^{\circ}=\mathcal{Z}(u)-\mathcal{Z}(\varpi^{-1}u)\) denote the difference divisor, cf. [32]. It is an effective Cartier divisor on \(\mathcal{N}\). Note that \(\mathcal{Z}(u_{0})^{\circ}=\mathcal{Z}(u_{0})\).
**Proposition 7.4.2**.: _Let \(m\geq 0\). Then there is an equality of Cartier divisors on \(\mathcal{N}\),_
\[\mathcal{T}_{m}^{\circ}*\Delta=\mathcal{Z}(\varpi^{m}u_{0})^{\circ}.\]
_(For \(m\geq 1\), both sides have degree \(q^{2m-1}(q+1)\) over \(\operatorname{Spf}O_{\tilde{F}}\).)_
Proof.: By Lemma 7.4.1, this identity is equivalent to the conjunction of the following identities.
1. \(\mathcal{T}_{1}*\mathcal{Z}(u_{0})=\mathcal{Z}(\varpi u_{0})^{\circ}+(q+1) \mathcal{Z}(u_{0})\).
2. \(\mathcal{T}_{1}*\mathcal{Z}(\varpi^{m}u_{0})^{\circ}=\mathcal{Z}(\varpi^{m+1} u_{0})^{\circ}+2q\mathcal{Z}(\varpi^{m}u_{0})^{\circ}+q^{2}\mathcal{Z}(\varpi^{m-1}u_{0})^{ \circ},\quad m\geq 1\).
Let us prove (1). Note that \(\mathcal{T}_{\Gamma_{0}}\cap\pi_{1}^{-1}(\mathcal{Z}(u_{0}))\subseteq\mathcal{ N}\times\mathcal{N}\) is the locus of isogenies \(X^{(0)}\to Y\) lifting a given isogeny \(\overline{\mathbb{E}}\times\mathbb{E}\to\overline{\mathbb{E}}\times\mathbb{E}\) of degree \(q^{2}\). (Recall that there is only one such isogeny up to isomorphism, cf. Remark 5.5.6.) Here \(X^{(0)}=O_{F}\otimes_{O_{F_{0}}}\mathcal{F}_{0}\) is written in terms of the Serre construction, where \(\mathcal{F}_{0}=\mathcal{E}\) is the canonical lift, i.e., the quasi-canonical lift of level \(0\).
Let \(\mathbb{V}=\langle u_{0}\rangle\oplus\langle u_{1}\rangle\) with \(\operatorname{val}(u_{1})=1\). Then \(\mathcal{Z}(u_{1})\) can be identified with the quasi-canonical divisor of level \(1\), cf. [17, Def. 6.6]. Consider the \(O_{F}\)-module \(X^{(1)}=O_{F}\otimes_{O_{F_{0}}}\mathcal{F}_{1}\) over \(\mathcal{Z}(u_{1})\), with \(\mathcal{F}_{1}\) denoting the quasi-canonical lift of level \(1\) of \(\mathbb{E}\) in the sense of Gross, cf. [17, §6]. From the isogeny of quasi-canonical lifts \(\mathcal{F}_{0}\to\mathcal{F}_{1}\), we obtain the isogeny \(X^{(0)}\to X^{(1)}\) over \(\mathcal{Z}(u_{1})\). Hence we obtain a closed embedding \(\iota\colon\mathcal{Z}(u_{1})\subset\mathcal{T}_{\Gamma_{0}}\cap\pi_{1}^{-1}(\mathcal{Z}(u_{0}))\). Since \(\pi_{2}\circ\iota\) is the natural embedding of \(\mathcal{Z}(u_{1})\) in \(\mathcal{N}\), we obtain an inequality of Cartier
divisors, \(\mathcal{Z}(u_{1})\leq[\mathcal{T}_{\Gamma_{0}}]*\mathcal{Z}(u_{0})\). Since both these Cartier divisors map with degree \(q+1\) to \(\operatorname{Spf}O_{\breve{F}}\), they are equal, i.e.,
\[\mathcal{T}_{\Gamma_{0}}*\mathcal{Z}(u_{0})=\mathcal{Z}(u_{1}). \tag{7.4.4}\]
Similarly, \(\mathcal{T}_{\Gamma_{0}}\cap\pi_{1}^{-1}(\mathcal{Z}(u_{1}))\) parametrizes isogenies \(X^{(1)}\to Y\) lifting the given isogeny \(\overline{\mathbb{E}}\times\mathbb{E}\to\overline{\mathbb{E}}\times\mathbb{E}\). The universal object over \(\mathcal{Z}(\varpi u_{0})^{\circ}\) provides such an isogeny \(X^{(1)}\to X^{(2)}\) over \(S=\operatorname{Spf}O_{\breve{F},2}\). Here for any \(n\), \(O_{\breve{F},n}\) denotes the ring of integers in the abelian extension of \(\breve{F}\) corresponding to the subgroup \((O_{F_{0}}+\varpi^{n}O_{F})^{\times}\) of the group of units. Hence we obtain a closed embedding \(\mathcal{Z}(\varpi u_{0})^{\circ}\subset\mathcal{T}_{\Gamma_{0}}*\mathcal{Z}( u_{1})\).
On the other hand, consider the natural isogeny \(X^{(0)}_{\operatorname{Spf}O_{\breve{F},1}}\to X^{(1)}\). It induces a unique isogeny \(X^{(1)}\to X^{(0)}_{\operatorname{Spf}O_{\breve{F},1}}\) such that the composition is equal to \(\varpi\colon X^{(1)}\to X^{(1)}\). We obtain a closed embedding of \(\operatorname{Spf}O_{\breve{F},1}\) into \(\mathcal{T}_{\Gamma_{0}}\cap\pi_{1}^{-1}(\mathcal{Z}(u_{1}))\), which is mapped under \(\pi_{2}\) onto \(\mathcal{Z}(u_{0})\), with degree \([O_{\breve{F},1}:O_{\breve{F}}]=q+1\). Altogether we obtain an inequality of Cartier divisors, \(\mathcal{Z}(\varpi u_{0})^{\circ}+(q+1)\mathcal{Z}(u_{0})\leq\mathcal{T}_{ \Gamma_{0}}*\mathcal{Z}(u_{1})\). Since both divisors have degree \((q+1)^{2}\) over \(\operatorname{Spf}O_{\breve{F}}\), we obtain the equality of divisors
\[\mathcal{T}_{\Gamma_{0}}*\mathcal{Z}(u_{1})=\mathcal{Z}(\varpi u_{0})^{\circ }+(q+1)\mathcal{Z}(u_{0}). \tag{7.4.5}\]
Taking both identities (7.4.4) and (7.4.5) together, we obtain
\[\mathcal{T}_{1}*\mathcal{Z}(u_{0})=\mathcal{T}_{\Gamma_{0}}*\mathcal{Z}(u_{1 })=\mathcal{Z}(\varpi u_{0})^{\circ}+(q+1)\mathcal{Z}(u_{0}),\]
as desired.
To prove (2), we see by the same argument that
\[\mathcal{T}_{\Gamma_{0}}*\mathcal{Z}(\varpi^{m}u_{0})^{\circ}=\mathcal{Z}( \varpi^{m}u_{1})^{\circ}+q\mathcal{Z}(\varpi^{m-1}u_{1})^{\circ}\]
and
\[\mathcal{T}_{\Gamma_{0}}*\mathcal{Z}(\varpi^{m}u_{1})^{\circ}=\mathcal{Z}( \varpi^{m+1}u_{0})^{\circ}+q\mathcal{Z}(\varpi^{m}u_{0})^{\circ}.\]
Thus
\[\mathcal{T}_{1}*\mathcal{Z}(\varpi^{m}u_{0})^{\circ}=\mathcal{Z}(\varpi^{m+1 }u_{0})^{\circ}+2q\mathcal{Z}(\varpi^{m}u_{0})^{\circ}+q^{2}\mathcal{Z}(\varpi ^{m-1}u_{0})^{\circ},\]
as desired.
Let \(\mathcal{Z}\) and \(\mathcal{Z}^{\prime}\) be formal schemes finite over a closed formal subscheme of codimension one in \(\mathcal{N}\), with classes \([\mathcal{Z}]\in K^{\mathcal{Z}}(\mathcal{N})\), resp. \([\mathcal{Z}^{\prime}]\in K^{\mathcal{Z}^{\prime}}(\mathcal{N})\). Let \(\mathcal{T}\to\mathcal{N}\times\mathcal{N}\) be a correspondence. Assume that \(\mathcal{Z}\cap\operatorname{supp}(\mathcal{T}(\mathcal{Z}^{\prime}))\) is an artinian scheme (with support in the unique point of \(\mathcal{N}_{\operatorname{red}}\)). Then, by [21, Cor. 5.5], we have
\[\langle[\mathcal{Z}],\mathcal{T}_{*}([\mathcal{Z}^{\prime}])\rangle_{\mathcal{ N}}=\langle\mathcal{Z},\mathcal{T}*\mathcal{Z}^{\prime}\rangle_{Z^{1}( \mathcal{N})}, \tag{7.4.6}\]
Here on the LHS appears the intersection product in K-theory (comp. (6.1.2)), and on the RHS appears the intersection number of properly intersecting cycles of codimension one on \(\mathcal{N}\), cf. [21, Rem. 5.3].
**Corollary 7.4.3**.: _Let \(m\geq 1\). Then for all \(g\in\mathrm{U}(\mathbb{V}_{2})\) regular semisimple,_
\[\mathrm{Int}(g,\phi_{m})=\langle g\Delta,(\mathcal{T}_{m}^{\circ})_{*}(\Delta) \rangle_{\mathcal{N}}=1.\]
Proof.: By \(\Delta=\mathcal{Z}(u_{0})\) and Proposition 7.4.2, we have by (7.4.6)
\[\mathrm{Int}(g,\phi_{m}) =\langle g\Delta,\mathcal{T}_{m}^{\circ}*\Delta\rangle_{Z^{1}( \mathcal{N})}=\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}(\varpi^{m}u_{0})^{ \circ}\rangle_{Z^{1}(\mathcal{N})}\] \[=\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}(\varpi^{m}u_{0}) \rangle_{Z^{1}(\mathcal{N})}-\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}( \varpi^{m-1}u_{0})\rangle_{Z^{1}(\mathcal{N})}.\]
It remains to compute \(\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}(\varpi^{m}u_{0})\rangle_{Z^{1}(\mathcal{N})}\) for all \(m\geq 0\). We can use [17]. The fundamental matrix of the lattice \(\langle g\cdot u_{0},\varpi^{m}u_{0}\rangle\) for the obvious basis is
\[A=\begin{pmatrix}1&\varpi^{m}(g\cdot u_{0},u_{0})\\ \varpi^{m}(u_{0},g\cdot u_{0})&\varpi^{2m}\end{pmatrix}\]
and of the lattice \(\langle g\cdot u_{0},\varpi^{m-1}u_{0}\rangle\) is
\[A^{\prime}=\begin{pmatrix}1&\varpi^{m-1}(g\cdot u_{0},u_{0})\\ \varpi^{m-1}(u_{0},g\cdot u_{0})&\varpi^{2(m-1)}\end{pmatrix}.\]
Note that the valuation of \(g\cdot u_{0}\) is zero and the space \(\mathbb{V}\) is non-split (hence \(\mathrm{val}(\det A)\) is odd). It follows that \((g\cdot u_{0},u_{0})\) is a unit, and the fundamental invariants of \(A\) (resp. \(A^{\prime}\)) are \((0,\mathrm{val}(\det A))\) (resp. \((0,\mathrm{val}(\det A)-2)\)). Then by [17] we obtain
\[\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}(\varpi^{m}u_{0})\rangle_{Z^{1}(\mathcal{N})}=\frac{\mathrm{val}(\det(A))+1}{2}\]
and
\[\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}(\varpi^{m-1}u_{0})\rangle_{Z^{1}(\mathcal{N})}=\frac{\mathrm{val}(\det(A))-1}{2}.\]
The assertion follows.
**Remark 7.4.4**.: Note that \(\mathbb{T}_{\leq 2}\) is the composition of intertwining Hecke correspondences \(\mathbb{T}_{2}^{+}\) and \(\mathbb{T}_{2}^{-}\) which correspond to elements in the Iwahori Hecke algebra. One may try to generalize the result above from the spherical Hecke algebra to the Iwahori Hecke algebra; we have not done so.
### Comparison
**Theorem 7.5.1**.: _Conjecture 6.1.4 holds when \(n=1\), namely AFL holds for the full Hecke algebra when \(n=1\)._
Proof.: When \(n=1\), the Hecke algebra \(\mathcal{H}_{K^{\flat}}\) is trivial. Hence it suffices to show the inhomogeneous AFL conjecture 6.2.1. We show the identity for \(f\) running through the basis \(\phi_{m}\) of \(\mathcal{H}_{K}\).
By Lemma 7.1.1 (iii), we have \((\mathrm{BC}_{S}^{\eta})^{-1}(\phi_{m})=\tilde{\varphi}_{m}^{\prime}\in \mathcal{H}_{K^{\prime}_{S}}\) and hence we need to show that for matching \(g\) and \(\gamma\),
\[\mathrm{Int}(g,\phi_{m})\cdot\log q=-\omega(\gamma)\,\partial\mathrm{Orb}( \gamma,\tilde{\varphi}_{m}^{\prime}).\]
When \(m\geq 1\), this follows from directly comparing Proposition 7.3.2 (for orbital integrals) and Corollary 7.4.3 (for intersection numbers). More precisely, without loss of generality, we let \(\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in S_{2}(F_{0})_{\text{rs}}\) with \(v(c)=0\). Then it matches an element in \(U(\mathbb{V}_{2})_{\text{rs}}\) if and only if \(v(bc)=v(1-a\overline{a})\) is odd (in particular \(v(a)\geq 0\)). Recall from [29, §2.4] that the transfer factor is defined as
\[\omega(\gamma)=(-1)^{v(\det(e,\gamma e))},\]
where \(e=(0,1)^{t}\) and hence \(\det(e,\gamma e)=\det\begin{pmatrix}0&b\\ 1&d\end{pmatrix}=-b\). Therefore, when \(v(b)=v(1-a\bar{a})\) is odd, the transfer factor is \(\omega(\gamma)=(-1)^{v(b)}=-1\) and hence, by Proposition 7.3.2, (ii),
\[-\omega(\gamma)\,\partial\text{Orb}(\gamma,\tilde{\varphi}_{m}^{\prime})= \log q.\]
On the other hand, by Corollary 7.4.3 we have
\[\text{Int}(g,\phi_{m})=1\]
for all \(g\in U(\mathbb{V}_{2})_{\text{rs}}\), which proves the desired identity.
The case \(m=0\) is the AFL (for the unit element in the Hecke algebra) proved in [38]. We can also see this directly by using the facts proved in this section. The proof of Corollary 7.4.3 shows
\[\text{Int}(g,\phi_{0})=\langle g\Delta,\Delta\rangle_{Z^{1}( \mathcal{N})} =\langle\mathcal{Z}(g\cdot u_{0}),\mathcal{Z}(u_{0})\rangle_{Z^{1}( \mathcal{N})}\] \[=\frac{\text{val}(\det(A))+1}{2},\]
where
\[A=\begin{pmatrix}1&(g\cdot u_{0},u_{0})\\ (u_{0},g\cdot u_{0})&1\end{pmatrix}.\]
Note that \(v(\det(A))=v(1-a\overline{a})\), in terms of the matrix form \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in U(\mathbb{V}_{2})_{\text{rs}}\) using the basis \(\{u_{0},u_{1}\}\).
On the other hand, by Proposition 7.3.2 (ii), for \(\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in S_{2}(F_{0})_{\text{rs}}\) we have
\[\partial\text{Orb}(\gamma,\tilde{\varphi}_{0}^{\prime})=\log q\begin{cases} \frac{v(1-a\overline{a})+1}{2},&v(1-a\overline{a})\geq 0\\ 0,&v(1-a\overline{a})<0.\end{cases}\]
Matching \(g\) and \(\gamma\) have the same \(a\). The computation of the transfer factor is the same as in the case \(m\geq 1\). It follows that
\[\text{Int}(g,\phi_{0})\cdot\log q=-\omega(\gamma)\,\partial\text{Orb}(\gamma,\tilde{\varphi}_{0}^{\prime})=\frac{\text{val}(1-a\overline{a})+1}{2}\cdot\log q.\]
The proof is complete.
**Remark 7.5.2**.: The \(\mathrm{RZ}\) space \(\mathcal{N}_{2}\) is isomorphic to the Lubin-Tate deformation space of height two, cf. Remark 5.5.6. It is therefore natural to expect that the AFL for the full Hecke algebra is related to the linear AFL for the Hecke algebra of \(\mathrm{GL}_{2}\) considered by Li [20]. We do not see an immediate passage between the two statements.
## 8. Base-point freeness
In this section, we comment on how much information on an element of the Hecke algebra is contained in either side of the AFL, i.e., in the functionals on the Hecke algebra defined by the derivative of orbital integrals, resp. by the intersection numbers. It turns out that there is a surprising contrast between the functional defined by orbital integrals and that defined by the derivative of orbital integrals.
### Orbital integrals
For orbital integrals, there is the following fact.
**Proposition 8.1.1**.: _The map_
\[\mathrm{Orb}:\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\longrightarrow C ^{\infty}(G^{\prime}_{\mathrm{rs}})\]
_is injective._
Proof.: We use a theorem of Beuzart-Plessis [1, Cor. 4.5.1]: _For any \(f^{\prime}\in C^{\infty}_{c}(G^{\prime})\), the orbital integral \(\mathrm{Orb}(-,f^{\prime})\) vanishes at all regular semisimple elements if and only if \(I_{\Pi}(f^{\prime})=0\) for all \(\Pi\in\Phi_{\mathrm{temp}}(G^{\prime}(F_{0}))\)._ Here \(\Phi_{\mathrm{temp}}(G^{\prime}(F_{0}))\) denotes the set of irreducible tempered admissible representations of \(G^{\prime}(F_{0})\), and \(I_{\Pi}(f^{\prime})\) is the local relative character associated to the local Rankin-Selberg period integral and the local Flicker-Rallis period integral.
Let \(f^{\prime}\in\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\) with identically vanishing orbital integrals. Then by the above quoted theorem, \(I_{\Pi}(f^{\prime})=0\) for all \(\Pi\) as above. In particular, \(I_{\Pi}(f^{\prime})=0\) for all \(\Pi\in\Phi_{\mathrm{temp}}^{\mathrm{ur}}(G^{\prime}(F_{0}))\), the set of tempered unramified representations. However, for \(\Pi\in\Phi_{\mathrm{temp}}^{\mathrm{ur}}(G^{\prime}(F_{0}))\), we have \(I_{\Pi}(f^{\prime})=\lambda_{\Pi}(f^{\prime})I_{\Pi}(\mathbf{1}_{K^{\prime}})\). Here \(\lambda_{\Pi}:\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\to\mathbb{C}\) is the algebra homomorphism associated to \(\Pi\) or, equivalently, the evaluation of \(\mathrm{Sat}(f^{\prime})\) at the Satake parameter of \(\Pi\). Note that \(I_{\Pi}(\mathbf{1}_{K^{\prime}})\neq 0\) for \(\Pi\in\Phi_{\mathrm{temp}}^{\mathrm{ur}}(G^{\prime}(F_{0}))\). Hence \(\lambda_{\Pi}(f^{\prime})=0\) for all \(\Pi\in\Phi_{\mathrm{temp}}^{\mathrm{ur}}(G^{\prime}(F_{0}))\). It follows that \(f^{\prime}=0\) by the Satake isomorphism.
### Derivative of orbital integrals
Let \(G^{\prime}_{\mathrm{rs},W_{1}}\) denote the open subset of \(G^{\prime}_{\mathrm{rs}}\) consisting of regular semisimple elements matching with elements in the non-quasi-split unitary group \(G_{W_{1}}\).
**Conjecture 8.2.1**.: _The map_
\[\partial\mathrm{Orb}:\mathcal{H}_{K^{\prime\flat}\times K^{\prime}} \longrightarrow C^{\infty}(G^{\prime}_{\mathrm{rs},W_{1}})\]
_has a large kernel, in the sense that the kernel generates the whole ring \(\mathcal{H}_{K^{\prime\flat}\times K^{\prime}}\) as an ideal (note that this kernel is only a vector subspace rather than an ideal). Similarly, the map defined by the intersection numbers, \(\mathrm{Int}:\mathcal{H}_{K^{\flat}\times K}\to C^{\infty}(G^{\prime}_{\mathrm{rs},W_{1}})\), has a large kernel._
**Remark 8.2.2**.: A weaker conjecture would be that "the kernel is not contained in any maximal ideal of \(\mathcal{H}\) corresponding to a tempered representation". Here a \(\mathbb{C}\)-point \(\alpha\) of \(\operatorname{Spec}\mathcal{H}_{K^{\prime}}\) is called tempered if, in terms of the coordinates in §3.4, we have \(|\alpha_{i}|=1\) for all \(i\). There might be some other ways to formulate the smallness of the image. On the other hand, the image should not be too small, although we do not have a precise conjecture. In general it is unclear to us how to characterize the image.
**Theorem 8.2.3**.: _Conjecture 8.2.1 holds when \(n=1\)._
Proof.: When \(n=1\), the factors \(\mathcal{H}_{K^{\prime\flat}}\) and \(\mathcal{H}_{K^{\flat}}\) are trivial. We note that the orbital integral map \(\partial\mathrm{Orb}\) factors through the base change homomorphism \(\mathrm{BC}\). We now identify the set of orbits in \(G^{\prime}_{\mathrm{rs}}\) with the set of orbits in \(S_{\mathrm{rs}}\). The induced map
\[\partial\mathrm{Orb}_{G}:\mathcal{H}_{K}\longrightarrow C^{\infty}(G^{\prime} _{\mathrm{rs},W_{1}}) \tag{8.2.1}\]
can be written in terms of
\[\partial\mathrm{Orb}_{G}(-,\phi):=\omega_{S}(-)\,\partial\mathrm{Orb}(-,( \mathrm{BC}_{S}^{\eta})^{-1}(\phi)),\quad\phi\in\mathcal{H}_{K}.\]
Then the assertion is equivalent to the statement that the kernel of \(\partial\mathrm{Orb}_{G}\) generates the whole ring \(\mathcal{H}_{K}\) as an ideal. We use the explicit results in Proposition 7.3.2, which shows that the image is exactly two dimensional, spanned by the image of \(\phi_{0}\) and any one of the \(\phi_{m},\,m\geq 1\). In particular, the kernel of \(\partial\mathrm{Orb}_{G}\) is spanned by the set \(\{\phi_{m}-\phi_{1}\mid m\geq 2\}\). Equivalently it remains to show that the elements of this set have no common zero. From (7.1.4), we have for \(m\geq 2\)
\[\operatorname{Sat}(\phi_{m}-\phi_{1})=q^{m}\sum_{i=-m}^{m}X^{i}-q^{m-1}\sum_{ i=-(m-1)}^{m-1}X^{i}-q(X+1+X^{-1})+1.\]
It is straightforward to check that these Laurent polynomials have no common zero in \(X\in\mathbb{C}^{\times}\). Indeed, already these Laurent polynomials for \(m=2\) and \(m=3\) have no common zero. To see this, we can write these Laurent polynomials as polynomials \(P_{2}\), resp. \(P_{3}\), of degree \(2\), resp. \(3\), in \(\mathbb{C}[X_{1}]\), where \(X_{1}=X+X^{-1}\). Long division shows that these polynomials are coprime, hence there are polynomials \(R_{2},R_{3}\in\mathbb{C}[X_{1}]\) with \(P_{2}R_{2}+P_{3}R_{3}=1\). Rewriting this identity in terms of \(X^{\pm 1}\) shows that the Laurent polynomials for \(m=2,3\) have no common zero. The proof is complete.
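The coprimality assertion can also be verified by machine. The following Python/SymPy sketch (illustrative only) rewrites \(\operatorname{Sat}(\phi_{2}-\phi_{1})\) and \(\operatorname{Sat}(\phi_{3}-\phi_{1})\) as polynomials in \(X_{1}=X+X^{-1}\) and checks that they have no common zero for a few sample odd prime powers \(q\).

```python
import sympy as sp

q, A = sp.symbols('q A')   # A stands for X_1 = X + X^{-1}

def sat_diff(m):
    """Sat(phi_m - phi_1) as a polynomial in A = X + X^{-1}, built from (7.1.4) and (7.1.5)."""
    p = [sp.Integer(2), A]                 # p_j = X^j + X^{-j} satisfies p_j = A p_{j-1} - p_{j-2}
    for j in range(2, m + 1):
        p.append(sp.expand(A*p[j - 1] - p[j - 2]))
    P = lambda k: 1 + sum(p[j] for j in range(1, k + 1))   # = sum_{i=-k}^{k} X^i
    phi = lambda k: q**k*P(k) - q**(k - 1)*P(k - 1)        # = Sat(phi_k), k >= 1
    return sp.expand(phi(m) - phi(1))

P2, P3 = sat_diff(2), sat_diff(3)
for qval in [3, 5, 7, 9, 11, 13]:          # sample odd prime powers
    g = sp.gcd(P2.subs(q, qval), P3.subs(q, qval))
    assert not g.has(A)                    # constant gcd: no common zero in X_1, hence none in X

print("Sat(phi_2 - phi_1) and Sat(phi_3 - phi_1) have no common zero for the sampled q")
```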
**Remark 8.2.4**.: A similar result in the setting of the linear AFL (for \(\mathrm{GL}_{2}\) rather than \(\mathrm{U}_{2}\)) could be deduced from the result of Q. Li [20, Prop. 7.6].
## 9. Appendix: Correspondences for formal schemes and maps on K-groups
In this appendix, we explain the calculus of correspondences that we use, comp. also [40, App. B]. Let \(\breve{O}\) be a strictly henselian DVR. We consider locally noetherian formal schemes, locally formally of finite type over \(\breve{O}\). For such a formal scheme \(\mathcal{X}\) and a closed formal subscheme \(A\) of \(\mathcal{X}\), we denote by \(K^{A}(\mathcal{X})\) the Grothendieck group of finite complexes
of locally free modules over the structure sheaf which are acyclic outside \(A\), comp. [11]. We similarly have the Grothendieck group \(K^{\prime}(A)\) formed by finite complexes of coherent modules on \(A\).
### Induced map on K-groups
Let \(\mathcal{X}\) and \(\mathcal{Y}\) be formal \(\check{O}\)-schemes as above. Let \(\mathcal{T}\to\mathcal{X}\times_{\check{O}}\mathcal{Y}\) be a _geometric correspondence_, i.e., a formal scheme as specified above, with a morphism of formal schemes as indicated. We also simply write \(\mathcal{T}\) for this correspondence and denote by \(p_{\mathcal{X}}\), resp. \(p_{\mathcal{Y}}\), the maps to \(\mathcal{X}\), resp. \(\mathcal{Y}\). We assume that \(\mathcal{Y}\) is regular and that \(p_{\mathcal{Y}}\) is a proper morphism. Then \(\mathcal{T}\) defines a map on K-groups with supports,
\[\mathcal{T}_{*}\colon K^{A}(\mathcal{X})\longrightarrow K^{\mathcal{T}(A)}( \mathcal{Y}),\quad x\longmapsto Rp_{\mathcal{Y},*}(p_{\mathcal{X}}^{*}(x)). \tag{9.1.1}\]
Here \(\mathcal{T}(A)=p_{\mathcal{Y}}(p_{\mathcal{X}}^{-1}(A))\). Let us explain the construction of the map. The map is the composition of the maps
\[p_{\mathcal{X}}^{*}\colon K^{A}(\mathcal{X})\longrightarrow K^{p_{\mathcal{X}}^{-1}(A)}(\mathcal{T}),\quad p_{\mathcal{Y},*}\colon K^{p_{\mathcal{X}}^{-1}(A)}(\mathcal{T})\longrightarrow K^{\mathcal{T}(A)}(\mathcal{Y}). \tag{9.1.2}\]
For the first map, let \(x\in K^{A}(\mathcal{X})\) be represented by a finite complex of locally free \(\mathcal{O}_{\mathcal{X}}\)-modules acyclic outside \(A\). Then its base change to \(\mathcal{T}\) is a finite complex of locally free \(\mathcal{O}_{\mathcal{T}}\)-modules which is acyclic outside \(p_{\mathcal{X}}^{-1}(A)\), cf. [11, §1.5]. The image \(p_{\mathcal{X}}^{*}(x)\) is defined to be the class of this complex. The second map is the composition of three maps. First, the natural map \(K^{p_{\mathcal{X}}^{-1}(A)}(\mathcal{T})\to K^{\prime}(p_{\mathcal{X}}^{-1}(A))\), sending a complex \(C\) which is acyclic outside a closed subset to the alternating sum of the classes in \(K^{\prime}\) of the cohomology sheaves of \(C\). Second, the full direct image map \(K^{\prime}(p_{\mathcal{X}}^{-1}(A))\to K^{\prime}(\mathcal{T}(A))\), defined by the properness of \(p_{\mathcal{Y}}\). Third, the identification \(K^{\prime}(\mathcal{T}(A))=K^{\mathcal{T}(A)}(\mathcal{Y})\) by the regularity of \(\mathcal{Y}\), cf. [11, Lem. 1.9].
### Composition
Recall the composition of geometric correspondences. Let \(\mathcal{T}\to\mathcal{X}\times_{\check{O}}\mathcal{Y}\) and \(\mathcal{S}\to\mathcal{Y}\times_{\check{O}}\mathcal{Z}\) be geometric correspondences as above. The composition \(\mathcal{U}=\mathcal{S}\circ\mathcal{T}\) of these correspondences \(\mathcal{T}\) and \(\mathcal{S}\) is defined by the cartesian square \(\mathcal{U}=\mathcal{T}\times_{\mathcal{Y}}\mathcal{S}\), formed with respect to \(p_{\mathcal{Y}}\colon\mathcal{T}\to\mathcal{Y}\) and \(q_{\mathcal{Y}}\colon\mathcal{S}\to\mathcal{Y}\), with base-changed projections \(q^{\prime}_{\mathcal{Y}}\colon\mathcal{U}\to\mathcal{T}\) and \(p^{\prime}_{\mathcal{Y}}\colon\mathcal{U}\to\mathcal{S}\).
In other words, the projections for \(\mathcal{U}\) are \(r_{\mathcal{X}}=p_{\mathcal{X}}\circ q_{\mathcal{Y}}^{\prime}\) and \(r_{\mathcal{Z}}=q_{\mathcal{Z}}\circ p_{\mathcal{Y}}^{\prime}\).
Assume now that \(\mathcal{Y}\) and \(\mathcal{Z}\) are regular and the morphisms \(p_{\mathcal{Y}}\) and \(q_{\mathcal{Z}}\) proper, so that also \(r_{\mathcal{Z}}\) is proper. Let \(A\) be a closed formal subscheme of \(\mathcal{X}\). Then the three maps are defined,
\[\mathcal{T}_{*}^{A}\colon K^{A}(\mathcal{X})\longrightarrow K^{\mathcal{T}(A )}(\mathcal{Y}),\quad\mathcal{S}_{*}^{\mathcal{T}(A)}\colon K^{\mathcal{T}(A)} (\mathcal{Y})\longrightarrow K^{\mathcal{U}(A)}(\mathcal{Z}),\quad\mathcal{U }_{*}^{A}\colon K^{A}(\mathcal{X})\longrightarrow K^{\mathcal{U}(A)}( \mathcal{Z}).\]
**Lemma 9.2.1**.: _Assume that the maps \(p_{\mathcal{Y}}\) and \(q_{\mathcal{Y}}\) are tor-independent, i.e., \(\mathcal{T}or_{j}^{\mathcal{O}_{\mathcal{Y}}}(\mathcal{O}_{\mathcal{T}}, \mathcal{O}_{\mathcal{S}})=0,\forall j>0\), cf. [31, Def. 36.22.2]. Then_
\[\mathcal{U}_{*}^{A}=\mathcal{S}_{*}^{\mathcal{T}(A)}\circ\mathcal{T}_{*}^{A}\colon K^{A}(\mathcal{X})\longrightarrow K^{\mathcal{U}(A)}(\mathcal{Z}).\]
Proof.: We need to show that
\[Rq_{\mathcal{Z},*}\big{(}Rp^{\prime}_{\mathcal{Y},*}(q^{\prime*}_{\mathcal{Y}}(p^{*}_{\mathcal{X}}(x)))\big{)}=Rq_{\mathcal{Z},*}\big{(}q^{*}_{\mathcal{Y}}(Rp_{\mathcal{Y},*}(p^{*}_{\mathcal{X}}(x)))\big{)}.\]
Here \(q^{*}_{\mathcal{Y}}(Rp_{\mathcal{Y},*}(p^{*}_{\mathcal{X}}(x)))\) makes sense as an element in \(K(\mathcal{S})\) since, by the regularity of \(\mathcal{Y}\), the complex \(Rp_{\mathcal{Y},*}(p^{*}_{\mathcal{X}}(x))\) can be interpreted as an element in \(K(\mathcal{Y})\). It therefore suffices to prove the equality of elements in \(K^{\prime}(\mathcal{S})\),
\[Rp^{\prime}_{\mathcal{Y},*}\big{(}q^{\prime*}_{\mathcal{Y}}(p^{*}_{\mathcal{X}}(x))\big{)}=q^{*}_{\mathcal{Y}}\big{(}Rp_{\mathcal{Y},*}(p^{*}_{\mathcal{X}}(x))\big{)}.\]
This follows from base change for the cartesian square, again representing \(Rp_{\mathcal{Y},*}(p^{*}_{\mathcal{X}}(x))\) by a finite complex of locally free \(\mathcal{O}_{\mathcal{Y}}\)-modules, cf. [31, Lem. 36.22.5].
**Remark 9.2.2**.: We use this lemma only in the case when one of the two morphisms \(p_{\mathcal{Y}}\) and \(q_{\mathcal{Y}}\) is flat. In general, the identity \(\mathcal{U}_{*}^{A}=\mathcal{S}_{*}^{\mathcal{T}(A)}\circ\mathcal{T}_{*}^{A}\) does not hold. However, it does always hold in the context of _derived formal schemes_. Indeed, in this context, the base change formula used above always holds, comp. [10, part III, ch. 3, Prop. 2.2.2].
|
2307.15102 | Best Ulam constants for two-dimensional non-autonomous linear
differential systems | This study deals with the Ulam stability of non-autonomous linear
differential systems without assuming the condition that they admit an
exponential dichotomy. In particular, the best (minimal) Ulam constants for
two-dimensional non-autonomous linear differential systems with generalized
Jordan normal forms are derived. The obtained results are applicable not only
to systems with solutions that exist globally on $(-\infty,\infty)$, but also
to systems with solutions that blow up in finite time. New results are included
even for constant coefficients. A wealth of examples are presented, and
approximations of node, saddle, and focus are proposed. In addition, this is
the first study to derive the best Ulam constants for non-autonomous systems
other than periodic systems. | Douglas R. Anderson, Masakazu Onitsuka, Donal O'Regan | 2023-07-27T17:05:47Z | http://arxiv.org/abs/2307.15102v1 | # Best Ulam constants for two-dimensional nonautonomous linear differential systems
###### Abstract.
This study deals with the Ulam stability of nonautonomous linear differential systems without assuming the condition that they admit an exponential dichotomy. In particular, the best (minimal) Ulam constants for two-dimensional nonautonomous linear differential systems with generalized Jordan normal forms are derived. The obtained results are applicable not only to systems with solutions that exist globally on \((-\infty,\infty)\), but also to systems with solutions that blow up in finite time. New results are included even for constant coefficients. A wealth of examples are presented, and approximations of node, saddle, and focus are proposed. In addition, this is the first study to derive the best Ulam constants for nonautonomous systems other than periodic systems.
Key words and phrases: Ulam stability; Ulam constant; nonautonomous; linear differential system; Jordan normal form. 2010 Mathematics Subject Classification: 34D10, 34D20, 34A30.
## 1. Introduction
Let \(I\) be an interval of \(\mathbb{R}\), and let \(n\in\mathbb{N}\). We consider the nonautonomous inhomogeneous linear differential equation
\[\mathbf{x}^{\prime}=A(t)\mathbf{x}+\mathbf{f}(t), \tag{1.1}\]
where \(A:I\to\mathbb{C}^{n\times n}\) is a continuous matrix function and \(\mathbf{f}:I\to\mathbb{C}^{n}\) is a continuous vector function, and \(\mathbf{x}\in\mathbb{C}^{n}\). When \(\mathbf{f}(t)\equiv\mathbf{0}\), equation (1.1) is reduced to the homogeneous linear differential equation
\[\mathbf{x}^{\prime}=A(t)\mathbf{x}. \tag{1.2}\]
The concept of Ulam stability was proposed by S. M. Ulam in 1940 in the field of functional equations, and introduced to ordinary differential equations in the 1990s (see [1, 13, 14]). Ulam stability for various types of differential equations has been considered in recent years. However, work on vector equations is still in progress, while work on scalar equations with constant coefficients has been extensively explored. We can cite references [5, 6, 8, 12, 15, 16] as previous studies on (1.1) and (1.2), and nonlinear equations containing them or their discrete versions. It should be noted that these previous studies clarify the relationship between Ulam stability and the property that the linear part admits an exponential dichotomy. The exponential dichotomy is a very useful property that works well with constant coefficient systems and periodic systems. In addition, the relationship between Ulam stability and the shadowing property has recently been clarified. For
* if \(\lambda_{1}>0>\lambda_{2}\), then the origin of (1.3) with matrix (i) is a saddle point;
* if \(\lambda>0\), then the origin of (1.3) with matrix (ii) is an unstable node;
* if \(0>\lambda\), then the origin of (1.3) with matrix (ii) is a stable node;
* if \(\alpha>0\), then the origin of (1.3) with matrix (iii) is an unstable focus;
* if \(0>\alpha\), then the origin of (1.3) with matrix (iii) is a stable focus;
* if \(\alpha=0\), then the origin of (1.3) with matrix (iii) is a center.
As can be seen from Definition 1.1, to guarantee the Ulam stability of (1.3) is to give approximations of node, saddle, and focus points with accuracy. In fact, using the results of [3], it is possible to determine whether cases (i)-(iii) are Ulam stable or not. In conclusion, under the assumption of \(I=\mathbb{R}\), it is unstable if \(\alpha=0\), but Ulam stable otherwise (see, [3, Theorem 3.1-3.3, and Lemma 4.2.]). Moreover, if \(\det A=0\), then \(A\) has at least one \(0\) eigenvalue. For this case, it is known that (1.2) is not Ulam stable on \(\mathbb{R}\) (see [3, Lemma 4.1].). Therefore, it is possible to determine the Ulam stability of (1.1) with real-valued constant matrices in all cases. As mentioned above, Ulam stability is an approximation concept. Thus, the next important question is "How close are the approximate and true solutions?". If the minimum Ulam constant exists, it is called the best Ulam constant. In recent years, the existence of the best Ulam constant has been known in some constant coefficients linear differential equations, linear difference equations, and functional equations (see, [2, 3, 4, 9, 10, 11, 23, 24, 25]). Some of the best Ulam constants for (1.2) have been obtained in [3], when \(A(t)\) is a real-valued constant matrix \(A\in\mathbb{R}^{2\times 2}\). In particular, in the case of matrices (i) and (ii), we can obtain the best Ulam constants. Thus, some best Ulam constants for constant coefficients equations have been obtained, whereas, to the best of the authors' knowledge, the existence of best Ulam constants for variable coefficients equations is limited to scalar and periodic coefficient equations (see, [4, 18, 19]). In this study, we focus on this point of view to establish the Ulam stability of Jordan normal forms generalized to variable coefficients, and we aim to derive their best Ulam constants. Specifically, the main purpose of this paper is to derive the best Ulam constants of (1.2) whose coefficients are the generalized Jordan normal forms:
\[\mbox{(I)}\ \ \begin{pmatrix}\lambda_{1}(t)&0\\ 0&\lambda_{2}(t)\end{pmatrix},\quad\mbox{(II)}\ \ \begin{pmatrix}\lambda(t)&\mu(t)\\ 0&\lambda(t)\end{pmatrix},\quad\mbox{(III)}\ \ \begin{pmatrix}\alpha(t)&\beta(t)\\ -\beta(t)&\alpha(t)\end{pmatrix},\]
where \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda\), \(\mu:I\to\mathbb{C}\), and \(\alpha\), \(\beta:I\to\mathbb{R}\) are continuous functions. In addition, if we restrict these functions to real-valued functions, we propose that Ulam stability ensures that phase portraits of the orbits of the approximate solutions on the phase plane give approximations of node, saddle, and focus points, respectively.
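As recalled above, for a constant real-valued matrix \(A\in\mathbb{R}^{2\times 2}\) on \(I=\mathbb{R}\), equation (1.2) is Ulam stable exactly when no eigenvalue of \(A\) lies on the imaginary axis (the center case \(\alpha=0\) and the case \(\det A=0\) being the unstable ones). The following small Python helper, given here only as an illustration and not taken from the cited results, encodes this dichotomy.

```python
import numpy as np

def is_ulam_stable_on_R(A, tol=1e-12):
    """Ulam stability on I = R of x' = Ax for a constant real 2x2 matrix A:
    stable if and only if no eigenvalue of A lies on the imaginary axis."""
    eigs = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(np.abs(eigs.real) > tol))

assert is_ulam_stable_on_R([[1, 0], [0, -2]])        # saddle
assert is_ulam_stable_on_R([[-1, 1], [0, -1]])       # stable node (Jordan block)
assert is_ulam_stable_on_R([[1, 5], [-5, 1]])        # unstable focus
assert not is_ulam_stable_on_R([[0, 2], [-2, 0]])    # center (alpha = 0)
assert not is_ulam_stable_on_R([[1, 2], [2, 4]])     # det A = 0
```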
## 2. General theory of Ulam stability
In this section, we show that equivalence relations are obtained for the Ulam stability of the three equations (1.1), (1.2), and (1.3). The first result is as follows.
**Theorem 2.1**.: _Equation (1.1) is Ulam stable on \(I\), with an Ulam constant \(K\) if and only if equation (1.2) is Ulam stable on \(I\), and an Ulam constant is the same \(K\)._
Proof.: We only have to show that if (1.2) is Ulam stable on \(I\), with an Ulam constant \(K\), then (1.1) is Ulam stable on \(I\), with an Ulam constant \(K\), since the converse is trivial. Let \(\varepsilon>0\). We consider the continuously differentiable function \(\boldsymbol{\phi}:I\to\mathbb{C}^{n}\) satisfying
\[\sup_{t\in I}\|\boldsymbol{\phi}^{\prime}-A(t)\boldsymbol{\phi}-\boldsymbol{ f}(t)\|\leq\varepsilon.\]
If \(\boldsymbol{z}:I\to\mathbb{C}^{n}\) is a solution of (1.1), then we obtain
\[\varepsilon \geq \sup_{t\in I}\|\boldsymbol{\phi}^{\prime}-A(t)\boldsymbol{\phi} -\boldsymbol{f}(t)\|=\sup_{t\in I}\|\boldsymbol{\phi}^{\prime}-A(t)\boldsymbol{ \phi}-(\boldsymbol{z}^{\prime}-A(t)\boldsymbol{z})\|\] \[= \sup_{t\in I}\|(\boldsymbol{\phi}-\boldsymbol{z})^{\prime}-A(t)( \boldsymbol{\phi}-\boldsymbol{z})\|.\]
This, together with the Ulam stability of (1.2), yields that there is a solution \(\boldsymbol{y}:I\to\mathbb{C}^{n}\) of (1.2) such that
\[\sup_{t\in I}\|(\boldsymbol{\phi}(t)-\boldsymbol{z}(t))-\boldsymbol{y}(t)\| \leq K\varepsilon.\]
Let \(\boldsymbol{x}:=\boldsymbol{y}+\boldsymbol{z}\). Then \(\boldsymbol{x}\) is a solution of (1.1). From
\[\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|=\sup_{t\in I}\|( \boldsymbol{\phi}(t)-\boldsymbol{z}(t))-\boldsymbol{y}(t)\|\leq K\varepsilon,\]
we conclude that (1.1) is Ulam stable on \(I\), with an Ulam constant \(K\). This completes the proof.
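To illustrate the definition in the simplest constant-coefficient situation, the following numerical sketch (not taken from the paper) takes \(A=\operatorname{diag}(-1,-2)\) on a finite interval, produces an \(\varepsilon\)-approximate solution \(\boldsymbol{\phi}\) of (1.1) by adding a bounded perturbation to the right-hand side, and compares it with the true solution sharing the same initial value; in this particular example the constant \(K=1\) already works, since \(\min_{i}|\lambda_{i}|=1\).

```python
import numpy as np

A = np.diag([-1.0, -2.0])
f = lambda t: np.array([np.sin(t), np.cos(t)])
p = lambda t: np.array([np.cos(3*t), np.sin(2*t)])   # perturbation with sup-norm <= 1
eps = 1e-2

def rk4(rhs, y0, ts):
    ys = [np.asarray(y0, dtype=float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = rhs(t0, y)
        k2 = rhs(t0 + h/2, y + h/2*k1)
        k3 = rhs(t0 + h/2, y + h/2*k2)
        k4 = rhs(t1, y + h*k3)
        ys.append(y + h/6*(k1 + 2*k2 + 2*k3 + k4))
    return np.array(ys)

ts = np.linspace(0.0, 30.0, 30001)
phi = rk4(lambda t, y: A@y + f(t) + eps*p(t), [1.0, -1.0], ts)   # eps-approximate solution
x   = rk4(lambda t, y: A@y + f(t),            [1.0, -1.0], ts)   # a true solution of (1.1)

err = np.max(np.abs(phi - x))        # sup over t of the maximum norm of phi(t) - x(t)
print(f"sup-norm distance = {err:.4f}, eps = {eps}")
assert err <= eps                    # K = 1 suffices in this example
```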
The second result is as follows.
**Theorem 2.2**.: _Suppose that \(\|R(t)\|\) and \(\|R^{-1}(t)\|\) are bounded on \(I\). If equation (1.2) is Ulam stable on \(I\), with an Ulam constant \(K\), then equation (1.3) is Ulam stable on \(I\), and an Ulam constant is \(\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1}(t)\|\right)K\)._
Proof.: Assume that \(\|R(t)\|\) and \(\|R^{-1}(t)\|\) are bounded on \(I\), and equation (1.2) is Ulam stable on \(I\), with an Ulam constant \(K\). Let \(\varepsilon>0\), and let \(\boldsymbol{\eta}:I\to\mathbb{C}^{n}\) satisfy
\[\sup_{t\in I}\|\boldsymbol{\eta}^{\prime}-J(t)\boldsymbol{\eta}\|\leq\varepsilon,\]
where \(J(t)=\left(R^{\prime}(t)+R(t)A(t)\right)R^{-1}(t)\). The purpose of this proof is to show that there exists a solution \(\boldsymbol{y}:I\to\mathbb{C}^{n}\) to equation (1.3) that satisfies the inequality
\[\sup_{t\in I}\|\boldsymbol{\eta}(t)-\boldsymbol{y}(t)\|\leq\left(\sup_{t\in I }\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1}(t)\|\right)K\varepsilon.\]
Let \(\mathbf{\phi}=R^{-1}(t)\mathbf{\eta}\). Then, by \(J(t)=\left(R^{\prime}(t)+R(t)A(t)\right)R^{-1}(t)\) and \(R^{-1}(t)R(t)=I\), we have
\[A(t)=R^{-1}(t)J(t)R(t)-R^{-1}(t)R^{\prime}(t)\]
and
\[(R^{-1}(t))^{\prime}=-R^{-1}(t)R^{\prime}(t)R^{-1}(t).\]
Using these equalities, we obtain
\[\sup_{t\in I}\|\mathbf{\phi}^{\prime}-A(t)\mathbf{\phi}\| =\sup_{t\in I}\|R^{-1}(t)\mathbf{\eta}^{\prime}+(R^{-1}(t))^{\prime} \mathbf{\eta}-A(t)R^{-1}(t)\mathbf{\eta}\|\] \[=\sup_{t\in I}\|R^{-1}(t)(\mathbf{\eta}^{\prime}-J(t)\mathbf{\eta})\|\] \[\leq\sup_{t\in I}\|R^{-1}(t)\|\sup_{t\in I}\|\mathbf{\eta}^{\prime}-J (t)\mathbf{\eta}\|\leq\varepsilon\sup_{t\in I}\|R^{-1}(t)\|.\]
Since \(\|R^{-1}(t)\|\) is bounded on \(I\), and (1.2) is Ulam stable on \(I\), with an Ulam constant \(K\), we see that there is a solution \(\mathbf{x}:I\to\mathbb{C}^{n}\) of (1.2) such that
\[\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|\leq K\varepsilon\sup_{t\in I}\|R^{-1} (t)\|<\infty. \tag{2.1}\]
Letting \(\mathbf{y}=R(t)\mathbf{x}\), we have
\[\mathbf{y}^{\prime}=R(t)\mathbf{x}^{\prime}+R^{\prime}(t)\mathbf{x}=(R(t)A(t)+R^{\prime}(t ))\mathbf{x}=J(t)\mathbf{y},\]
and hence \(\mathbf{y}\) is a solution to (1.3). In addition, by (2.1) and the boundedness of \(\|R(t)\|\), we obtain
\[\sup_{t\in I}\|\mathbf{\eta}(t)-\mathbf{y}(t)\| =\sup_{t\in I}\|R(t)\mathbf{\phi}(t)-R(t)\mathbf{x}(t)\|\leq\sup_{t\in I} \|R(t)\|\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|\] \[\leq\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1} (t)\|\right)K\varepsilon.\]
Hence, (1.3) is Ulam stable on \(I\), with an Ulam constant \(\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1}(t)\|\right)K\).
Using the same method as in the proof of Theorem 2.2, we also obtain the following result.
**Theorem 2.3**.: _Suppose that \(\|R(t)\|\) and \(\|R^{-1}(t)\|\) are bounded on \(I\). If equation (1.3) is Ulam stable on \(I\), with an Ulam constant \(K\), then equation (1.2) is Ulam stable on \(I\), and an Ulam constant is \(\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1}(t)\|\right)K\)._
Proof.: Assume that \(\|R(t)\|\) and \(\|R^{-1}(t)\|\) are bounded on \(I\), and equation (1.3) is Ulam stable on \(I\), with an Ulam constant \(K\). Let \(\varepsilon>0\), and let \(\mathbf{\phi}:I\to\mathbb{C}^{n}\) satisfy
\[\sup_{t\in I}\|\mathbf{\phi}^{\prime}-A(t)\mathbf{\phi}\|\leq\varepsilon.\]
Let \(\boldsymbol{\eta}=R(t)\boldsymbol{\phi}\). Then, we obtain
\[\sup_{t\in I}\|\boldsymbol{\eta}^{\prime}-J(t)\boldsymbol{\eta}\| =\sup_{t\in I}\|R(t)\boldsymbol{\phi}^{\prime}+R^{\prime}(t) \boldsymbol{\phi}-\left(R^{\prime}(t)+R(t)A(t)\right)\boldsymbol{\phi}\|\] \[=\sup_{t\in I}\|R(t)(\boldsymbol{\phi}^{\prime}-A(t)\boldsymbol{ \phi})\|\] \[\leq\varepsilon\sup_{t\in I}\|R(t)\|.\]
Since \(\|R(t)\|\) is bounded on \(I\), and (1.3) is Ulam stable on \(I\), with an Ulam constant \(K\), we see that there is a solution \(\boldsymbol{y}:I\to\mathbb{C}^{n}\) of (1.3) such that
\[\sup_{t\in I}\|\boldsymbol{\eta}(t)-\boldsymbol{y}(t)\|\leq K\varepsilon\sup _{t\in I}\|R(t)\|<\infty. \tag{2.2}\]
Letting \(\boldsymbol{x}=R^{-1}(t)\boldsymbol{y}\), we have
\[\boldsymbol{x}^{\prime}=R^{-1}(t)\boldsymbol{y}^{\prime}+(R^{-1}(t))^{\prime} \boldsymbol{y}=A(t)\boldsymbol{x},\]
and hence \(\boldsymbol{x}\) is a solution to (1.2). In addition, by (2.2) and the boundedness of \(\|R^{-1}(t)\|\), we obtain
\[\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\| =\sup_{t\in I}\|R^{-1}(t)\boldsymbol{\eta}(t)-R^{-1}(t)\boldsymbol {y}(t)\|\leq\sup_{t\in I}\|R^{-1}(t)\|\sup_{t\in I}\|\boldsymbol{\eta}(t)- \boldsymbol{y}(t)\|\] \[\leq\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1 }(t)\|\right)K\varepsilon.\]
Hence, (1.2) is Ulam stable on \(I\), with an Ulam constant \(\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1}(t)\|\right)K\).
Combining Theorems 2.1-2.3, we obtain the following result.
**Theorem 2.4**.: _Suppose that \(\|R(t)\|\) and \(\|R^{-1}(t)\|\) are bounded on \(I\). Then equation (1.1) is Ulam stable on \(I\) if and only if equation (1.3) is Ulam stable on \(I\)._
Proof.: This follows immediately by combining Theorems 2.1-2.3, so we omit the details.
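The following is a minimal symbolic sketch (assuming SymPy is available; the matrices \(A(t)\) and \(R(t)\) below are illustrative choices, not taken from the text) of the transformation behind Theorems 2.1-2.4: writing \(J(t)=(R^{\prime}(t)+R(t)A(t))R^{-1}(t)\), the substitution \(\boldsymbol{y}=R(t)\boldsymbol{x}\) carries solutions of (1.2) to solutions of (1.3).

```python
# Symbolic sketch: verify that y = R(t) x solves y' = J(t) y whenever x' = A(t) x,
# where J(t) = (R'(t) + R(t) A(t)) R^{-1}(t).  A(t) and R(t) are assumed examples.
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[sp.sin(t), 1], [0, sp.cos(t)]])   # assumed coefficient matrix A(t)
R = sp.Matrix([[sp.exp(t), 0], [0, 1]])           # assumed invertible transformation R(t)
J = (R.diff(t) + R * A) * R.inv()

x1, x2 = sp.Function('x1')(t), sp.Function('x2')(t)
x = sp.Matrix([x1, x2])
x_prime = A * x                                    # encodes the relation x' = A(t) x
y = R * x
y_prime = R.diff(t) * x + R * x_prime              # product rule for y = R(t) x

print(sp.simplify(y_prime - J * y))                # expected: Matrix([[0], [0]])
```

For this choice, \(\|R(t)\|\) and \(\|R^{-1}(t)\|\) are bounded on any compact interval \(I\), so Theorems 2.2 and 2.3 transfer Ulam constants between (1.2) and (1.3) up to the factor \(\left(\sup_{t\in I}\|R(t)\|\right)\left(\sup_{t\in I}\|R^{-1}(t)\|\right)\).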
## 3. Ulam stability of generalized Jordan normal form (I)
We consider the two-dimensional linear differential system
\[\boldsymbol{x}^{\prime}=A(t)\boldsymbol{x},\quad A(t)=\begin{pmatrix}\lambda_ {1}(t)&0\\ 0&\lambda_{2}(t)\end{pmatrix}, \tag{3.1}\]
where \(\lambda_{1}\), \(\lambda_{2}:I\to\mathbb{C}\) are continuous, and \(\boldsymbol{x}\in\mathbb{C}^{2}\). In this case, we can easily find a fundamental matrix for this system. For example, a fundamental matrix of (3.1) and its inverse matrix are as follows:
\[X(t)=\begin{pmatrix}e^{\int_{t_{0}}^{t}\lambda_{1}(s)ds}&0\\ 0&e^{\int_{t_{0}}^{t}\lambda_{2}(s)ds}\end{pmatrix},\quad X^{-1}(t)=\begin{pmatrix} e^{-\int_{t_{0}}^{t}\lambda_{1}(s)ds}&0\\ 0&e^{-\int_{t_{0}}^{t}\lambda_{2}(s)ds}\end{pmatrix} \tag{3.2}\]
for \(t\in I\), where \(t_{0}\in I\). In this section, we use the maximum norm \(\|\boldsymbol{v}\|_{\infty}:=\max\{|v_{1}|,|v_{2}|\}\) for a vector \(\boldsymbol{v}=(v_{1},v_{2})^{T}\). Since all norms on \(\mathbb{C}^{2}\) are equivalent, if Ulam stability is shown for one norm, then it is guaranteed for any norm. Throughout this paper, the real part of \(z\in\mathbb{C}\) will be written as \(\Re(z)\). The first result in this section is as follows.
**Theorem 3.1**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i), (ii), and (iii) below hold:_
* _if_ \[\kappa_{11}(t):=\int_{t}^{b}e^{-\int_{t}^{s}\min\{\Re(\lambda_{1}(\tau)),\Re( \lambda_{2}(\tau))\}d\tau}ds\ \ \text{exists for all}\ \ t\in I,\] (3.3) _and_ \(\sup_{t\in I}\kappa_{11}(t)<\infty\)_, then (_3.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{11}:=\sup_{t\in I}\kappa_{11}(t)\)_. Furthermore, if_ \[\lim_{t\to b-0}\int_{t_{0}}^{t}\min\{\Re(\lambda_{1}(s)),\Re(\lambda_{2}(s) )\}ds=\infty,\quad t_{0}\in(a,b),\] (3.4) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\boldsymbol{\phi}(t)\) _satisfying_ \[\sup_{t\in I}\|\boldsymbol{\phi}^{\prime}(t)-A(t)\boldsymbol{\phi}(t)\|_{ \infty}\leq\varepsilon,\] (3.5) _there exists the unique solution of (_3.1_) denoted by_ \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] _such that_ \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{11}\varepsilon\)_, where_ \(X(t)\) _and_ \(X^{-1}(t)\) _are given in (_3.2_), and_ \((\lim_{t\to b-0}X^{-1}(t)\boldsymbol{\phi}(t))\) _is a well-defined constant vector;_
* _if_ \[\kappa_{12}(t):=\int_{a}^{t}e^{\int_{s}^{t}\max\{\Re(\lambda_{1}(\tau)),\Re( \lambda_{2}(\tau))\}d\tau}ds\ \ \text{exists for all}\ \ t\in I,\] (3.6) _and_ \(\sup_{t\in I}\kappa_{12}(t)<\infty\)_, then (_3.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{12}:=\sup_{t\in I}\kappa_{12}(t)\)_. Furthermore, if_ \[\lim_{t\to a+0}\int_{t}^{t_{0}}\min\{-\Re(\lambda_{1}(s)),-\Re(\lambda_{2}(s) )\}ds=\infty,\quad t_{0}\in(a,b),\] (3.7) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\boldsymbol{\phi}(t)\) _satisfying (_3.5_), there exists the unique solution of (_3.1_) denoted by_ \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] _such that_ \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{12}\varepsilon\)_, where_ \(X(t)\) _and_ \(X^{-1}(t)\) _are given in (_3.2_), and_ \((\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}(t))\) _is a well-defined constant vector;_
3. _if_ \[\kappa_{13}(t):=\max\left\{\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda_{1}(\tau))d \tau}ds,\int_{a}^{t}e^{\int_{s}^{t}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\ \text{ exists for all }\ t\in I,\] (3.8) _and_ \(\sup_{t\in I}\kappa_{13}(t)<\infty\)_, then (_3.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{13}:=\sup_{t\in I}\kappa_{13}(t)\)_. Furthermore, if_ \[\lim_{t\to b-0}\int_{t_{0}}^{t}\Re(\lambda_{1}(s))ds=\lim_{t\to a+0} \int_{t_{0}}^{t}\Re(\lambda_{2}(s))ds=\infty,\quad t_{0}\in(a,b),\] (3.9) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\boldsymbol{\phi}(t)=(\phi_{1}(t),\phi_{2}(t))^{T}\) _satisfying (_3.5_), there exists the unique solution of (_3.1_) denoted by_ \[\boldsymbol{x}(t)=X(t)\begin{pmatrix}\lim_{t\to b-0}\phi_{1}(t)e^{-\int_{t_{0} }^{t}\lambda_{1}(s)ds}\\ \lim_{t\to a+0}\phi_{2}(t)e^{\int_{t}^{t_{0}}\lambda_{2}(s)ds}\end{pmatrix}\] _such that_ \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{13}\varepsilon\)_, where_ \(\begin{pmatrix}\lim_{t\to b-0}\phi_{1}(t)e^{-\int_{t_{0}}^{t}\lambda_{1}(s)ds} \\ \lim_{t\to a+0}\phi_{2}(t)e^{\int_{t}^{t_{0}}\lambda_{2}(s)ds}\end{pmatrix}\) _is a well-defined constant vector._
Proof.: Case (i). Suppose that (3.3) and \(\sup_{t\in I}\kappa_{11}(t)<\infty\) hold. Let an arbitrary \(\varepsilon>0\) be given, and let \(\boldsymbol{\phi}(t)\) be continuously differentiable on \(I\) and satisfy (3.5). Define
\[\boldsymbol{f}(t):=\boldsymbol{\phi}^{\prime}(t)-A(t)\boldsymbol{\phi}(t), \quad t\in I. \tag{3.10}\]
Then \(\sup_{t\in I}\|\boldsymbol{f}(t)\|_{\infty}\leq\varepsilon\) holds, and \(\boldsymbol{\phi}(t)\) is a solution of (1.1). Thus, by the variation of parameters formula, we have
\[\boldsymbol{\phi}(t)=X(t)X^{-1}(t_{0})\boldsymbol{\phi}(t_{0})+X(t)\int_{t_{0}}^{t}X^{-1}(s)\boldsymbol{f}(s)ds,\quad t_{0}\in(a,b) \tag{3.11}\]
for all \(t\in I\), where \(X(t)\) and \(X^{-1}(t)\) are given in (3.2). Now we consider a solution \(\boldsymbol{x}(t)\) of (3.1) defined by
\[\boldsymbol{x}(t):=X(t)\boldsymbol{x}_{11},\]
where
\[\boldsymbol{x}_{11}:=X^{-1}(t_{0})\boldsymbol{\phi}(t_{0})+\int_{t_{0}}^{b}X^ {-1}(s)\boldsymbol{f}(s)ds. \tag{3.12}\]
Note that the (improper) integral in (3.12) converges. In fact, using (3.2), (3.3), and the definition of the maximum norm,
\[\left\|\int_{t_{0}}^{b}X^{-1}(s)\boldsymbol{f}(s)ds\right\|_{\infty} \leq\int_{t_{0}}^{b}\left\|X^{-1}(s)\boldsymbol{f}(s)\right\|_{ \infty}ds\] \[=\int_{t_{0}}^{b}\left\|\begin{pmatrix}e^{-\int_{t_{0}}^{s} \lambda_{1}(\tau)d\tau}&0\\ 0&e^{-\int_{t_{0}}^{s}\lambda_{2}(\tau)d\tau}\end{pmatrix}\begin{pmatrix}f_{1} (s)\\ f_{2}(s)\end{pmatrix}\right\|_{\infty}ds\] \[=\int_{t_{0}}^{b}\left\|\begin{pmatrix}f_{1}(s)e^{-\int_{t_{0}}^{ s}\lambda_{1}(\tau)d\tau}\\ f_{2}(s)e^{-\int_{t_{0}}^{s}\lambda_{2}(\tau)d\tau}\end{pmatrix}\right\|_{ \infty}ds\] \[=\int_{t_{0}}^{b}\max\left\{|f_{1}(s)|e^{-\int_{t_{0}}^{s} \Re(\lambda_{1}(\tau))d\tau},|f_{2}(s)|e^{-\int_{t_{0}}^{s}\Re(\lambda_{2}( \tau))d\tau}\right\}ds\] \[\leq\int_{t_{0}}^{b}\|\boldsymbol{f}(s)\|_{\infty}e^{-\int_{t_{0} }^{s}\min\{\Re(\lambda_{1}(\tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds\] \[\leq\varepsilon\int_{t_{0}}^{b}e^{-\int_{t_{0}}^{s}\min\{\Re( \lambda_{1}(\tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds=\varepsilon\kappa_{11}(t _{0})<\infty,\]
where \(f_{1}(t)\) and \(f_{2}(t)\) are the components of the vector \(\boldsymbol{f}(t)\); that is, \(\boldsymbol{f}(t)=(f_{1}(t),f_{2}(t))^{T}\). Therefore, the constant vector \(\boldsymbol{x}_{11}\) is well-defined, and \(\boldsymbol{x}(t)\) is a solution of (3.1). By (3.11) and (3.12), we obtain
\[\boldsymbol{\phi}(t)-\boldsymbol{x}(t)=-X(t)\int_{t}^{b}X^{-1}(s) \boldsymbol{f}(s)ds \tag{3.13}\]
for all \(t\in I\). Hence, using (3.2), (3.3), (3.13), and the definition of the maximum norm again, we have
\[\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty} \leq\int_{t}^{b}\left\|X(t)X^{-1}(s)\boldsymbol{f}(s)\right\|_{ \infty}ds\] \[=\int_{t}^{b}\left\|\begin{pmatrix}e^{-\int_{t}^{s}\lambda_{1}( \tau)d\tau}&0\\ 0&e^{-\int_{t}^{s}\lambda_{2}(\tau)d\tau}\end{pmatrix}\begin{pmatrix}f_{1}(s) \\ f_{2}(s)\end{pmatrix}\right\|_{\infty}ds\] \[=\int_{t}^{b}\left\|\begin{pmatrix}f_{1}(s)e^{-\int_{t}^{s}\lambda _{1}(\tau)d\tau}\\ f_{2}(s)e^{-\int_{t}^{s}\lambda_{2}(\tau)d\tau}\end{pmatrix}\right\|_{\infty}ds\] \[=\int_{t}^{b}\max\left\{|f_{1}(s)|e^{-\int_{t}^{s}\Re(\lambda_{1} (\tau))d\tau},|f_{2}(s)|e^{-\int_{t}^{s}\Re(\lambda_{2}(\tau))d\tau}\right\}ds\] \[\leq\int_{t}^{b}\|\boldsymbol{f}(s)\|_{\infty}e^{-\int_{t}^{s} \min\{\Re(\lambda_{1}(\tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds\] \[\leq\varepsilon\int_{t}^{b}e^{-\int_{t}^{s}\min\{\Re(\lambda_{1} (\tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds=\varepsilon\kappa_{11}(t)\]
for all \(t\in I\), and thus,
\[\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq\varepsilon \sup_{t\in I}\kappa_{11}(t)=K_{11}\varepsilon<\infty.\]
Therefore, (3.1) is Ulam stable on \(I\), with an Ulam constant \(K_{11}\). Moreover, from (3.11) and (3.12), we have \(\boldsymbol{x}_{11}=\lim_{t\to b-0}X^{-1}(t)\boldsymbol{\phi}(t)\), and so \(\boldsymbol{x}(t)\) is given by
\[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\boldsymbol{\phi}(t) \right).\]
Next, we discuss the uniqueness of the solution \(\boldsymbol{x}(t)\). To show by contradiction, we assume that there is a solution \(\boldsymbol{y}(t)\) of (3.1) such that
\[\boldsymbol{y}(t):=X(t)\boldsymbol{y}_{11},\quad\boldsymbol{y}_{11}\neq \boldsymbol{x}_{11}\]
on \(I\), and
\[\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{y}(t)\|_{\infty}\leq K_{11}\varepsilon.\]
This implies
\[\|X(t)(\boldsymbol{y}_{11}-\boldsymbol{x}_{11})\|_{\infty}\leq\|\boldsymbol{ \phi}(t)-\boldsymbol{y}(t)\|_{\infty}+\|\boldsymbol{\phi}(t)-\boldsymbol{x} (t)\|_{\infty}\leq 2K_{11}\varepsilon\]
for all \(t\in I\). However, by (3.4), we see that
\[\lim_{t\to b-0}\|X(t)(\boldsymbol{y}_{11}-\boldsymbol{x}_{11})\|_{\infty} =\lim_{t\to b-0}\left\|\begin{pmatrix}e^{\int_{t_{0}}^{t} \lambda_{1}(s)ds}&0\\ 0&e^{\int_{t_{0}}^{t}\lambda_{2}(s)ds}\end{pmatrix}\begin{pmatrix}y_{1}\\ y_{2}\end{pmatrix}\right\|_{\infty}\] \[=\lim_{t\to b-0}\left\|\begin{pmatrix}y_{1}e^{\int_{t_{0}}^{t} \lambda_{1}(s)ds}\\ y_{2}e^{\int_{t_{0}}^{t}\lambda_{2}(s)ds}\end{pmatrix}\right\|_{\infty}\] \[=\lim_{t\to b-0}\max\left\{|y_{1}|e^{\int_{t_{0}}^{t}\Re(\lambda_{1 }(s))ds},|y_{2}|e^{\int_{t_{0}}^{t}\Re(\lambda_{2}(s))ds}\right\}\] \[\geq\max\{|y_{1}|,|y_{2}|\}\lim_{t\to b-0}e^{\int_{t_{0}}^{t}\min \{\Re(\lambda_{1}(s)),\Re(\lambda_{2}(s))\}ds}=\infty,\]
where \(\boldsymbol{y}_{11}-\boldsymbol{x}_{11}=(y_{1},y_{2})^{T}\) and \((y_{1},y_{2})\neq(0,0)\). This is a contradiction. Hence \(\boldsymbol{x}(t)\) is the unique solution of (3.1) satisfying \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{11}\varepsilon\).
Case (ii). Suppose that (3.6) and \(\sup_{t\in I}\kappa_{12}(t)<\infty\) hold. Let \(\varepsilon>0\), and let \(\boldsymbol{\phi}(t)\) be continuously differentiable on \(I\) and satisfy (3.5). Define \(\boldsymbol{f}(t)\) by (3.10). Then \(\sup_{t\in I}\|\boldsymbol{f}(t)\|_{\infty}\leq\varepsilon\) holds, and \(\boldsymbol{\phi}(t)\) satisfies (3.11) for all \(t\in I\), where \(X(t)\) and \(X^{-1}(t)\) are given in (3.2). Now we consider a solution \(\boldsymbol{x}(t)\) of (3.1) defined by
\[\boldsymbol{x}(t):=X(t)\boldsymbol{x}_{12},\]
where
\[\boldsymbol{x}_{12}:=X^{-1}(t_{0})\boldsymbol{\phi}(t_{0})-\int_{a}^{t_{0}}X^{ -1}(s)\boldsymbol{f}(s)ds. \tag{3.14}\]
We will prove that the (improper) integral in (3.14) converges. Put \(\mathbf{f}(t)=(f_{1}(t),f_{2}(t))^{T}\). By (3.2), (3.6), and the definition of the maximum norm,
\[\left\|\int_{a}^{t_{0}}X^{-1}(s)\mathbf{f}(s)ds\right\|_{\infty} \leq\int_{a}^{t_{0}}\left\|\begin{pmatrix}f_{1}(s)e^{\int_{s}^{t_{ 0}}\lambda_{1}(\tau)d\tau}\\ f_{2}(s)e^{\int_{s}^{t_{0}}\lambda_{2}(\tau)d\tau}\end{pmatrix}\right\|_{ \infty}ds\] \[\leq\varepsilon\int_{a}^{t_{0}}e^{\int_{s}^{t_{0}}\max\{\Re( \lambda_{1}(\tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds=\varepsilon\kappa_{12}(t_ {0})<\infty.\]
Thus, \(\mathbf{x}_{12}\) is well-defined. From (3.11) and (3.14), we obtain
\[\mathbf{\phi}(t)-\mathbf{x}(t)=X(t)\int_{a}^{t}X^{-1}(s)\mathbf{f}(s)ds \tag{3.15}\]
for all \(t\in I\). Hence, using (3.2), (3.6), (3.15), and the definition of the maximum norm again, we have
\[\left\|\mathbf{\phi}(t)-\mathbf{x}(t)\right\|_{\infty} \leq\int_{a}^{t}\left\|\begin{pmatrix}f_{1}(s)e^{\int_{s}^{t} \lambda_{1}(\tau)d\tau}\\ f_{2}(s)e^{\int_{s}^{t}\lambda_{2}(\tau)d\tau}\end{pmatrix}\right\|_{\infty}ds\] \[\leq\varepsilon\int_{a}^{t}e^{\int_{s}^{t}\max\{\Re(\lambda_{1}( \tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds=\varepsilon\kappa_{12}(t)\]
for all \(t\in I\), and thus,
\[\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq\varepsilon\sup_{t\in I} \kappa_{12}(t)=K_{12}\varepsilon<\infty.\]
Consequently, (3.1) is Ulam stable on \(I\), with an Ulam constant \(K_{12}\). Moreover, from (3.11) and (3.14), we have \(\mathbf{x}_{12}=\lim_{t\to a+0}X^{-1}(t)\mathbf{\phi}(t)\), and so \(\mathbf{x}(t)\) is given by
\[\mathbf{x}(t)=X(t)\left(\lim_{t\to a+0}X^{-1}(t)\mathbf{\phi}(t)\right).\]
Next, we will prove the uniqueness of the solution \(\mathbf{x}(t)\). To show by contradiction, we assume that there is a solution \(\mathbf{y}(t)\) of (3.1) such that
\[\mathbf{y}(t):=X(t)\mathbf{y}_{12},\quad\mathbf{y}_{12}\neq\mathbf{x}_{12}\]
on \(I\), and
\[\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{y}(t)\|_{\infty}\leq K_{12}\varepsilon.\]
This implies
\[\|X(t)(\mathbf{y}_{12}-\mathbf{x}_{12})\|_{\infty}\leq\|\mathbf{\phi}(t)-\mathbf{y}(t)\|_{ \infty}+\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq 2K_{12}\varepsilon\]
for all \(t\in I\). However, by (3.7), we see that
\[\lim_{t\to a+0}\|X(t)(\mathbf{y}_{12}-\mathbf{x}_{12})\|_{\infty} =\lim_{t\to a+0}\max\left\{|y_{1}|e^{-\int_{t}^{t_{0}}\Re( \lambda_{1}(s))ds},|y_{2}|e^{-\int_{t}^{t_{0}}\Re(\lambda_{2}(s))ds}\right\}\] \[\geq\max\{|y_{1}|,|y_{2}|\}\lim_{t\to a+0}e^{\int_{t}^{t_{0}}\min\{ -\Re(\lambda_{1}(s)),-\Re(\lambda_{2}(s))\}ds}=\infty,\]
where \(\mathbf{y}_{12}-\mathbf{x}_{12}=(y_{1},y_{2})^{T}\) and \((y_{1},y_{2})\neq(0,0)\). This is a contradiction. Hence \(\mathbf{x}(t)\) is the unique solution of (3.1) satisfying \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq K_{12}\varepsilon\).
Case (iii). Suppose that (3.8) and \(\sup_{t\in I}\kappa_{13}(t)<\infty\) hold. Let \(\varepsilon>0\), and let \(\mathbf{\phi}(t)\) be continuously differentiable on \(I\) and satisfy (3.5). Define \(\mathbf{f}(t)\) by (3.10). Then \(\sup_{t\in I}\|\mathbf{f}(t)\|_{\infty}\leq\varepsilon\) holds, and \(\mathbf{\phi}(t)\) satisfies (3.11) for all \(t\in I\), where \(X(t)\) and \(X^{-1}(t)\) are given in (3.2). Put \(\mathbf{f}(t)=(f_{1}(t),f_{2}(t))^{T}\). Now we consider a solution \(\mathbf{x}(t)\) of (3.1) defined by
\[\mathbf{x}(t):=X(t)\mathbf{x}_{13},\]
where
\[\mathbf{x}_{13}:=X^{-1}(t_{0})\mathbf{\phi}(t_{0})+\begin{pmatrix}\int_{t_{0}}^{b}f_{ 1}(s)e^{-\int_{t_{0}}^{s}\lambda_{1}(\tau)d\tau}ds\\ -\int_{a}^{t_{0}}f_{2}(s)e^{\int_{s}^{t_{0}}\lambda_{2}(\tau)d\tau}ds\end{pmatrix}. \tag{3.16}\]
We will prove that the (improper) integrals in (3.16) converge. By (3.2), (3.8), and the definition of the maximum norm,
\[\left\|\begin{pmatrix}\int_{t_{0}}^{b}f_{1}(s)e^{-\int_{t_{0}}^{s}\lambda_{1}(\tau)d\tau}ds\\ -\int_{a}^{t_{0}}f_{2}(s)e^{\int_{s}^{t_{0}}\lambda_{2}(\tau)d\tau}ds\end{pmatrix}\right\|_{\infty} =\max\left\{\left|\int_{t_{0}}^{b}f_{1}(s)e^{-\int_{t_{0}}^{s}\lambda_{1}(\tau)d\tau}ds\right|,\left|\int_{a}^{t_{0}}f_{2}(s)e^{\int_{s}^{t_{0}}\lambda_{2}(\tau)d\tau}ds\right|\right\}\] \[\leq\max\left\{\int_{t_{0}}^{b}|f_{1}(s)|e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,\int_{a}^{t_{0}}|f_{2}(s)|e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[\leq\max\left\{\int_{t_{0}}^{b}\|\mathbf{f}(s)\|_{\infty}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,\int_{a}^{t_{0}}\|\mathbf{f}(s)\|_{\infty}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[\leq\varepsilon\max\left\{\int_{t_{0}}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,\int_{a}^{t_{0}}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\kappa_{13}(t_{0})<\infty.\]
Thus, \(\mathbf{x}_{13}\) is well-defined. From (3.11) and (3.16), we obtain
\[\mathbf{\phi}(t)-\mathbf{x}(t) =X(t)\left(\int_{t_{0}}^{t}X^{-1}(s)\mathbf{f}(s)ds-\begin{pmatrix} \int_{t_{0}}^{b}f_{1}(s)e^{-\int_{t_{0}}^{s}\lambda_{1}(\tau)d\tau}ds\\ -\int_{a}^{t_{0}}f_{2}(s)e^{\int_{s}^{t_{0}}\lambda_{2}(\tau)d\tau}ds\end{pmatrix}\right)\] \[=\begin{pmatrix}-\int_{t}^{b}f_{1}(s)e^{-\int_{t}^{s}\lambda_{1}( \tau)d\tau}ds\\ \int_{a}^{t}f_{2}(s)e^{\int_{s}^{t}\lambda_{2}(\tau)d\tau}ds\end{pmatrix} \tag{3.17}\]
for all \(t\in I\). Hence, using (3.2), (3.8), (3.17), and the definition of the maximum norm again, we have
\[\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty} =\max\left\{\left|\int_{t}^{b}f_{1}(s)e^{-\int_{t}^{s}\lambda_{1} (\tau)d\tau}ds\right|,\left|\int_{a}^{t}f_{2}(s)e^{\int_{s}^{t}\lambda_{2}( \tau)d\tau}ds\right|\right\}\] \[\leq\max\left\{\int_{t}^{b}|f_{1}(s)|e^{-\int_{t}^{s}\Re(\lambda_ {1}(\tau))d\tau}ds,\int_{a}^{t}|f_{2}(s)|e^{\int_{s}^{t}\Re(\lambda_{2}(\tau))d \tau}ds\right\}\] \[\leq\max\left\{\int_{t}^{b}\|\boldsymbol{f}(s)\|_{\infty}e^{- \int_{t}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,\int_{a}^{t}\|\boldsymbol{f}(s)\|_ {\infty}e^{\int_{s}^{t}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[\leq\varepsilon\max\left\{\int_{t}^{b}e^{-\int_{t}^{s}\Re( \lambda_{1}(\tau))d\tau}ds,\int_{a}^{t}e^{\int_{s}^{t}\Re(\lambda_{2}(\tau))d \tau}ds\right\}\] \[=\varepsilon\kappa_{13}(t)\]
for all \(t\in I\), and thus,
\[\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq \varepsilon\sup_{t\in I}\kappa_{13}(t)=K_{13}\varepsilon<\infty.\]
Consequently, (3.1) is Ulam stable on \(I\), with an Ulam constant \(K_{13}\). Moreover, from (3.11) and (3.16), we have
\[\boldsymbol{x}_{13}=\begin{pmatrix}\lim_{t\to b-0}\phi_{1}(t)e^{-\int_{t_{0}} ^{t}\lambda_{1}(s)ds}\\ \lim_{t\to a+0}\phi_{2}(t)e^{\int_{t}^{t_{0}}\lambda_{2}(s)ds}\end{pmatrix},\]
where \(\boldsymbol{\phi}(t)=(\phi_{1}(t),\phi_{2}(t))^{T}\). Hence \(\boldsymbol{x}(t)\) is given by
\[\boldsymbol{x}(t)=X(t)\begin{pmatrix}\lim_{t\to b-0}\phi_{1}(t)e^{-\int_{t_{0} }^{t}\lambda_{1}(s)ds}\\ \lim_{t\to a+0}\phi_{2}(t)e^{\int_{t}^{t_{0}}\lambda_{2}(s)ds}\end{pmatrix}.\]
Next, we will prove the uniqueness of the solution \(\boldsymbol{x}(t)\). To show by contradiction, we assume that there is a solution \(\boldsymbol{y}(t)\) of (3.1) such that
\[\boldsymbol{y}(t):=X(t)\boldsymbol{y}_{13},\quad\boldsymbol{y}_{13}\neq \boldsymbol{x}_{13}\]
on \(I\), and
\[\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{y}(t)\|_{\infty}\leq K_{13}\varepsilon.\]
This implies
\[\|X(t)(\boldsymbol{y}_{13}-\boldsymbol{x}_{13})\|_{\infty}\leq\|\boldsymbol{ \phi}(t)-\boldsymbol{y}(t)\|_{\infty}+\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t )\|_{\infty}\leq 2K_{13}\varepsilon\]
for all \(t\in I\). However, by (3.9), we see that
\[\lim_{t\to b-0}\|X(t)(\boldsymbol{y}_{13}-\boldsymbol{x}_{13})\|_{\infty} =\lim_{t\to b-0}\max\left\{|y_{1}|e^{\int_{t_{0}}^{t}\Re(\lambda_{1}(s)) ds},|y_{2}|e^{\int_{t_{0}}^{t}\Re(\lambda_{2}(s))ds}\right\}\] \[\geq\lim_{t\to b-0}|y_{1}|e^{\int_{t_{0}}^{t}\Re(\lambda_{1}(s)) ds}=\infty,\]
if \(y_{1}\neq 0\), and
\[\lim_{t\to a+0}\|X(t)(\mathbf{y}_{13}-\mathbf{x}_{13})\|_{\infty} =\lim_{t\to a+0}\max\left\{|y_{1}|e^{-\int_{t}^{t_{0}}\Re(\lambda_{1 }(s))ds},|y_{2}|e^{-\int_{t}^{t_{0}}\Re(\lambda_{2}(s))ds}\right\}\] \[\geq\lim_{t\to a+0}|y_{2}|e^{-\int_{t}^{t_{0}}\Re(\lambda_{2}(s)) ds}=\infty,\]
if \(y_{2}\neq 0\), where \(\mathbf{y}_{13}-\mathbf{x}_{13}=(y_{1},y_{2})^{T}\) and \((y_{1},y_{2})\neq(0,0)\). This is a contradiction. Hence \(\mathbf{x}(t)\) is the unique solution of (3.1) satisfying \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq K_{13}\varepsilon\). Thus, the proof is now complete.
Next, we discuss the lower bound of the Ulam constants.
**Theorem 3.2**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i), (ii) and (iii) below hold:_
* _if_ (_3.3_),_ (_3.4_), and_ \(\sup_{t\in I}\kappa_{11}(t)<\infty\) _hold, then (_3.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{11}=\sup_{t\in I}\kappa_{11}(t)\)_;_
* _if_ (_3.6_),_ (_3.7_), and_ \(\sup_{t\in I}\kappa_{12}(t)<\infty\) _hold, then (_3.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{12}=\sup_{t\in I}\kappa_{12}(t)\)_;_
* _if_ (_3.8_),_ (_3.9_), and_ \(\sup_{t\in I}\kappa_{13}(t)<\infty\) _hold, then (_3.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{13}=\sup_{t\in I}\kappa_{13}(t)\)_._
Proof.: Case (i). Suppose that (3.3), (3.4), and \(\sup_{t\in I}\kappa_{11}(t)<\infty\) hold. Define
\[\mathbf{f}_{1}(t):=\varepsilon\begin{pmatrix}e^{i\int_{t_{0}}^{t}\Im(\lambda_{1} (s))ds}\\ e^{i\int_{t_{0}}^{t}\Im(\lambda_{2}(s))ds}\end{pmatrix},\quad t\in I, \tag{3.18}\]
where \(\Im(z)\) is the imaginary part of \(z\in\mathbb{C}\), and \(i\) is the imaginary unit. Now we consider the solution \(\mathbf{\phi}_{1}(t)\) of the linear differential system
\[\mathbf{\phi}_{1}^{\prime}-A(t)\mathbf{\phi}_{1}=\mathbf{f}_{1}(t).\]
Since \(\|\mathbf{f}_{1}(t)\|_{\infty}=\varepsilon\) holds for all \(t\in I\), \(\mathbf{\phi}_{1}(t)\) satisfies (3.5). By Theorem 3.1 (i), there exists the unique solution of (3.1) denoted by
\[\mathbf{x}_{1}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}_{1}(t)\right)\]
such that \(\sup_{t\in I}\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\|_{\infty}\leq K_{11}\varepsilon\), where
\[K_{11}=\sup_{t\in I}\kappa_{11}(t)=\sup_{t\in I}\int_{t}^{b}e^{-\int_{t}^{s} \min\{\Re(\lambda_{1}(\tau)),\Re(\lambda_{2}(\tau))\}d\tau}ds.\]
Hence, we obtain (3.13) (see the proof of Theorem 3.1 (i)), and therefore
\[\left\|\boldsymbol{\phi}_{1}(t)-\boldsymbol{x}_{1}(t)\right\|_{\infty} =\left\|X(t)\int_{t}^{b}X^{-1}(s)\boldsymbol{f}_{1}(s)ds\right\| _{\infty}=\varepsilon\left\|\begin{pmatrix}\int_{t}^{b}e^{-\int_{t}^{s}\lambda _{1}(\tau)d\tau+i\int_{t_{0}}^{s}\Im(\lambda_{1}(\tau))d\tau}ds\\ \int_{t}^{b}e^{-\int_{t}^{s}\lambda_{2}(\tau)d\tau+i\int_{t_{0}}^{s}\Im( \lambda_{2}(\tau))d\tau}ds\end{pmatrix}\right\|_{\infty}\] \[=\varepsilon\left\|\begin{pmatrix}e^{-\int_{t}^{t_{0}}\lambda_{1} (\tau)d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds\\ e^{-\int_{t}^{t_{0}}\lambda_{2}(\tau)d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s} \Re(\lambda_{2}(\tau))d\tau}ds\end{pmatrix}\right\|_{\infty}\] \[=\varepsilon\max\left\{\left|e^{-\int_{t}^{t_{0}}\lambda_{1}( \tau)d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds \right|,\left|e^{-\int_{t}^{t_{0}}\lambda_{2}(\tau)d\tau}\int_{t}^{b}e^{-\int_ {t_{0}}^{s}\Re(\lambda_{2}(\tau))d\tau}ds\right|\right\}\] \[=\varepsilon\max\left\{\left|e^{-\int_{t}^{t_{0}}\lambda_{1}(\tau )d\tau}\right|\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds, \left|e^{-\int_{t}^{t_{0}}\lambda_{2}(\tau)d\tau}\right|\int_{t}^{b}e^{-\int_ {t_{0}}^{s}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\max\left\{e^{-\int_{t}^{t_{0}}\Re(\lambda_{1}(\tau)) d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,e^{-\int_ {t}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re( \lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\max\left\{\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda_{1 }(\tau))d\tau}ds,\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\kappa_{11}(t).\]
This implies that
\[\sup_{t\in I}\left\|\boldsymbol{\phi}_{1}(t)-\boldsymbol{x}_{1}(t)\right\|_{ \infty}=K_{11}\varepsilon<\infty.\]
Since \(\boldsymbol{x}_{1}(t)\) is the unique solution of (3.1) satisfying \(\sup_{t\in I}\|\boldsymbol{\phi}_{1}(t)-\boldsymbol{x}_{1}(t)\|_{\infty}\leq K _{11}\varepsilon\), there is no solution \(\boldsymbol{x}(t)\) to (3.1) that satisfies \(\sup_{t\in I}\|\boldsymbol{\phi}_{1}(t)-\boldsymbol{x}(t)\|_{\infty}<K_{11}\varepsilon\). Consequently, the Ulam constant is at least \(K_{11}\).
Case (ii). Suppose that (3.6), (3.7), and \(\sup_{t\in I}\kappa_{12}(t)<\infty\) hold. Consider the solution \(\boldsymbol{\phi}_{2}(t)\) of the linear differential system
\[\boldsymbol{\phi}_{2}^{\prime}-A(t)\boldsymbol{\phi}_{2}=\boldsymbol{f}_{1}(t),\]
where \(\boldsymbol{f}_{1}(t)\) is given by (3.18). Since \(\|\boldsymbol{f}_{1}(t)\|_{\infty}=\varepsilon\) holds for all \(t\in I\), \(\boldsymbol{\phi}_{2}(t)\) satisfies (3.5). By Theorem 3.1 (ii), there exists the unique solution of (3.1) denoted by
\[\boldsymbol{x}_{2}(t)=X(t)\left(\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}_{2}( t)\right)\]
such that \(\sup_{t\in I}\|\mathbf{\phi}_{2}(t)-\mathbf{x}_{2}(t)\|_{\infty}\leq K_{12}\varepsilon\), where \(K_{12}=\sup_{t\in I}\kappa_{12}(t)\). Hence, we obtain (3.15) (see the proof of Theorem 3.1 (ii)), and therefore
\[\left\|\mathbf{\phi}_{2}(t)-\mathbf{x}_{2}(t)\right\|_{\infty} =\left\|X(t)\int_{a}^{t}X^{-1}(s)\mathbf{f}_{1}(s)ds\right\|_{\infty}=\varepsilon\left\|\begin{pmatrix}e^{\int_{t_{0}}^{t}\lambda_{1}(\tau)d\tau}\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{1}(\tau))d\tau}ds\\ e^{\int_{t_{0}}^{t}\lambda_{2}(\tau)d\tau}\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\end{pmatrix}\right\|_{\infty}\] \[=\varepsilon\max\left\{\left|e^{\int_{t_{0}}^{t}\lambda_{1}(\tau)d\tau}\right|\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{1}(\tau))d\tau}ds,\left|e^{\int_{t_{0}}^{t}\lambda_{2}(\tau)d\tau}\right|\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\max\left\{e^{\int_{t_{0}}^{t}\Re(\lambda_{1}(\tau))d\tau}\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{1}(\tau))d\tau}ds,e^{\int_{t_{0}}^{t}\Re(\lambda_{2}(\tau))d\tau}\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\kappa_{12}(t).\]
Thus, we obtain \(\sup_{t\in I}\left\|\mathbf{\phi}_{2}(t)-\mathbf{x}_{2}(t)\right\|_{\infty}=K_{12} \varepsilon<\infty\). Consequently, any Ulam constant is at least \(K_{12}\).
Case (iii). Suppose that (3.8), (3.9), and \(\sup_{t\in I}\kappa_{13}(t)<\infty\) hold. Consider the solution \(\mathbf{\phi}_{3}(t)\) of the linear differential system
\[\mathbf{\phi}_{3}^{\prime}-A(t)\mathbf{\phi}_{3}=\mathbf{f}_{1}(t),\]
where \(\mathbf{f}_{1}(t)\) is given by (3.18). Since \(\|\mathbf{f}_{1}(t)\|_{\infty}=\varepsilon\) holds for all \(t\in I\), \(\mathbf{\phi}_{3}(t)\) satisfies (3.5). By Theorem 3.1 (iii), there exists the unique solution of (3.1) denoted by
\[\mathbf{x}_{3}(t)=X(t)\begin{pmatrix}\lim_{t\to b-0}\phi_{1}(t)e^{-\int_{t_{0}}^{t }\lambda_{1}(s)ds}\\ \lim_{t\to a+0}\phi_{2}(t)e^{\int_{t}^{t_{0}}\lambda_{2}(s)ds}\end{pmatrix}\]
such that \(\sup_{t\in I}\|\mathbf{\phi}_{3}(t)-\mathbf{x}_{3}(t)\|_{\infty}\leq K_{13}\varepsilon\), where \(K_{13}=\sup_{t\in I}\kappa_{13}(t)\). Hence, we obtain (3.17) (see the proof of Theorem 3.1 (iii)), and therefore
\[\left\|\mathbf{\phi}_{3}(t)-\mathbf{x}_{3}(t)\right\|_{\infty} =\left\|\begin{pmatrix}-\int_{t}^{b}f_{1}(s)e^{-\int_{t}^{s}\lambda_{1}(\tau)d\tau}ds\\ \int_{a}^{t}f_{2}(s)e^{\int_{s}^{t}\lambda_{2}(\tau)d\tau}ds\end{pmatrix}\right\|_{\infty}=\varepsilon\left\|\begin{pmatrix}-\int_{t}^{b}e^{-\int_{t}^{s}\lambda_{1}(\tau)d\tau+i\int_{t_{0}}^{s}\Im(\lambda_{1}(\tau))d\tau}ds\\ \int_{a}^{t}e^{\int_{s}^{t}\lambda_{2}(\tau)d\tau+i\int_{t_{0}}^{s}\Im(\lambda_{2}(\tau))d\tau}ds\end{pmatrix}\right\|_{\infty}\] \[=\varepsilon\left\|\begin{pmatrix}-e^{-\int_{t}^{t_{0}}\lambda_{1}(\tau)d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds\\ e^{\int_{t_{0}}^{t}\lambda_{2}(\tau)d\tau}\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\end{pmatrix}\right\|_{\infty}\] \[=\varepsilon\max\left\{\left|e^{-\int_{t}^{t_{0}}\lambda_{1}(\tau)d\tau}\right|\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,\left|e^{\int_{t_{0}}^{t}\lambda_{2}(\tau)d\tau}\right|\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\max\left\{e^{-\int_{t}^{t_{0}}\Re(\lambda_{1}(\tau))d\tau}\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda_{1}(\tau))d\tau}ds,e^{\int_{t_{0}}^{t}\Re(\lambda_{2}(\tau))d\tau}\int_{a}^{t}e^{\int_{s}^{t_{0}}\Re(\lambda_{2}(\tau))d\tau}ds\right\}\] \[=\varepsilon\kappa_{13}(t).\]
Thus, we have \(\sup_{t\in I}\left\|\mathbf{\phi}_{3}(t)-\mathbf{x}_{3}(t)\right\|_{\infty}=K_{13} \varepsilon<\infty\). Consequently, any Ulam constant is at least \(K_{13}\). The proof is now complete.
Using Theorems 3.1 and 3.2, we immediately obtain the following result.
**Theorem 3.3**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i), (ii) and (iii) below hold:_
* _if_ (_3.3_),_ (_3.4_), and_ \(\sup_{t\in I}\kappa_{11}(t)<\infty\) _hold, then (_3.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{11}=\sup_{t\in I}\kappa_{11}(t)\)_;_
* _if_ (_3.6_),_ (_3.7_), and_ \(\sup_{t\in I}\kappa_{12}(t)<\infty\) _hold, then (_3.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{12}=\sup_{t\in I}\kappa_{12}(t)\)_;_
* _if_ (_3.8_),_ (_3.9_), and_ \(\sup_{t\in I}\kappa_{13}(t)<\infty\) _hold, then (_3.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{13}=\sup_{t\in I}\kappa_{13}(t)\)_._
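As a numerical illustration (not part of the proofs), the best Ulam constant in Theorem 3.3 (i) can be approximated by quadrature. The sketch below assumes the concrete choices \(\lambda_{1}(t)=2+\sin t\) and \(\lambda_{2}(t)=1.5+\cos t\) on \(I=[0,\infty)\); then \(\min\{\Re(\lambda_{1}(t)),\Re(\lambda_{2}(t))\}\geq 0.5\), so (3.3), (3.4), and \(\sup_{t\in I}\kappa_{11}(t)<\infty\) hold.

```python
# Numerical sketch: approximate K_11 = sup_t kappa_11(t) of Theorem 3.3 (i) for the
# assumed coefficients lambda_1(t) = 2 + sin t, lambda_2(t) = 1.5 + cos t on [0, inf).
import numpy as np

def m(tau):
    # min{Re(lambda_1(tau)), Re(lambda_2(tau))}
    return np.minimum(2.0 + np.sin(tau), 1.5 + np.cos(tau))

T = 60.0                                  # truncation; the tail of kappa_11 is O(e^{-0.5(T-t)})
s = np.linspace(0.0, T, 300001)
ds = s[1] - s[0]
M = np.concatenate(([0.0], np.cumsum((m(s[1:]) + m(s[:-1])) / 2) * ds))   # integral of m from 0 to s

def kappa11(i):
    # kappa_11(s[i]) = integral over [s[i], T] of exp(-(M(sigma) - M(s[i]))), trapezoidal rule
    w = np.exp(-(M[i:] - M[i]))
    return float(np.sum((w[1:] + w[:-1]) / 2) * ds)

# m is 2*pi-periodic, hence so is kappa_11; the supremum over one period suffices.
idx = int(np.searchsorted(s, 2 * np.pi))
K11 = max(kappa11(i) for i in range(0, idx + 1, 50))
print(f"numerical estimate of K_11 ~ {K11:.4f}")
```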
When \(\lambda_{1}(t)\equiv\lambda_{1}\) and \(\lambda_{2}(t)\equiv\lambda_{2}\), (3.1) reduces to the constant-coefficient two-dimensional linear differential system
\[\mathbf{x}^{\prime}=A\mathbf{x},\quad A=\begin{pmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{pmatrix}. \tag{3.19}\]
For this system, we have the following result.
**Corollary 3.4**.: _Let \(I=\mathbb{R}\) and \(\Re(\lambda_{1})\Re(\lambda_{2})\neq 0\). Then (3.19) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c1}:=\max\left\{\frac{1}{|\Re(\lambda_{1})|},\frac{1}{|\Re(\lambda_{2})|}\right\}\)._
Proof.: First, we consider the case \(\Re(\lambda_{1})\geq\Re(\lambda_{2})>0\). Since \(\min\{\Re(\lambda_{1}),\Re(\lambda_{2})\}=\Re(\lambda_{2})\) holds, we have
\[\kappa_{11}(t)=\int_{t}^{\infty}e^{-\int_{t}^{s}\min\{\Re(\lambda_{1}),\Re(\lambda_{2})\}d\tau}ds=\int_{t}^{\infty}e^{-\Re(\lambda_{2})(s-t)}ds=\frac{1}{\Re(\lambda_{2})}=\max\left\{\frac{1}{|\Re(\lambda_{1})|},\frac{1}{|\Re(\lambda_{2})|}\right\}\]
for all \(t\in\mathbb{R}\). Using Theorem 3.3 (i) with \(I=\mathbb{R}\), (3.19) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c1}\).
Next, we consider the case \(0>\Re(\lambda_{1})\geq\Re(\lambda_{2})\). Since \(\max\{\Re(\lambda_{1}),\Re(\lambda_{2})\}=\Re(\lambda_{1})\) holds, we have
\[\kappa_{12}(t)=\int_{-\infty}^{t}e^{\int_{s}^{t}\max\{\Re(\lambda_{1}),\Re( \lambda_{2})\}d\tau}ds=\int_{-\infty}^{t}e^{\Re(\lambda_{1})(t-s)}ds=\frac{1}{ -\Re(\lambda_{1})}=\max\left\{\frac{1}{|\Re(\lambda_{1})|},\frac{1}{|\Re( \lambda_{2})|}\right\}\]
for all \(t\in\mathbb{R}\). Using Theorem 3.3 (ii) with \(I=\mathbb{R}\), (3.19) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c1}\).
Finally, we consider the case \(\Re(\lambda_{1})>0>\Re(\lambda_{2})\). Then
\[\kappa_{13}(t) =\max\left\{\int_{t}^{\infty}e^{-\int_{t}^{s}\Re(\lambda_{1})d \tau}ds,\int_{-\infty}^{t}e^{\int_{s}^{t}\Re(\lambda_{2})d\tau}ds\right\}\] \[=\max\left\{\frac{1}{\Re(\lambda_{1})},\frac{1}{-\Re(\lambda_{2})} \right\}=\max\left\{\frac{1}{|\Re(\lambda_{1})|},\frac{1}{|\Re(\lambda_{2})|}\right\}\]
for all \(t\in\mathbb{R}\). By Theorem 3.3 (iii) with \(I=\mathbb{R}\), (3.19) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c1}\).
**Remark 3.5**.: When \(\lambda_{1}\), \(\lambda_{2}\in\mathbb{R}\) and \(\lambda_{1}\lambda_{2}>0\), we can check that the Ulam constant
\[\max\left\{\frac{1}{|\Re(\lambda_{1})|},\frac{1}{|\Re(\lambda_{2})|}\right\}= \max\left\{\frac{1}{|\lambda_{1}|},\frac{1}{|\lambda_{2}|}\right\}=\|A^{-1}\|_ {\infty}\]
in Corollary 3.4 is the best Ulam constant, by using the result in [3, Theorem 4.3.], where \(\|\cdot\|_{\infty}\) is the induced matrix norm defined by \(\|M\|_{\infty}:=\max_{j\in\{1,2\}}\{|m_{j1}|+|m_{j2}|\}\) for \(M=\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}\). However, to our knowledge no best Ulam constant for (3.19) has ever been derived for the other cases satisfying \(\Re(\lambda_{1})\Re(\lambda_{2})\neq 0\). In other words, it can be argued that new results have been obtained from this study, even if only for constant coefficients.
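The sharpness in Corollary 3.4 and Remark 3.5 can also be observed with a small numerical check (the values of \(\lambda_{1}\), \(\lambda_{2}\), and \(\varepsilon\) below are assumptions): for real \(\lambda_{1},\lambda_{2}<0\), the extremal perturbation from the proof of Theorem 3.2 is the constant vector \(\boldsymbol{f}=\varepsilon(1,1)^{T}\), the constant function \(\boldsymbol{\phi}\equiv-A^{-1}\boldsymbol{f}\) satisfies \(\boldsymbol{\phi}^{\prime}-A\boldsymbol{\phi}=\boldsymbol{f}\), the only solution of (3.19) within a bounded distance on \(\mathbb{R}\) is \(\boldsymbol{x}\equiv\boldsymbol{0}\), and that distance equals \(K_{c1}\varepsilon\).

```python
# Numerical sanity check of Corollary 3.4 / Remark 3.5 (assumed values of lambda, eps).
import numpy as np

lam1, lam2 = -0.5, -2.0                 # assumed real eigenvalues with nonzero real parts
eps = 0.1
A = np.diag([lam1, lam2])
f = eps * np.array([1.0, 1.0])          # extremal constant perturbation (real case)

phi = -np.linalg.solve(A, f)            # constant approximate solution: phi' - A phi = f
deviation = np.max(np.abs(phi))         # max-norm distance to the exact solution x(t) = 0
K_c1 = max(1 / abs(lam1), 1 / abs(lam2))
print(deviation, K_c1 * eps)            # both equal 0.2, so the bound K_c1*eps is attained
```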
## 4. Ulam stability of generalized Jordan normal form (II)
In this section, we consider the two-dimensional linear differential system
\[\mathbf{x}^{\prime}=A(t)\mathbf{x},\quad A(t)=\begin{pmatrix}\lambda(t)&\mu(t)\\ 0&\lambda(t)\end{pmatrix}, \tag{4.1}\]
where \(\lambda\), \(\mu:I\to\mathbb{C}\) are continuous, and \(\mathbf{x}\in\mathbb{C}^{2}\). A fundamental matrix of (4.1) and its inverse matrix are as follows:
\[X(t)=e^{\int_{t_{0}}^{t}\lambda(s)ds}\begin{pmatrix}1&\int_{t_{0}}^{t}\mu(s) ds\\ 0&1\end{pmatrix},\quad X^{-1}(t)=e^{-\int_{t_{0}}^{t}\lambda(s)ds}\begin{pmatrix}1& -\int_{t_{0}}^{t}\mu(s)ds\\ 0&1\end{pmatrix} \tag{4.2}\]
for \(t\in I\), where \(t_{0}\in I\). In this section, the maximum norm \(\|\cdot\|_{\infty}\) will be used. The obtained result is as follows.
**Theorem 4.1**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i) and (ii) below hold:_
* _if_ \[\kappa_{21}(t):=\int_{t}^{b}\left(1+\left|\int_{t}^{s}\mu(\tau)d\tau\right| \right)e^{-\int_{t}^{s}\Re(\lambda(\tau))d\tau}ds\;\;\text{exists for all}\;\;t\in I,\] (4.3) _and_ \(\sup_{t\in I}\kappa_{21}(t)<\infty\)_, then (_4.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{21}:=\sup_{t\in I}\kappa_{21}(t)\)_. Furthermore, if_ \[\lim_{t\to b-0}\int_{t_{0}}^{t}\Re(\lambda(s))ds=\infty,\quad t_{0}\in(a,b),\] (4.4) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\mathbf{\phi}(t)\) _satisfying (_3.5_), there exists the unique solution of (_4.1_) denoted by_ \[\mathbf{x}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}(t)\right)\] _such that_ \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq K_{21}\varepsilon\)_, where_ \(X(t)\) _and_ \(X^{-1}(t)\) _are given in (_4.2_), and_ \((\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}(t))\) _is a well-defined constant vector;_
2. _if_ \[\kappa_{22}(t):=\int_{a}^{t}\left(1+\left|\int_{s}^{t}\mu(\tau)d\tau\right| \right)e^{\int_{s}^{t}\Re(\lambda(\tau))d\tau}ds\;\;\text{exists for all}\;\;t\in I,\] (4.5) _and_ \(\sup_{t\in I}\kappa_{22}(t)<\infty\)_, then (_4.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{22}:=\sup_{t\in I}\kappa_{22}(t)\)_. Furthermore, if_ \[\lim_{t\to a+0}\int_{t}^{t_{0}}\Re(\lambda(s))ds=-\infty,\quad t_{0}\in(a,b),\] (4.6) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\boldsymbol{\phi}(t)\) _satisfying (_3.5_), there exists the unique solution of (_4.1_) denoted by_ \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] _such that_ \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{22}\varepsilon\)_, where_ \(X(t)\) _and_ \(X^{-1}(t)\) _are given in (_4.2_), and_ \((\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}(t))\) _is a well-defined constant vector._
Proof.: Since this theorem can be proved in the same way as Theorem 3.1, only the essential parts are given below.
Case (i). Suppose that (4.3) and \(\sup_{t\in I}\kappa_{21}(t)<\infty\) hold. Let \(\varepsilon>0\) be given, and let \(\boldsymbol{\phi}(t)\) satisfy (3.5). Define \(\boldsymbol{f}(t)\) by (3.10). Then \(\sup_{t\in I}\|\boldsymbol{f}(t)\|_{\infty}\leq\varepsilon\) holds, and \(\boldsymbol{\phi}(t)\) is a solution of (1.1), and so that \(\boldsymbol{\phi}(t)\) is written by (3.11), where \(X(t)\) and \(X^{-1}(t)\) are given in (4.2). Now we consider a solution \(\boldsymbol{x}(t)\) of (4.1) defined by
\[\boldsymbol{x}(t):=X(t)\boldsymbol{x}_{21},\quad\boldsymbol{x}_{21}:=X^{-1}(t _{0})\boldsymbol{\phi}(t_{0})+\int_{t_{0}}^{b}X^{-1}(s)\boldsymbol{f}(s)ds. \tag{4.7}\]
It will be shown later in the calculation that the (improper) integral in (4.7) converges. By (3.11) and (4.7), we obtain (3.13). Let \(\boldsymbol{f}(t)=(f_{1}(t),f_{2}(t))^{T}\). Then using (3.13), (4.2), and (4.3), we have
\[\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty} \leq\int_{t}^{b}\left\|X(t)X^{-1}(s)\boldsymbol{f}(s)\right\|_{ \infty}ds\] \[=\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda(\tau))d\tau}\left\| \begin{pmatrix}1&-\int_{t}^{s}\mu(\tau)d\tau\\ 0&1\end{pmatrix}\begin{pmatrix}f_{1}(s)\\ f_{2}(s)\end{pmatrix}\right\|_{\infty}ds\] \[=\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda(\tau))d\tau}\left\| \begin{pmatrix}f_{1}(s)-f_{2}(s)\int_{t}^{s}\mu(\tau)d\tau\\ f_{2}(s)\end{pmatrix}\right\|_{\infty}ds\] \[\leq\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda(\tau))d\tau}\max \left\{|f_{1}(s)|+|f_{2}(s)|\left|\int_{t}^{s}\mu(\tau)d\tau\right|,|f_{2}(s)| \right\}ds\] \[\leq\varepsilon\kappa_{21}(t)\]
for all \(t\in I\). Hence \(\mathbf{x}_{21}\) is well-defined, and \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq K_{21}\varepsilon\). Therefore, (4.1) is Ulam stable on \(I\), with an Ulam constant \(K_{21}\). Moreover, from (3.11) and (4.7), we have
\[\mathbf{x}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}(t)\right).\]
Next, we consider the solution to (4.1) given by \(\mathbf{y}(t):=X(t)\mathbf{y}_{21}\) with \(\mathbf{y}_{21}\neq\mathbf{x}_{21}\). If \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{y}(t)\|_{\infty}\leq K_{21}\varepsilon\), then
\[\|X(t)(\mathbf{y}_{21}-\mathbf{x}_{21})\|_{\infty}\leq\|\mathbf{\phi}(t)-\mathbf{y}(t)\|_{ \infty}+\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq 2K_{21}\varepsilon\]
for all \(t\in I\). However, by (4.4), we see that
\[\lim_{t\to b-0}\|X(t)(\mathbf{y}_{21}-\mathbf{x}_{21})\|_{\infty} =\lim_{t\to b-0}e^{\int_{t_{0}}^{t}\Re(\lambda(s))ds}\left\|\begin{pmatrix}y_{1}-y_{2}\int_{t_{0}}^{t}\mu(s)ds\\ y_{2}\end{pmatrix}\right\|_{\infty}\] \[=\lim_{t\to b-0}e^{\int_{t_{0}}^{t}\Re(\lambda(s))ds}\max\left\{\left|y_{1}-y_{2}\int_{t_{0}}^{t}\mu(s)ds\right|,|y_{2}|\right\}\] \[\geq\lim_{t\to b-0}e^{\int_{t_{0}}^{t}\Re(\lambda(s))ds}\begin{cases}|y_{1}|&\text{if }y_{2}=0\\ |y_{2}|&\text{if }y_{2}\neq 0\end{cases}=\infty,\]
where \(\mathbf{y}_{21}-\mathbf{x}_{21}=(y_{1},y_{2})^{T}\) and \((y_{1},y_{2})\neq(0,0)\). This is a contradiction. Hence \(\mathbf{x}(t)\) is the unique solution of (4.1) satisfying \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{\infty}\leq K_{21}\varepsilon\).
Case (ii) can be proved by combining the proofs of Case (i) and Theorem 3.1 (ii), so we omit the proof. Thus, the proof is now complete.
The lower bounds of the Ulam constants are as follows.
**Theorem 4.2**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Suppose that \(\mu(t)\) is nonnegative on \(I\). Then the following (i) and (ii) below hold:_
1. _if (_4.3_), (_4.4_), and_ \(\sup_{t\in I}\kappa_{21}(t)<\infty\) _hold, then (_4.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{21}=\sup_{t\in I}\kappa_{21}(t)\)_;_
2. _if (_4.5_), (_4.6_), and_ \(\sup_{t\in I}\kappa_{22}(t)<\infty\) _hold, then (_4.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{22}=\sup_{t\in I}\kappa_{22}(t)\)_._
Proof.: Case (i). Suppose that (4.3), (4.4), and \(\sup_{t\in I}\kappa_{21}(t)<\infty\) hold. Define
\[\mathbf{f}_{2}(t):=\varepsilon e^{i\int_{t_{0}}^{t}\Im(\lambda(s))ds}\begin{pmatrix} 1\\ -1\end{pmatrix},\quad t\in I. \tag{4.8}\]
Now we consider the solution \(\mathbf{\phi}_{1}(t)\) of the system \(\mathbf{\phi}_{1}^{\prime}-A(t)\mathbf{\phi}_{1}=\mathbf{f}_{2}(t)\). Since \(\|\mathbf{f}_{2}(t)\|_{\infty}=\varepsilon\) holds for all \(t\in I\), \(\mathbf{\phi}_{1}(t)\) satisfies (3.5). By Theorem 4.1 (i), there exists the unique solution of (4.1)
denoted by
\[\mathbf{x}_{1}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}_{1}(t)\right)\]
such that \(\sup_{t\in I}\left\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\right\|_{\infty}\leq K_{21}\varepsilon\), where \(K_{21}=\sup_{t\in I}\kappa_{21}(t)\). Hence, we obtain (3.13). Since \(\mu(t)\) is nonnegative on \(I\), we see that
\[\left\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\right\|_{\infty}=\left\|X(t )\int_{t}^{b}X^{-1}(s)\mathbf{f}_{2}(s)ds\right\|_{\infty}\] \[=\varepsilon\left\|\int_{t}^{b}e^{-\int_{t}^{s}\lambda(\tau)d\tau +i\int_{t_{0}}^{s}\Im(\lambda(\tau))d\tau}\begin{pmatrix}1+\int_{t}^{s}\mu( \tau)d\tau\\ -1\end{pmatrix}ds\right\|_{\infty}\] \[=\varepsilon e^{-\int_{t}^{t_{0}}\Re(\lambda(\tau))d\tau}\left\| \int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda(\tau))d\tau}\begin{pmatrix}1+\int _{t}^{s}\mu(\tau)d\tau\\ -1\end{pmatrix}ds\right\|_{\infty}\] \[=\varepsilon e^{-\int_{t}^{t_{0}}\Re(\lambda(\tau))d\tau}\max \left\{\left|\int_{t}^{b}e^{-\int_{t_{0}}^{s}\Re(\lambda(\tau))d\tau}\left(1+ \int_{t}^{s}\mu(\tau)d\tau\right)ds\right|,\left|-\int_{t}^{b}e^{-\int_{t_{0} }^{s}\Re(\lambda(\tau))d\tau}ds\right|\right\}\] \[=\varepsilon\int_{t}^{b}e^{-\int_{t}^{s}\Re(\lambda(\tau))d\tau} \left(1+\int_{t}^{s}\mu(\tau)d\tau\right)ds=\varepsilon\kappa_{21}(t),\]
and so \(\sup_{t\in I}\left\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\right\|_{\infty}=K_{21}\varepsilon\). Consequently, the Ulam constant is at least \(K_{21}\).
Case (ii) can be proved in the same way as Case (i), so we omit the proof. The proof is now complete.
Using Theorems 4.1 and 4.2, we immediately obtain the following result.
**Theorem 4.3**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Suppose that \(\mu(t)\) is nonnegative on \(I\). Then the following (i) and (ii) below hold:_
* _if (_4.3_), (_4.4_), and_ \(\sup_{t\in I}\kappa_{21}(t)<\infty\) _hold, then (_4.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{21}=\sup_{t\in I}\kappa_{21}(t)\)_;_
* _if (_4.5_), (_4.6_), and_ \(\sup_{t\in I}\kappa_{22}(t)<\infty\) _hold, then (_4.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{22}=\sup_{t\in I}\kappa_{22}(t)\)_._
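As with form (I), the best Ulam constant in Theorem 4.3 (i) can be approximated numerically. The sketch below assumes the illustrative coefficients \(\lambda(t)=1+0.5\sin t\) and the nonnegative weight \(\mu(t)=\cos^{2}t\) on \(I=[0,\infty)\), for which (4.3), (4.4), and \(\sup_{t\in I}\kappa_{21}(t)<\infty\) hold.

```python
# Numerical sketch: approximate K_21 = sup_t kappa_21(t) of Theorem 4.3 (i) for the
# assumed coefficients lambda(t) = 1 + 0.5 sin t and mu(t) = cos^2 t on [0, inf).
import numpy as np

def lam_re(tau):
    return 1.0 + 0.5 * np.sin(tau)        # Re(lambda(tau)) >= 0.5

def mu(tau):
    return np.cos(tau) ** 2               # nonnegative weight mu(tau)

T = 60.0
s = np.linspace(0.0, T, 300001)
ds = s[1] - s[0]

def cumtrap(g):
    # cumulative trapezoidal integral of the sampled function g from 0 to s
    return np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2) * ds))

L, Mu = cumtrap(lam_re(s)), cumtrap(mu(s))

def kappa21(i):
    w = (1.0 + np.abs(Mu[i:] - Mu[i])) * np.exp(-(L[i:] - L[i]))
    return float(np.sum((w[1:] + w[:-1]) / 2) * ds)

idx = int(np.searchsorted(s, 2 * np.pi))            # both coefficients are 2*pi-periodic
K21 = max(kappa21(i) for i in range(0, idx + 1, 50))
print(f"numerical estimate of K_21 ~ {K21:.4f}")
```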
When \(\lambda(t)\equiv\lambda\) and \(\mu(t)\equiv 1\), (4.1) reduces to the constant-coefficient two-dimensional linear differential system
\[\mathbf{x}^{\prime}=A\mathbf{x},\quad A=\begin{pmatrix}\lambda&1\\ 0&\lambda\end{pmatrix}. \tag{4.9}\]
For this system, we have the following result.
**Corollary 4.4**.: _Let \(I=\mathbb{R}\) and \(\Re(\lambda)\neq 0\). Then (4.9) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c2}:=\frac{|\Re(\lambda)|+1}{(\Re(\lambda))^{2}}\)._
Proof.: First, we consider the case \(\Re(\lambda)>0\). Then
\[\kappa_{21}(t)=\int_{t}^{\infty}(1+s-t)e^{-\Re(\lambda)(s-t)}ds=\frac{1}{\Re( \lambda)}+\frac{1}{(\Re(\lambda))^{2}}=\frac{|\Re(\lambda)|+1}{(\Re(\lambda))^ {2}}.\]
for all \(t\in\mathbb{R}\). By Theorem 4.3 (i) with \(I=\mathbb{R}\), (4.9) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c2}\).
Next, we consider the case \(\Re(\lambda)<0\). Then
\[\kappa_{22}(t)=\int_{-\infty}^{t}(1+t-s)e^{\Re(\lambda)(t-s)}ds=\frac{1}{-\Re( \lambda)}+\frac{1}{(\Re(\lambda))^{2}}=\frac{|\Re(\lambda)|+1}{(\Re(\lambda) )^{2}}.\]
for all \(t\in\mathbb{R}\). By Theorem 4.3 (ii) with \(I=\mathbb{R}\), (4.9) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c2}\).
**Remark 4.5**.: When \(\lambda\in\mathbb{R}\setminus\{0\}\), we can check that the Ulam constant
\[\frac{|\Re(\lambda)|+1}{(\Re(\lambda))^{2}}=\frac{|\lambda|+1}{\lambda^{2}}= \|A^{-1}\|_{\infty}\]
in Corollary 4.4 is the best Ulam constant, by using the result in [3, Theorem 4.3.].
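Analogously, the sharpness in Corollary 4.4 and Remark 4.5 admits a quick numerical check (the values of \(\lambda\) and \(\varepsilon\) below are assumptions): for real \(\lambda>0\), the extremal perturbation \(\boldsymbol{f}_{2}\) from the proof of Theorem 4.2 reduces to the constant vector \(\varepsilon(1,-1)^{T}\), the constant function \(\boldsymbol{\phi}\equiv-A^{-1}\boldsymbol{f}\) satisfies \(\boldsymbol{\phi}^{\prime}-A\boldsymbol{\phi}=\boldsymbol{f}\), and its max-norm distance to the exact solution \(\boldsymbol{x}\equiv\boldsymbol{0}\) of (4.9) equals \(K_{c2}\varepsilon\).

```python
# Numerical sanity check of Corollary 4.4 / Remark 4.5 (assumed values of lambda, eps).
import numpy as np

lam = 0.5                                # assumed real eigenvalue with Re(lambda) > 0
eps = 0.1
A = np.array([[lam, 1.0], [0.0, lam]])
f = eps * np.array([1.0, -1.0])          # extremal constant perturbation f_2 (real case)

phi = -np.linalg.solve(A, f)             # constant approximate solution: phi' - A phi = f
deviation = np.max(np.abs(phi))          # max-norm distance to the exact solution x(t) = 0
K_c2 = (abs(lam) + 1) / lam**2
print(deviation, K_c2 * eps)             # both equal 0.6, so the bound K_c2*eps is attained
```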
## 5. Ulam stability of generalized Jordan normal form (III)
In this section, we consider the two-dimensional linear differential system
\[\boldsymbol{x}^{\prime}=A(t)\boldsymbol{x},\quad A(t)=\begin{pmatrix}\alpha(t )&\beta(t)\\ -\beta(t)&\alpha(t)\end{pmatrix}, \tag{5.1}\]
where \(\alpha\), \(\beta:I\to\mathbb{R}\) are continuous, and \(\boldsymbol{x}\in\mathbb{R}^{2}\). A fundamental matrix of (5.1) and its inverse matrix are as follows:
\[\begin{split} X(t)=e^{\int_{t_{0}}^{t}\alpha(s)ds}\begin{pmatrix} \cos\int_{t_{0}}^{t}\beta(s)ds&\sin\int_{t_{0}}^{t}\beta(s)ds\\ -\sin\int_{t_{0}}^{t}\beta(s)ds&\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix}, \\ X^{-1}(t)=e^{-\int_{t_{0}}^{t}\alpha(s)ds}\begin{pmatrix}\cos\int_{t_{0}}^{t} \beta(s)ds&-\sin\int_{t_{0}}^{t}\beta(s)ds\\ \sin\int_{t_{0}}^{t}\beta(s)ds&\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix}\end{split} \tag{5.2}\]
for \(t\in I\), where \(t_{0}\in I\). In this section, we use the Euclidean norm \(\|\boldsymbol{v}\|_{2}:=\sqrt{|v_{1}|^{2}+|v_{2}|^{2}}\) for a vector \(\boldsymbol{v}=(v_{1},v_{2})^{T}\). Differences in norms do not affect Ulam stability, but they can affect Ulam constants. As the proof of the theorem below shows, the Euclidean norm is a natural choice for (5.1).
**Theorem 5.1**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i) and (ii) below hold:_
1. _if_ \[\kappa_{31}(t):=\int_{t}^{b}e^{-\int_{t}^{s}\alpha(\tau)d\tau}ds\ \ \text{exists for all}\ \ t\in I,\] (5.3) _and_ \(\sup_{t\in I}\kappa_{31}(t)<\infty\)_, then (_5.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{31}:=\sup_{t\in I}\kappa_{31}(t)\)_. Furthermore, if_ \[\lim_{t\to b-0}\int_{t_{0}}^{t}\alpha(s)ds=\infty,\quad t_{0}\in(a,b),\] (5.4) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\boldsymbol{\phi}(t)\) _satisfying (_3.5_), there exists the unique solution of (_5.1_) denoted by_ \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] _such that_ \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{2}\leq K_{31}\varepsilon\)_, where_ \(X(t)\) _and_ \(X^{-1}(t)\) _are given in (_5.2_), and_ \((\lim_{t\to b-0}X^{-1}(t)\boldsymbol{\phi}(t))\) _is a well-defined constant vector;_
2. _if_ \[\kappa_{32}(t):=\int_{a}^{t}e^{\int_{s}^{t}\alpha(\tau)d\tau}ds\ \ \text{exists for all}\ \ t\in I,\] (5.5) _and_ \(\sup_{t\in I}\kappa_{32}(t)<\infty\)_, then (_5.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \(K_{32}:=\sup_{t\in I}\kappa_{32}(t)\)_. Furthermore, if_ \[\lim_{t\to a+0}\int_{t}^{t_{0}}\alpha(s)ds=-\infty,\quad t_{0}\in(a,b),\] (5.6) _then for any_ \(\varepsilon>0\) _and for any continuously differentiable function_ \(\boldsymbol{\phi}(t)\) _satisfying (_3.5_), there exists the unique solution of (_5.1_) denoted by_ \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] _such that_ \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{2}\leq K_{32}\varepsilon\)_, where_ \(X(t)\) _and_ \(X^{-1}(t)\) _are given in (_5.2_), and_ \((\lim_{t\to a+0}X^{-1}(t)\boldsymbol{\phi}(t))\) _is a well-defined constant vector._
Proof.: Since this theorem can be proved in the same way as Theorem 3.1, only the essential parts are given below. Note that the maximum norm used in the proofs of the previous theorems can be replaced by the Euclidean norm, except in the computations that rely on the definition of the maximum norm; throughout this proof we use the Euclidean norm.
Case (i). Suppose that (5.3) and \(\sup_{t\in I}\kappa_{31}(t)<\infty\) hold. Let \(\varepsilon>0\) be given, and let \(\boldsymbol{\phi}(t)\) satisfy (3.5). Define \(\boldsymbol{f}(t)\) by (3.10). Then \(\sup_{t\in I}\|\boldsymbol{f}(t)\|_{2}\leq\varepsilon\) holds, and \(\boldsymbol{\phi}(t)\) is a solution of (1.1), and so that \(\boldsymbol{\phi}(t)\) is written by (3.11), where \(X(t)\) and \(X^{-1}(t)\) are given in (5.2). Now we consider a solution \(\boldsymbol{x}(t)\) of (5.1) defined by
\[\boldsymbol{x}(t):=X(t)\boldsymbol{x}_{31},\quad\boldsymbol{x}_{31}:=X^{-1}(t _{0})\boldsymbol{\phi}(t_{0})+\int_{t_{0}}^{b}X^{-1}(s)\boldsymbol{f}(s)ds. \tag{5.7}\]
It will be shown later in the calculation that the (improper) integral in (5.7) converges. By (3.11) and (5.7), we obtain (3.13). Let \(\mathbf{f}(t)=(f_{1}(t),f_{2}(t))^{T}\). Then using (3.13), (5.2), (5.3), and the definition of the Euclidean norm, we have
\[\begin{split}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{2}&\leq \int_{t}^{b}\left\|X(t)X^{-1}(s)\mathbf{f}(s)\right\|_{2}ds\\ &=\int_{t}^{b}e^{-\int_{t}^{s}\alpha(\tau)d\tau}\left\|\begin{pmatrix} f_{1}(s)\cos\int_{s}^{t}\beta(\tau)d\tau-f_{2}(s)\sin\int_{s}^{t}\beta(\tau)d\tau \\ f_{1}(s)\sin\int_{s}^{t}\beta(\tau)d\tau+f_{2}(s)\cos\int_{s}^{t}\beta(\tau)d \tau\end{pmatrix}\right\|_{2}ds\\ &=\int_{t}^{b}e^{-\int_{t}^{s}\alpha(\tau)d\tau}\|\mathbf{f}(s)\|_{2}ds \\ &\leq\varepsilon\kappa_{31}(t)\end{split} \tag{5.8}\]
for all \(t\in I\). Hence \(\mathbf{x}_{31}\) is well-defined, and \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{2}\leq K_{31}\varepsilon\). Therefore, (5.1) is Ulam stable on \(I\), with an Ulam constant \(K_{31}\). Moreover, from (3.11) and (5.7), we have
\[\mathbf{x}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}(t)\right).\]
Next, we consider the solution to (5.1) given by \(\mathbf{y}(t):=X(t)\mathbf{y}_{31}\) with \(\mathbf{y}_{31}\neq\mathbf{x}_{31}\). If \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{y}(t)\|_{2}\leq K_{31}\varepsilon\), then
\[\|X(t)(\mathbf{y}_{31}-\mathbf{x}_{31})\|_{2}\leq\|\mathbf{\phi}(t)-\mathbf{y}(t)\|_{2}+\|\mathbf{ \phi}(t)-\mathbf{x}(t)\|_{2}\leq 2K_{31}\varepsilon\]
for all \(t\in I\). However, by (5.4), we see that
\[\begin{split}\lim_{t\to b-0}\|X(t)(\mathbf{y}_{31}-\mathbf{x}_{31})\|_{2}& =\lim_{t\to b-0}e^{\int_{t_{0}}^{t}\alpha(s)ds}\left\|\begin{pmatrix} y_{1}\cos\int_{t_{0}}^{t}\beta(s)ds+y_{2}\sin\int_{t_{0}}^{t}\beta(s)ds \\ -y_{1}\sin\int_{t_{0}}^{t}\beta(s)ds+y_{2}\cos\int_{t_{0}}^{t}\beta(s)ds \end{pmatrix}\right\|_{2}\\ &=\lim_{t\to b-0}e^{\int_{t_{0}}^{t}\alpha(s)ds}\|(\mathbf{y}_{31}-\mathbf{x}_{31}) \|_{2}=\infty,\end{split}\]
where \(\mathbf{y}_{31}-\mathbf{x}_{31}=(y_{1},y_{2})^{T}\) and \((y_{1},y_{2})\neq(0,0)\). This is a contradiction. Hence \(\mathbf{x}(t)\) is the unique solution of (5.1) satisfying \(\sup_{t\in I}\|\mathbf{\phi}(t)-\mathbf{x}(t)\|_{2}\leq K_{31}\varepsilon\).
Case (ii) can be proved by combining the proofs of Case (i) and Theorem 3.1 (ii), so we omit the proof. Thus, the proof is now complete.
The lower bounds of the Ulam constants are as follows.
**Theorem 5.2**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i) and (ii) below hold:_
1. _if (_5.3_), (_5.4_), and_ \(\sup_{t\in I}\kappa_{31}(t)<\infty\) _hold, then (_5.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{31}=\sup_{t\in I}\kappa_{31}(t)\)_;_
2. _if (_5.5_), (_5.6_), and_ \(\sup_{t\in I}\kappa_{32}(t)<\infty\) _hold, then (_5.1_) is Ulam stable on_ \(I\)_, and any Ulam constant is greater than or equal to_ \(K_{32}=\sup_{t\in I}\kappa_{32}(t)\)_._
Proof.: Case (i). Suppose that (5.3), (5.4), and \(\sup_{t\in I}\kappa_{31}(t)<\infty\) hold. Let \(\mathbf{x}_{*}\in\mathbb{R}^{2}\setminus\{\mathbf{0}\}\). Define
\[\mathbf{f}_{3}(t):=\frac{\varepsilon e^{-\int_{t_{0}}^{t}\alpha(s)ds}}{\|\mathbf{x}_{*} \|_{2}}X(t)\mathbf{x}_{*},\quad t\in I, \tag{5.9}\]
where \(X(t)\) is given in (5.2). Now we consider the solution \(\mathbf{\phi}_{1}(t)\) of the system \(\mathbf{\phi}_{1}^{\prime}-A(t)\mathbf{\phi}_{1}=\mathbf{f}_{3}(t)\). Since \(\|\mathbf{f}_{3}(t)\|_{2}=\varepsilon\) holds for all \(t\in I\), \(\mathbf{\phi}_{1}(t)\) satisfies (3.5). By Theorem 5.1 (i), there exists the unique solution of (5.1) denoted by
\[\mathbf{x}_{1}(t)=X(t)\left(\lim_{t\to b-0}X^{-1}(t)\mathbf{\phi}_{1}(t)\right)\]
such that \(\sup_{t\in I}\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\|_{2}\leq K_{31}\varepsilon\), where \(K_{31}=\sup_{t\in I}\kappa_{31}(t)\). Hence, we obtain (3.13). Since \(\|X(t)\mathbf{x}_{*}\|_{2}=e^{\int_{t_{0}}^{t}\alpha(s)ds}\|\mathbf{x}_{*}\|_{2}\) holds, we see that
\[\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\|_{2} =\left\|X(t)\int_{t}^{b}X^{-1}(s)\mathbf{f}_{3}(s)ds\right\|_{2}= \left\|X(t)\int_{t}^{b}\frac{\varepsilon e^{-\int_{t_{0}}^{s}\alpha(\tau)d\tau }}{\|\mathbf{x}_{*}\|_{2}}\mathbf{x}_{*}ds\right\|_{2}\] \[=\left\|X(t)\mathbf{x}_{*}\right\|_{2}\int_{t}^{b}\frac{\varepsilon e ^{-\int_{t_{0}}^{s}\alpha(\tau)d\tau}}{\|\mathbf{x}_{*}\|_{2}}ds=\varepsilon\kappa _{31}(t),\]
and so that \(\sup_{t\in I}\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\|_{2}=K_{31}\varepsilon\). Consequently, the Ulam constant is at least \(K_{31}\).
Case (ii) can be proved in the same way as Case (i), so we omit the proof. The proof is now complete.
Using Theorems 5.1 and 5.2, we immediately obtain the following result.
**Theorem 5.3**.: _Let \(I\) be either \((a,b)\), \((a,b]\), \([a,b)\) or \([a,b]\), where \(-\infty\leq a<b\leq\infty\). Then the following (i) and (ii) below hold:_
* _if (_5.3_), (_5.4_), and_ \(\sup_{t\in I}\kappa_{31}(t)<\infty\) _hold, then (_5.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{31}=\sup_{t\in I}\kappa_{31}(t)\)_;_
* _if (_5.5_), (_5.6_), and_ \(\sup_{t\in I}\kappa_{32}(t)<\infty\) _hold, then (_5.1_) is Ulam stable on_ \(I\)_, and the best Ulam constant is_ \(K_{32}=\sup_{t\in I}\kappa_{32}(t)\)_._
When \(\alpha(t)\equiv\alpha\) and \(\beta(t)\equiv\beta\), (5.1) reduces to the constant coefficients two-dimensional linear differential system
\[\mathbf{x}^{\prime}=A\mathbf{x},\quad A=\begin{pmatrix}\alpha&\beta\\ -\beta&\alpha\end{pmatrix}. \tag{5.10}\]
For this system, we have the following result.
**Corollary 5.4**.: _Let \(I=\mathbb{R}\) and \(\alpha\neq 0\). Then (5.10) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c3}:=\frac{1}{|\alpha|}\)._
Proof.: First, we consider the case \(\alpha>0\). Then
\[\kappa_{31}(t)=\int_{t}^{\infty}e^{-\alpha(s-t)}ds=\frac{1}{\alpha}=\frac{1}{| \alpha|},\]
for all \(t\in\mathbb{R}\). By Theorem 5.3 (i) with \(I=\mathbb{R}\), (5.10) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c3}\).
Next, we consider the case \(\alpha<0\). Then
\[\kappa_{32}(t)=\int_{-\infty}^{t}e^{\alpha(t-s)}ds=\frac{1}{-\alpha}=\frac{1}{ |\alpha|},\]
for all \(t\in\mathbb{R}\). By Theorem 5.3 (ii) with \(I=\mathbb{R}\), (5.10) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant is \(K_{c3}\).
**Remark 5.5**.: If we use the maximum norm instead of the Euclidean norm in (5.8) in the proof of Theorem 5.1, then we obtain
\[\begin{split}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}&\leq\int_{t}^{b}e^{-\int_{t}^{s}\alpha(\tau)d\tau}\left\|\begin{pmatrix}f_{1}(s)\cos\int_{s}^{t}\beta(\tau)d\tau-f_{2}(s)\sin\int_{s}^{t}\beta(\tau)d\tau\\ f_{1}(s)\sin\int_{s}^{t}\beta(\tau)d\tau+f_{2}(s)\cos\int_{s}^{t}\beta(\tau)d\tau\end{pmatrix}\right\|_{\infty}ds\\ &=\int_{t}^{b}e^{-\int_{t}^{s}\alpha(\tau)d\tau}\sqrt{(f_{1}(s))^{2}+(f_{2}(s))^{2}}\\ &\quad\times\max\left\{\left|\sin\left(\frac{\pi}{2}+\int_{s}^{t}\beta(\tau)d\tau+\gamma(s)\right)\right|,\left|\sin\left(\int_{s}^{t}\beta(\tau)d\tau+\gamma(s)\right)\right|\right\}ds\\ &\leq\varepsilon\sqrt{2}\kappa_{31}(t),\end{split}\]
where
\[\frac{f_{2}(t)}{\sqrt{(f_{1}(t))^{2}+(f_{2}(t))^{2}}}=\sin\gamma(t),\quad\frac{f_{1}(t)}{\sqrt{(f_{1}(t))^{2}+(f_{2}(t))^{2}}}=\cos\gamma(t)\]
for \(t\in I\). This says that an Ulam constant is \(\sqrt{2}K_{31}\), when we use the maximum norm. On the other hand, if we use the maximum norm instead of the Euclidean norm in (5.9) in the proof of Theorem 5.2, and we choose \(\boldsymbol{x}_{*}=(1,-1)^{T}\), then we obtain
\[\|\boldsymbol{\phi}_{1}(t)-\boldsymbol{x}_{1}(t)\|_{\infty}=\|X( t)\boldsymbol{x}_{*}\|_{\infty}\int_{t}^{b}\frac{\varepsilon e^{-\int_{t_{0}}^{s} \alpha(\tau)d\tau}}{\|\boldsymbol{x}_{*}\|_{\infty}}ds\] \[=\left\|\begin{pmatrix}\cos\int_{t_{0}}^{t}\beta(s)ds-\sin\int_{t _{0}}^{t}\beta(s)ds\\ -\sin\int_{t_{0}}^{t}\beta(s)ds-\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix} \right\|_{\infty}\int_{t}^{b}\varepsilon e^{-\int_{t}^{s}\alpha(\tau)d\tau}ds\] \[=\sqrt{2}\max\left\{\left|\sin\left(\int_{t_{0}}^{t}\beta(s)ds+ \frac{3\pi}{4}\right)\right|,\left|\sin\left(\int_{t_{0}}^{t}\beta(s)ds+\frac {\pi}{4}\right)\right|\right\}\int_{t}^{b}\varepsilon e^{-\int_{t}^{s} \alpha(\tau)d\tau}ds\]
for \(t\in I\). Since the two arguments of the sine functions differ by \(\frac{\pi}{2}\), at least one of the two moduli is at least \(\frac{1}{\sqrt{2}}\), and thus \(\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\|_{\infty}\geq\varepsilon\kappa_{31}(t)\) for \(t\in I\). Moreover, we assume that \(I=\mathbb{R}\), \(\alpha(t)\equiv\alpha>0\) and \(\beta(t)\equiv\beta\neq 0\). Then we see that
\[\|\mathbf{\phi}_{1}(t)-\mathbf{x}_{1}(t)\|_{\infty}=\frac{\sqrt{2}\varepsilon}{\alpha} \max\left\{\left|\sin\left(\beta(t-t_{0})+\frac{3\pi}{4}\right)\right|,\left| \sin\left(\beta(t-t_{0})+\frac{\pi}{4}\right)\right|\right\}\]
for \(t\in I\). Define \(t_{n}=\frac{1}{\beta}\left(\frac{\pi}{4}+n\pi\right)+t_{0}\) for \(n\in\mathbb{Z}\). If we choose \(t=t_{n}\) for \(n\in\mathbb{Z}\), we have
\[\|\mathbf{\phi}_{1}(t_{n})-\mathbf{x}_{1}(t_{n})\|_{\infty}=\frac{\sqrt{2}\varepsilon }{\alpha},\quad n\in\mathbb{Z}.\]
Therefore, any Ulam constant is greater than or equal to \(\frac{\sqrt{2}}{\alpha}\), and so the best Ulam constant for (5.10) when using the maximum norm is \(\frac{\sqrt{2}}{\alpha}\).
By the above remark, the following result holds.
**Theorem 5.6**.: _Let \(I=\mathbb{R}\) and \(\alpha\neq 0\) and \(\beta\neq 0\). Then (5.10) is Ulam stable on \(\mathbb{R}\), and the best Ulam constant when using the maximum norm is \(K_{c4}:=\frac{\sqrt{2}}{|\alpha|}\)._
Proof.: The case \(\alpha>0\) was treated in Remark 5.5; the proof for \(\alpha<0\) is similar. Therefore, the proof is omitted.
**Remark 5.7**.: Note that Theorem 5.6 is also a new theorem. In [3], the lower bound of the Ulam constant for (5.10) was not proved. In addition, from the conclusions of Corollary 5.4 and Theorem 5.6, we find that the best Ulam constants vary depending on the choice of norm.
## 6. Examples and approximations of node, saddle, and focus
**Example 6.1**.: In (3.1), let \(I=(0,1)\), \(A(t)=\begin{pmatrix}\frac{1}{1-t}&0\\ 0&-\frac{1}{t}\end{pmatrix}\), and \(t_{0}\in I\). Then
\[X(t)=\begin{pmatrix}\frac{1-t_{0}}{1-t}&0\\ 0&\frac{t_{0}}{t}\end{pmatrix}\quad\text{and}\quad X^{-1}(t)=\begin{pmatrix} \frac{1-t}{1-t_{0}}&0\\ 0&\frac{t}{t_{0}}\end{pmatrix},\]
and all conditions of Theorem 3.1 (iii) are satisfied, namely:
* Since \[\kappa_{13}(t):=\max\left\{\int_{t}^{1}e^{-\int_{t}^{s}\frac{1}{1-\tau}d\tau} ds,\int_{0}^{t}e^{\int_{s}^{t}\left(-\frac{1}{\tau}\right)d\tau}ds\right\}= \frac{1}{2}\left(\left|t-\frac{1}{2}\right|+\frac{1}{2}\right)\] exists for all \(t\in I\), and \(\sup_{t\in I}\kappa_{13}(t)=\frac{1}{2}<\infty\), then (3.1) is Ulam stable on \(I\), with an Ulam constant \(K_{13}:=\frac{1}{2}\). Furthermore, since \[\lim_{t\to 1-0}\int_{t_{0}}^{t}\frac{1}{1-s}ds=\lim_{t\to 1-0}\log\frac{1-t_{0}} {1-t}=\infty,\quad\lim_{t\to+0}\int_{t_{0}}^{t}\frac{1}{s}ds=\lim_{t\to+0} \log\frac{t_{0}}{t}=\infty,\]
then for any \(\varepsilon>0\) and for any continuously differentiable function \(\boldsymbol{\phi}(t)=(\phi_{1}(t),\phi_{2}(t))^{T}\) satisfying (3.5), there exists the unique solution of (3.1) denoted by
\[\boldsymbol{x}(t)=X(t)\begin{pmatrix}\lim_{t\to 1-0}\frac{(1-t)\phi_{1}(t)}{1-t_{0}} \\ \lim_{t\to 0}\frac{t\phi_{2}(t)}{t_{0}}\end{pmatrix}\]
such that \(\sup_{t\in I}\left\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\right\|_{\infty} \leq K_{13}\varepsilon\), where \(\begin{pmatrix}\lim_{t\to 1-0}\frac{(1-t)\phi_{1}(t)}{1-t_{0}}\\ \lim_{t\to 0}\frac{t\phi_{2}(t)}{t_{0}}\end{pmatrix}\) is a well-defined constant vector.
In addition, by Theorem 3.3 (iii), \(K_{13}=\frac{1}{2}\) is the best Ulam constant for (3.1) on \(I\).
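For the reader's convenience, the closed form of \(\kappa_{13}\) in this example follows from a short direct computation with the fundamental matrix above:
\[\int_{t}^{1}e^{-\int_{t}^{s}\frac{d\tau}{1-\tau}}ds=\int_{t}^{1}\frac{1-s}{1-t}ds=\frac{1-t}{2},\qquad\int_{0}^{t}e^{\int_{s}^{t}\left(-\frac{1}{\tau}\right)d\tau}ds=\int_{0}^{t}\frac{s}{t}ds=\frac{t}{2},\]
so that \(\kappa_{13}(t)=\frac{1}{2}\max\{1-t,t\}=\frac{1}{2}\left(\left|t-\frac{1}{2}\right|+\frac{1}{2}\right)\).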
**Remark 6.2**.: Example 6.1 illustrates that our results (Theorems 3.1-3.3, 4.1-4.3, and 5.1-5.3) also apply to systems that have blow-up solutions. To the best of the author's knowledge, no studies of Ulam stability for equations with blow-up solutions are known so far.
**Remark 6.3**.: In many results [5, 6, 8, 12, 15], Ulam stability is established under the assumption that the linear part (of the nonlinear system) admits an exponential dichotomy on \(I=[0,\infty)\). For the discrete problem this corresponds to a discrete dichotomy, and in the case of periodic coefficients it corresponds to the monodromy matrix being hyperbolic. However, Example 6.1 suggests that the exponential dichotomy is not required for our results.
**Example 6.4**.: In (4.1), let \(I=\mathbb{R}\) and
\[A(t)=A_{1}(t)=\begin{pmatrix}1+i\beta(t)&\frac{2}{\sqrt{\pi}}e^{-t^{2}}\\ 0&1+i\beta(t)\end{pmatrix}\quad\text{or}\quad A(t)=A_{-1}(t)=\begin{pmatrix}-1 +i\beta(t)&\frac{2}{\sqrt{\pi}}e^{-t^{2}}\\ 0&-1+i\beta(t)\end{pmatrix}\]
for \(t\in I\), where \(\beta:I\to\mathbb{R}\) is continuous. Note that either
\[X(t)=X_{1}(t) =e^{t}e^{i\int_{0}^{t}\beta(s)ds}\begin{pmatrix}1&\text{erf}(t)\\ 0&1\end{pmatrix},\quad X_{1}^{-1}(t)=e^{-t}e^{-i\int_{0}^{t}\beta(s)ds} \begin{pmatrix}1&-\text{erf}(t)\\ 0&1\end{pmatrix},\quad\text{or} \tag{6.1}\] \[X(t)=X_{-1}(t) =e^{-t}e^{i\int_{0}^{t}\beta(s)ds}\begin{pmatrix}1&\text{erf}(t) \\ 0&1\end{pmatrix},\quad X_{-1}^{-1}(t)=e^{t}e^{-i\int_{0}^{t}\beta(s)ds} \begin{pmatrix}1&-\text{erf}(t)\\ 0&1\end{pmatrix}, \tag{6.2}\]
where erf is the error function. Then Theorem 4.1 (i) holds for \(A_{1}\) and (ii) holds for \(A_{-1}\). In particular, we have the following details.
* Since \[\kappa_{21}(t):=\int_{t}^{\infty}\left(1+\left|\int_{t}^{s}\frac{2}{\sqrt{\pi }}e^{-\tau^{2}}d\tau\right|\right)e^{-\int_{t}^{s}(1)d\tau}ds=1+e^{\frac{1}{4 }+t}\,\text{erfc}\left(\frac{1}{2}+t\right)\] (6.3) exists for all \(t\in I\), where erfc is the complementary error function, and \(\sup_{t\in I}\kappa_{21}(t)=1.78395<\infty\) at \(t=-0.603489\), then (4.1) is Ulam stable on \(I\), with an Ulam constant
\(K_{21}:=1.78395\). Furthermore, since \[\lim_{t\to\infty}\int_{t_{0}}^{t}(1)ds=\infty,\quad t_{0}\in(-\infty,\infty),\] (6.4) then for any \(\varepsilon>0\) and for any continuously differentiable function \(\boldsymbol{\phi}(t)\) satisfying (3.5), there exists the unique solution of (4.1) denoted by \[\boldsymbol{x}(t)=X_{1}(t)\left(\lim_{t\to\infty}X_{1}^{-1}(t)\boldsymbol{\phi} (t)\right)\] such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{21} \varepsilon=1.78395\varepsilon\), where \(X_{1}(t)\) and \(X_{1}^{-1}(t)\) are given in (6.1), and \(\left(\lim_{t\to\infty}X_{1}^{-1}(t)\boldsymbol{\phi}(t)\right)\) is a well-defined constant vector.
2. Since \[\kappa_{22}(t):=\int_{-\infty}^{t}\left(1+\left|\int_{s}^{t}\frac{2}{\sqrt{ \pi}}e^{-\tau^{2}}d\tau\right|\right)e^{\int_{s}^{t}(-1)d\tau}ds=1+e^{\frac{ 1}{4}-t}\operatorname{erfc}\left(\frac{1}{2}-t\right)\] (6.5) exists for all \(t\in I\), and \(\sup_{t\in I}\kappa_{22}(t)=1.78395<\infty\) at \(t=0.603489\), then (4.1) is Ulam stable on \(I\), with an Ulam constant \(K_{22}=1.78395\). Furthermore, since \[\lim_{t\to-\infty}\int_{t}^{t_{0}}(-1)ds=-\infty,\quad t_{0}\in(-\infty,\infty)\] (6.6) then for any \(\varepsilon>0\) and for any continuously differentiable function \(\boldsymbol{\phi}(t)\) satisfying (3.5), there exists the unique solution of (4.1) denoted by \[\boldsymbol{x}(t)=X_{-1}(t)\left(\lim_{t\to-\infty}X_{-1}^{-1}(t)\boldsymbol{ \phi}(t)\right)\] such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{22} \varepsilon=1.78395\varepsilon\), where \(X_{-1}(t)\) and \(X_{-1}^{-1}(t)\) are given in (6.2), and \(\left(\lim_{t\to-\infty}X_{-1}^{-1}(t)\boldsymbol{\phi}(t)\right)\) is a well-defined constant vector.
**Example 6.5**.: In (4.1), let \(A(t)=\begin{pmatrix}\frac{2t}{1+t^{2}}&1\\ 0&\frac{2t}{1+t^{2}}\end{pmatrix}\) and \(I=\mathbb{R}\). Then \(X(t)=(1+t^{2})\begin{pmatrix}1&t\\ 0&1\end{pmatrix}\) is a fundamental matrix for (4.1). Given \(\varepsilon>0\), take \(\boldsymbol{f}(t)=\begin{pmatrix}\varepsilon\\ 0\end{pmatrix}\) and \(\boldsymbol{\phi}(t)=X(t)\begin{pmatrix}\varepsilon\tan^{-1}t\\ 0\end{pmatrix}\). It follows that \(\boldsymbol{\phi}\) satisfies (3.10), and for any solution \(\boldsymbol{x}(t)=X(t)\boldsymbol{x}_{0}\) of (4.1), we have that
\[\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}=\left\|X(t)\left(\begin{pmatrix} \varepsilon\tan^{-1}t\\ 0\end{pmatrix}-\boldsymbol{x}_{0}\right)\right\|_{\infty}\]
is unbounded on \(I=(-\infty,\infty)\) for any choice of \(\boldsymbol{x}_{0}\). Thus (4.1) is not Ulam stable on \(I\) for this \(A\).
Note that, since \(\lambda(t)=\frac{2t}{1+t^{2}}\) and \(\mu(t)\equiv 1\), we have
\[\int_{t}^{T}\left(1+\left|\int_{t}^{s}\mu(\tau)d\tau\right|\right)e^{-\int_{t}^{s}\Re(\lambda(\tau))d\tau}ds=(1+t^{2})\int_{t}^{T}\frac{1+s-t}{1+s^{2}}ds=(1+t^{2})\left(\log\sqrt{\frac{1+T^{2}}{1+t^{2}}}+(1-t)\left(\tan^{-1}T-\tan^{-1}t\right)\right)\]
and
\[\int_{-T}^{t}\left(1+\left|\int_{s}^{t}\mu(\tau)d\tau\right|\right)e^{\int_{s}^{t}\Re(\lambda(\tau))d\tau}ds=(1+t^{2})\int_{-T}^{t}\frac{1+t-s}{1+s^{2}}ds=(1+t^{2})\left(\log\sqrt{\frac{1+T^{2}}{1+t^{2}}}+(1+t)\left(\tan^{-1}t+\tan^{-1}T\right)\right)\]
for \(T>0\). When \(T\to\infty\), these left-hand integrals do not converge for any \(t\in I\); that is, conditions (4.3) and (4.5) are not satisfied. Therefore, it is concluded that (4.3) and (4.5) are sharp conditions.
**Example 6.6**.: In (5.1), let \(I=\mathbb{R}\), \(\beta:I\to\mathbb{R}\) be continuous, and
\[\alpha(t)=\alpha_{1}(t)=1-\frac{2t}{1+t^{2}}\quad\text{or}\quad\alpha(t)= \alpha_{-1}(t)=-1-\frac{2t}{1+t^{2}}\]
for \(t\in I\). Then by Theorem 5.1, the following (i) and (ii) below hold.
* For \(\alpha=\alpha_{1}\), since \[\kappa_{31}(t):=\int_{t}^{\infty}e^{-\int_{t}^{s}\left(1-\frac{2\tau}{1+t^{2} }\right)d\tau}ds=\frac{t^{2}+2t+3}{t^{2}+1}\] (6.7) exists for all \(t\in I\), and \(\sup_{t\in I}\kappa_{31}(t)=2+\sqrt{2}<\infty\) at \(t=\sqrt{2}-1\), then (5.1) is Ulam stable on \(I\), with an Ulam constant \(K_{31}:=2+\sqrt{2}\). Furthermore, as \[\lim_{t\to\infty}\int_{t_{0}}^{t}\alpha(s)ds=\lim_{t\to\infty}\int_{t_{0}}^{t} \left(1-\frac{2s}{1+s^{2}}\right)ds=\infty,\quad t_{0}\in(-\infty,\infty),\] (6.8) then for any \(\varepsilon>0\) and for any continuously differentiable function \(\boldsymbol{\phi}(t)\) satisfying (3.5), there exists the unique solution of (5.1) denoted by \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to\infty}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{2}\leq K_{31} \varepsilon=\left(2+\sqrt{2}\right)\varepsilon\), where \(X(t)\) and \(X^{-1}(t)\) are given by \[X(t)=\frac{e^{t-t_{0}}\left(1+t_{0}^{2}\right)}{1+t^{2}}\begin{pmatrix} \cos\int_{t_{0}}^{t}\beta(s)ds&\sin\int_{t_{0}}^{t}\beta(s)ds\\ -\sin\int_{t_{0}}^{t}\beta(s)ds&\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix},\] \[X^{-1}(t)=\frac{e^{t_{0}-t}\left(1+t^{2}\right)}{1+t_{0}^{2}} \begin{pmatrix}\cos\int_{t_{0}}^{t}\beta(s)ds&-\sin\int_{t_{0}}^{t}\beta(s)ds \\ \sin\int_{t_{0}}^{t}\beta(s)ds&\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix}\] respectively, and \((\lim_{t\to\infty}X^{-1}(t)\boldsymbol{\phi}(t))\) is a well-defined constant vector.
2. For \(\alpha=\alpha_{-1}\), since \[\kappa_{32}(t):=\int_{-\infty}^{t}e^{\int_{s}^{t}\alpha(\tau)d\tau}ds=\frac{t^{2} -2t+3}{t^{2}+1}\] (6.9) exists for all \(t\in I\), and \(\sup_{t\in I}\kappa_{32}(t)=2+\sqrt{2}<\infty\) at \(t=1-\sqrt{2}\), then (5.1) is Ulam stable on \(I\), with an Ulam constant \(K_{32}:=2+\sqrt{2}\). Furthermore, as \[\lim_{t\to-\infty}\int_{t}^{t_{0}}\alpha(s)ds=-\infty,\quad t_{0}\in(-\infty, \infty),\] (6.10) then for any \(\varepsilon>0\) and for any continuously differentiable function \(\boldsymbol{\phi}(t)\) satisfying (3.5), there exists the unique solution of (5.1) denoted by \[\boldsymbol{x}(t)=X(t)\left(\lim_{t\to-\infty}X^{-1}(t)\boldsymbol{\phi}(t)\right)\] such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{2}\leq K_{32} \varepsilon=\left(2+\sqrt{2}\right)\varepsilon\), where \(X(t)\) and \(X^{-1}(t)\) are given by \[X(t)=\frac{e^{t_{0}-t}\left(1+t_{0}^{2}\right)}{1+t^{2}}\begin{pmatrix} \cos\int_{t_{0}}^{t}\beta(s)ds&\sin\int_{t_{0}}^{t}\beta(s)ds\\ -\sin\int_{t_{0}}^{t}\beta(s)ds&\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix},\] \[X^{-1}(t)=\frac{e^{t-t_{0}}\left(1+t^{2}\right)}{1+t_{0}^{2}} \begin{pmatrix}\cos\int_{t_{0}}^{t}\beta(s)ds&-\sin\int_{t_{0}}^{t}\beta(s)ds \\ \sin\int_{t_{0}}^{t}\beta(s)ds&\cos\int_{t_{0}}^{t}\beta(s)ds\end{pmatrix}\] respectively, and \((\lim_{t\to-\infty}X^{-1}(t)\boldsymbol{\phi}(t))\) is a well-defined constant vector.
**Example 6.7**.: Let \(I=\left(0,\frac{\pi}{2}\right)\). Consider (1.1) with
\[A(t)=\begin{pmatrix}2\cot 2t&i\\ -3i&-2\cot 2t\end{pmatrix}, \tag{6.11}\]
where \(\boldsymbol{f}:I\to\mathbb{C}^{n}\) is an arbitrary continuous vector function on \(I\). By Theorem 2.1, we see that (1.1) with (6.11) is Ulam stable on \(I\), with an Ulam constant \(K\) if and only if (1.2) with (6.11) is Ulam stable on \(I\), and an Ulam constant is the same \(K\). Therefore, we only need to consider the Ulam stability of (1.2) with (6.11). However, note here that \(A(t)\) does not belong to the generalized Jordan normal forms (I)-(III). Let
\[R(t)=\begin{pmatrix}\cos t&i\sin t\\ i\sin t&\cos t\end{pmatrix}.\]
Using \(\boldsymbol{y}=R(t)\boldsymbol{x}\), (1.2) is transformed into (1.3). From
\[J(t)=\left(R^{\prime}(t)+R(t)A(t)\right)R^{-1}(t)=\begin{pmatrix}\csc t\sec t& 0\\ 0&-\csc t\sec t\end{pmatrix}, \tag{6.12}\]
we see that (1.3) with (6.12) is just (3.1) with \(A(t)=J(t)\). Let \(t_{0}\in I\). Then
\[X(t)=\begin{pmatrix}\tan t\cot t_{0}&0\\ 0&\cot t\tan t_{0}\end{pmatrix}\quad\text{and}\quad X^{-1}(t)=\begin{pmatrix} \cot t\tan t_{0}&0\\ 0&\tan t\cot t_{0}\end{pmatrix},\]
and all conditions of Theorem 3.1 (iii) are satisfied, namely:
* Since \[\kappa_{13}(t) :=\max\left\{\int_{t}^{\frac{\pi}{2}}e^{-\int_{t}^{s}\csc\tau \sec\tau d\tau}ds,\int_{0}^{t}e^{\int_{s}^{t}(-\csc\tau\sec\tau)d\tau}ds\right\}\] \[=\max\{-\tan t\ln(\sin t),\;-\cot t\ln(\cos t)\}\] \[=\begin{cases}-\tan t\ln(\sin t)&:t\in\left(0,\frac{\pi}{4} \right]\\ -\cot t\ln(\cos t)&:t\in\left[\frac{\pi}{4},\frac{\pi}{2}\right)\end{cases}\] exists for all \(t\in I\), and \(\sup_{t\in I}\kappa_{13}(t)=0.4023711<\infty\), then (3.1) with \(A(t)=J(t)\) is Ulam stable on \(I\), with an Ulam constant \(K_{13}:=0.4023711\). Furthermore, since \[\lim_{t\to\frac{\pi}{2}-0}\int_{t_{0}}^{t}\csc s\sec sds =\lim_{t\to\frac{\pi}{2}-0}\ln(\tan t\cot t_{0})=\infty,\] \[\lim_{t\to+0}\int_{t_{0}}^{t}(-\csc s\sec s)\,ds =\lim_{t\to+0}\ln(\cot t\tan t_{0})=\infty,\] then for any \(\varepsilon>0\) and for any continuously differentiable function \(\boldsymbol{\phi}(t)=(\phi_{1}(t),\phi_{2}(t))^{T}\) satisfying (3.5), there exists the unique solution of (3.1) with \(A(t)=J(t)\) denoted by \[\boldsymbol{x}(t)=X(t)\begin{pmatrix}\lim_{t\to\frac{\pi}{2}-0}\phi_{1}(t)\cot t \tan t_{0}\\ \lim_{t\to+0}\phi_{2}(t)\tan t\cot t_{0}\end{pmatrix}\] such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{13} \varepsilon=0.4023711\varepsilon\), where \(\begin{pmatrix}\lim_{t\to\frac{\pi}{2}-0}\phi_{1}(t)\cot t\tan t_{0}\\ \lim_{t\to+0}\phi_{2}(t)\tan t\cot t_{0}\end{pmatrix}\) is a well-defined constant vector.
By Theorem 3.3 (iii), \(K_{13}=0.4023711\) is the best Ulam constant for (3.1) with \(A(t)=J(t)\) on \(I\). Moreover, since
\[\sup_{t\in I}\|R(t)\|_{\infty}=\sup_{t\in I}\|R^{-1}(t)\|_{\infty}=\sqrt{2} \sup_{t\in I}\sin\left(t+\frac{\pi}{4}\right)=\sqrt{2},\]
\(\|R(t)\|_{\infty}\) and \(\|R^{-1}(t)\|_{\infty}\) are bounded on \(I\). From Theorem 2.3, (1.2) with (6.11) is Ulam stable on \(I\), and an Ulam constant is \(2K_{13}=0.8047422\). In addition, by Theorem 2.1, (1.1) with (6.11) is also Ulam stable on \(I\), and an Ulam constant is \(2K_{13}=0.8047422\).
Next, we propose that Ulam stability gives approximations of node, saddle, and focus.
**Example 6.8**.: In (3.19), let \(I=(-\infty,\infty)\), and let
\[A=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. \tag{6.13}\]
Then, by Corollary 3.4, (3.19) with (6.13) is Ulam stable on \(I\), and the best Ulam constant is \(K_{c1}=1\). Moreover, we consider (3.10) with (6.13) and
\[\boldsymbol{f}(t)=\begin{pmatrix}0.2\left(1-2\max\{\cos t,0\}\right)\\ 0\end{pmatrix}. \tag{6.14}\]
Let \(\varepsilon=0.2\). Clearly the solution \(\boldsymbol{\phi}(t)\) of (3.10) with (6.13) and (6.14) satisfies (3.5). Since (3.8), (3.9) and \(\sup_{t\in I}\kappa_{13}(t)=K_{13}=K_{c1}=1<\infty\) hold, by Theorem 3.1 (iii), we see that there exists the unique solution \(\boldsymbol{x}(t)\) of (3.19) with (6.13) such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{13} \varepsilon=0.2\). This means that there is only one orbit of (3.19) with (6.13) in the \(\varepsilon\)-neighborhood of any orbit of (3.10) with (6.13) and (6.14). Since the geometric classification of the origin of (3.19) with (6.13) is a saddle point (see [20, 21, 22]), a phase portrait of (3.10) with (6.13) and (6.14) can be proposed as an approximation of the saddle point (see Figure 1).
**Example 6.9**.: In (3.19), let \(I=(-\infty,\infty)\), and let
\[A=\begin{pmatrix}-1&1\\ 0&-1\end{pmatrix}. \tag{6.15}\]
Then, by Corollary 4.4, (4.9) with (6.15) is Ulam stable on \(I\), and the best Ulam constant is \(K_{c2}=1\). Moreover, we consider (3.10) with (6.14) and (6.15). Let \(\varepsilon=0.2\). Clearly the solution \(\boldsymbol{\phi}(t)\) of (3.10)
Figure 1. Approximation of saddle point.
with (6.14) and (6.15) satisfies (3.5). Since (4.5), (4.6) and \(\sup_{t\in I}\kappa_{22}(t)=K_{22}=K_{c2}=1<\infty\) hold, by Theorem 4.1 (ii), we see that there exists the unique solution \(\boldsymbol{x}(t)\) of (4.9) with (6.15) such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{\infty}\leq K_{22}\varepsilon=0.2\). This means that there is only one orbit of (4.9) with (6.15) in the \(\varepsilon\)-neighborhood of any orbit of (3.10) with (6.14) and (6.15). Since the geometric classification of the origin of (4.9) with (6.15) is a stable node (see [20, 21, 22]), a phase portrait of (3.10) with (6.14) and (6.15) can be proposed as an approximation of the stable node (see Figure 2).
**Example 6.10**.: In (3.19), let \(I=(-\infty,\infty)\), and let
\[A=\begin{pmatrix}-1&2\\ -2&-1\end{pmatrix}. \tag{6.16}\]
Then, by Corollary 5.4, (5.10) with (6.16) is Ulam stable on \(I\), and the best Ulam constant is \(K_{c3}=1\). Moreover, we consider (3.10) with (6.14) and (6.16). Let \(\varepsilon=0.2\). Clearly the solution \(\boldsymbol{\phi}(t)\) of (3.10) with (6.14) and (6.16) satisfies (3.5). Since (5.5), (5.6) and \(\sup_{t\in I}\kappa_{32}(t)=K_{32}=K_{c3}=1<\infty\) hold, by Theorem 5.1 (ii), we see that there exists the unique solution \(\boldsymbol{x}(t)\) of (5.10) with (6.16) such that \(\sup_{t\in I}\|\boldsymbol{\phi}(t)-\boldsymbol{x}(t)\|_{2}\leq K_{32}\varepsilon=0.2\). This means that there is only one orbit of (5.10) with (6.16) in the \(\varepsilon\)-neighborhood of any orbit of (3.10) with (6.14) and (6.16). Since the geometric classification of the origin of (5.10) with (6.16) is a stable focus (see [20, 21, 22]), a phase portrait of (3.10) with (6.14) and (6.16) can be proposed as an approximation of the stable focus (see Figure 3).
Figure 2. Approximation of stable node.
## 7. Conclusions
In this study, Ulam stability theorems for nonautonomous linear differential systems with generalized Jordan normal forms were established. Detailed estimates for the Ulam constants are obtained, and then the best Ulam constants are derived for all generalized Jordan normal forms. The obtained results do not require the assumption that the linear system admits an exponential dichotomy. As a result, we can get the true solution that is close to the approximate one that blows up. Analysis of Ulam stability for approximate solutions that blow up has not been done so far, so it can be said to be a new approach. In addition, the best Ulam constants for nonautonomous systems other than periodic systems have also not been presented so far. That is, this is the first study to derive the best Ulam constants for nonautonomous systems other than periodic systems. Also note that new results are obtained even in the case of constant coefficients. An example of instability is presented, as well as examples for each generalized Jordan normal form. Additionally, approximations of node, saddle, and focus are proposed.
## Acknowledgments
M. O. was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (grant number JP20K03668) and the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University.
Figure 3. Approximation of stable focus. |
# Threshold studies for a hot beam superradiant laser including an atomic guiding potential

Martin Fasser, Christoph Hotter, David Plankensteiner, Helmut Ritsch

arXiv:2308.05594v1, 2023-08-10, http://arxiv.org/abs/2308.05594v1
###### Abstract
Background:
Motivated by the outstanding short time stability and reliable continuous operation properties of microwave clock masers, intense worldwide efforts target the first implementation of their optical analogues based on narrow optical clock transitions and using laser cooled dilute atomic gases. While as a central line of research large efforts are devoted to create a suitably dense continuous ultracold and optically inverted atom beam source, recent theoretical predictions hint at an alternative implementation using a filtered thermal beam at much higher density. Corresponding numerical studies give encouraging results but the required very high densities are sensitive to beam collimation errors and inhomogeneous shifts. Here we present extensive numerical studies of threshold conditions and the predicted output power of such a superradiant laser involving realistic particle numbers and velocities along the cavity axis. Detailed studies target the threshold scaling as a function of temperature as well as the influence of eliminating the hottest part of the atomic distribution via velocity filtering and the benefits of additional atomic beam guiding. Using a cumulant expansion approach allows us to quantify the significance of atom-atom and atom-field correlations in such configurations.
Methods:
To enable studies with realistic high particle number we implement a simulation framework based on a first order as well as a second-order cumulant expansion of the coupled atom-field dynamics with the atomic center of mass motion represented by classical trajectories. We assume a velocity filtered initial thermal distribution and guiding forces from prescribed optical potentials along the cavity axis, while the transverse atomic motion is not explicitly included as dynamical variable. Effectively, atomic motion along the cavity axis induces a time varying atom-cavity coupling determined by the cavity mode structure and the optical guiding potentials. Our model includes a continuous effective incoherent pump mechanism. Using such simulations we study how the intra-cavity photon number depends on the atom number, initial velocity distribution and atomic guiding. Comparing the results in different expansion orders including customized mixed order models allows to extract the relative importance of atom-atom and atom-field correlations in different operating regimes.
Results:
We predict necessary conditions to achieve a certain threshold photon number depending on the atomic temperature and density. In particular, we show that the temperature threshold can be significantly increased by using more atoms. Interestingly, a velocity filter removing very fast atoms has only almost negligible influence despite their phase perturbing properties. On the positive side an additional conservative optical guiding towards cavity mode antinodes leads to significantly lower threshold and higher average photon number. Interestingly we see that higher order atom-field and direct atom-atom quantum correlations play only a minor role in the laser dynamics, which is a bit surprising in the superradiant regime.
Conclusions:
A hot atomic beam laser operated in the superradiant regime can achieve useful power levels, if the inhomogeneous atomic broadening from the velocity distribution along the cavity axis can be compensated by using sufficiently more atoms. Velocity filtering to remove very hot atoms negatively influences the average photon number but can potentially reduce power fluctuations as well as a reduced linewidth and cavity pulling effects. Adding a confining optical lattice potential, however, creates significant modifications of the dynamics, especially for higher temperatures where our model simulations predict significantly higher laser output power.
## Introduction
Ever since the demonstration of the superior operation of an optical atomic clock [1, 2, 3] with respect to microwave implementations, people started wondering about a possible active clock implementation in the form of a superradiant laser [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. In analogy to the very successful hydrogen micro-maser technology [21] such an active device in principle promises at least similar stability and accuracy at a reduced technical cost and fragility. Here an implementation based on a dilute gas of cold clock atoms within a high-Q optical cavity was predicted to be a very promising path to go [5]. As a particular example, model setups based on a beam of inverted ultracold atoms traversing an optical resonator [22, 23, 24] were predicted to yield unprecedented stability and accuracy. However, as the operating optical wavelength is several orders of magnitude shorter than the typical maser lines, the resulting technological challenges are similarly larger. Already the construction of a suitable beam source is virtually equivalent to operating a continuous atom laser [25, 26] and thus poses a way more challenging task than its microwave analog. As the cavity mode volume is also way smaller, the atoms have to be tightly confined before sending them through the resonator. Yet another problem is creating atomic inversion with minimal perturbation [27]. Despite intense efforts which recently led to amazing technical advancements on guided atom beams [28, 29, 30, 31] and pulsed superradiant lasing [32, 33, 34, 35, 36, 37, 38, 39], a fully continuously operating device [9, 19] has not been realized.
In recent calculations it was pointed out that a high precision could already be achieved using a much hotter atomic ensemble with a sufficiently high intra-cavity density [40, 41, 42, 44]. Here the sheer number of contributing atoms allows crossing the laser threshold, inducing an efficient collective synchronization process towards a very narrow effective linewidth. This approach at least in principle requires significantly fewer components and would allow for a rather compact setup involving only a beam oven, an optical resonator and a pump laser to create inversion. The precise conditions for a concrete setup, however, cannot be reliably analytically predicted and thus require large scale numerical simulations [24]. Here we set up a framework for such simulations which can deal with realistically large atom numbers and velocities. This allows us to identify the minimal operating conditions and study the influence of extra elements such as velocity filters or guiding potentials.
## Model
We consider an ensemble of \(N\) two-level atoms with a transition frequency of \(\omega_{\mathrm{a}}\) coupled to a single mode cavity with resonance frequency \(\omega_{\mathrm{c}}\), see Figure 1. The coherent dynamics of this system is governed by the Tavis Cummings Hamiltonian, which in the rotating frame of the atomic transition frequency \(\omega_{\mathrm{a}}\) reads
\[H=\Delta a^{\dagger}a+\sum_{j=1}^{N}g(x_{j})(a^{\dagger}\sigma_{j}^{\mathrm{ge}}+\sigma_{j}^{\mathrm{eg}}a), \tag{1}\]
with the atomic transition operator of the \(j\)-th atom \(\sigma_{j}^{\mathrm{ge}}=|g\rangle_{j}\langle e|\), the cavity photon creation (annihilation) operator \(a^{\dagger}\) (\(a\)), and the cavity-atom detuning \(\Delta=\omega_{\mathrm{c}}-\omega_{\mathrm{a}}\). Atoms at a position \(x\) along the cavity axis couple to the cavity according to the mode function \(g\cdot f(x)=g\cdot\cos(2\pi x/\lambda)\), where \(g\) is the maximal cavity coupling strength and \(\lambda\) the
Figure 1: _Schematic of the system._ Atoms are incoherently pumped with rate \(\nu\) and couple to an optical cavity. The clusters move according to a Gaussian velocity distribution with standard deviation \(\sigma_{\nu}\), which relates to a certain temperature \(T\). This motion is included along the cavity axis with a position dependent coupling strength \(g\). The atoms decay with the rate \(\gamma\) and the photons leak out of the cavity with a rate \(\kappa\).
wavelength of the cavity photons. The motion of the atoms is treated classically by an initial atomic velocity drawn from a Gaussian distribution with width \(\sigma_{v}\) corresponding to a temperature \(T\). In the beginning we treat freely moving atoms, later on we investigate the influence of an external optical potential. In both cases the atomic motion is fully determined by the initial velocity and position.
In addition to the coherent dynamics we need to account for the dissipative processes, which are described by the Liouvillian term \(\mathcal{L}[\rho]\) in the master equation for the system density matrix \(\rho\)
\[\dot{\rho}=i[\rho,H]+\mathcal{L}[\rho]. \tag{2}\]
In the Born-Markov approximation the Liouvillian super-operator for a jump operator \(J_{i}\) with a corresponding rate \(\Gamma_{i}\) can be written in Lindblad form as
\[\mathcal{L}[\rho]=\sum_{i}\frac{\Gamma_{i}}{2}\left(2J_{i}\rho J_{i}^{\dagger }-J_{i}^{\dagger}J_{i}\rho-\rho J_{i}^{\dagger}J_{i}\right). \tag{3}\]
The considered dissipative processes are listed in table 1, including cavity photon losses, individual spontaneous decay as well as incoherent pumping of the two-level atoms.
As we want to simulate systems with up to one million atoms, we cannot solve the full master equation due to the exponential scaling of the size of the Hilbert space with the particle number. To this end we employ a mean field approach for operator averages. For an operator \(\sigma\) without explicit time dependence we can calculate the time evolution of its expectation value as
\[\frac{\mathrm{d}}{\mathrm{d}t}\langle\sigma\rangle=\mathrm{Tr}\{\sigma\dot{\rho}\}=i\langle[H,\sigma]\rangle+\langle\mathcal{L}[\sigma]\rangle \tag{4}\]
with
\[\mathcal{L}[\sigma]=\sum_{i}\frac{\Gamma_{i}}{2}\left(2J_{i}^{\dagger}\sigma J _{i}-J_{i}^{\dagger}J_{i}\sigma-\sigma J_{i}^{\dagger}J_{i}\right), \tag{5}\]
where we inserted the master equation (2) into Eq. (4) and used the cyclic property of the trace. To truncate the set of equations we approximate higher order quantum correlations with the cumulant expansion method [43, 44].
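To indicate what this truncation amounts to (these are the generic cumulant-expansion relations, not equations specific to our system): in first order all operator products are factorized, while in second order averages of two operators are kept and only averages of three operators are expanded,

\[\langle AB\rangle\approx\langle A\rangle\langle B\rangle\quad\text{(first order)},\qquad\langle ABC\rangle\approx\langle AB\rangle\langle C\rangle+\langle AC\rangle\langle B\rangle+\langle A\rangle\langle BC\rangle-2\langle A\rangle\langle B\rangle\langle C\rangle\quad\text{(second order)}.\]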
The first order approximation (mean field) is sufficient to accurately describe our system (see Appendix); the corresponding set of equations is
\[\dot{\langle a\rangle} =-\Big(\frac{\kappa}{2}+i\Delta\Big)\langle a\rangle-ig\sum_{m}f\left(x_{m}\right)\langle\sigma_{m}^{\mathrm{ge}}\rangle \tag{6a}\] \[\dot{\langle\sigma_{m}^{\mathrm{ge}}\rangle} =-\frac{\gamma+\nu}{2}\langle\sigma_{m}^{\mathrm{ge}}\rangle+2igf\left(x_{m}\right)\Big(\langle\sigma_{m}^{\mathrm{ee}}\rangle-\frac{1}{2}\Big)\langle a\rangle \tag{6b}\] \[\dot{\langle\sigma_{m}^{\mathrm{ee}}\rangle} =-(\gamma+\nu)\langle\sigma_{m}^{\mathrm{ee}}\rangle+igf\left(x_{m}\right)\left(\langle a\rangle^{*}\langle\sigma_{m}^{\mathrm{ge}}\rangle-\langle\sigma_{m}^{\mathrm{ge}}\rangle^{*}\langle a\rangle\right)+\nu, \tag{6c}\]
where the last two equations exist for each atom from \(m=1\) to \(N\). The number of equations in this approximation scales linearly with the atom number \(N\). Still, to simulate a system with several hundred thousand atoms we need further approximations. To this end, we assume that groups of \(K\) atoms behave in a sufficiently identical way, such that we can describe them with the same equations. This means we group atoms into clusters. For the set of equations (6), this means that the index \(m\) now runs only from \(m=1\) to the number of clusters \(N_{\mathrm{Cl}}\), but the source term \(-ig\sum_{m}f\left(x_{m}\right)\langle\sigma_{m}^{\mathrm{ge}}\rangle\) in Eq. (6a) is enhanced by a factor of the cluster size \(K\). In the following, we use a cluster number of \(N_{\mathrm{Cl}}=400\), which is a good compromise between simulating the real setup and still having a feasible computation time (see Appendix).
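To make the structure of such a simulation concrete, the following is a minimal illustrative sketch of how Eqs. (6) can be integrated together with the clustered classical trajectories. This is not the code used for the results below: the parameters follow the values quoted for Figure 2, while the velocity spread, random seed, integration time and the small seed field (needed because \(\langle a\rangle=0\) is a stationary point of the mean-field equations) are arbitrary choices.

```python
# Minimal sketch (not the authors' code) of integrating the clustered mean-field
# equations (6) with classical free-flight trajectories x_m(t) = x_m(0) + v_m*t.
import numpy as np
from scipy.integrate import solve_ivp

kappa, nu, gamma, Delta = 1.0, 0.5, 0.0, 0.0   # rates in units of kappa, as for Figure 2
g, lam = 0.00136, 1.0                          # coupling and cavity wavelength (length unit)
N_atoms, N_cl = 1_000_000, 400
K = N_atoms / N_cl                             # atoms represented per cluster
sigma_v = 0.1 * lam * kappa                    # width of the Gaussian velocity distribution

rng = np.random.default_rng(1)
x0 = rng.uniform(0.0, lam, N_cl)               # initial positions over one wavelength
v = rng.normal(0.0, sigma_v, N_cl)             # initial velocities (free motion, constant)

def f_mode(x):
    """Cavity mode function f(x) = cos(2*pi*x/lambda)."""
    return np.cos(2.0 * np.pi * x / lam)

def rhs(t, y):
    # unpack <a>, <sigma_m^ge> (complex) and <sigma_m^ee> (real) from the real state vector
    a = y[0] + 1j * y[1]
    s_ge = y[2:2 + N_cl] + 1j * y[2 + N_cl:2 + 2 * N_cl]
    s_ee = y[2 + 2 * N_cl:]
    fx = f_mode(x0 + v * t)                    # time-dependent coupling along each trajectory
    da = -(kappa / 2 + 1j * Delta) * a - 1j * g * K * np.sum(fx * s_ge)   # Eq. (6a), source x K
    dge = -(gamma + nu) / 2 * s_ge + 2j * g * fx * (s_ee - 0.5) * a       # Eq. (6b)
    dee = -(gamma + nu) * s_ee + (1j * g * fx * (np.conj(a) * s_ge - np.conj(s_ge) * a)).real + nu  # Eq. (6c)
    return np.concatenate(([da.real, da.imag], dge.real, dge.imag, dee))

y0 = np.zeros(2 + 3 * N_cl)
y0[0] = 1e-3                                   # small seed field to trigger the mean-field dynamics
sol = solve_ivp(rhs, (0.0, 100.0), y0, max_step=0.05)
print("final |<a>|^2 (mean-field photon-number proxy):", sol.y[0, -1]**2 + sol.y[1, -1]**2)
```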
### Photon number dependence on temperature
First, we investigate the influence of the atom number \(N\) and the temperature \(T\) on the steady state photon number. The results in mean field (for a comparison to higher orders see Appendix) are depicted in Figure 2.
Unsurprisingly, we always end up with the highest number of photons for the lowest temperature. The drop-off of the photon number for higher temperature can be explained by the Doppler effect, as higher velocities of the individual atoms lead to
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(i\) & \(J_{i}\) & \(\Gamma_{i}\) & Description \\ \hline
1 & \(a\) & \(\kappa\) & cavity photon losses \\
2 & \(\sigma_{j}^{\mathrm{ge}}\) & \(\gamma\) & decay from \(|e\rangle_{j}\) to \(|g\rangle_{j}\) \\
3 & \(\sigma_{j}^{\mathrm{eg}}\) & \(\nu\) & excitation of the \(j\)-th atom \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dissipative Processes. The system features a damped cavity mode, atomic decay and an incoherent pump.
them being shifted out of resonance. Comparing the curves to each other, we can investigate the significant influence of the atom number on the photon number: higher atom numbers yield higher photon numbers. While this is not surprising, the crucial aspect of this result becomes apparent when we normalize the curves to their photon number value at zero temperature (Figure 2(b)): the fact that the curves do not coincide hints at a nonlinear relation between atom and photon number. However, for low atom numbers we see the curves dropping to virtually 0 photons as early as \(\sigma_{\mathrm{v}}/(\lambda\kappa)\approx 0.05\), whereas for higher atom numbers such a temperature has hardly any effect on the photon number. For example, for \(10^{6}\) atoms there is still approximately 95% of the zero-temperature photon number in the cavity. The red dots indicate the points at which the photon number has dropped to half of its maximal value at zero temperature for each individual curve. These values are depicted in Figure 2(c); they initially follow a quadratic relation, while for higher atom numbers the relation seems linear. From that graph we can clearly see that using higher atom numbers allows a higher temperature while still having a decent amount of photons inside the cavity. This could be beneficial for experimental setups, where it is generally difficult to cool the atoms down to low temperature. However, it remains to be seen what this means for the linewidth of the emitted photons, as higher temperature usually leads to Doppler broadening of the linewidth.
### Adding a velocity filter
An interesting idea is to consider a cutoff in the velocity distribution for the atoms. Experimentally, this may be achieved by using velocity filter for the atomic beam to get rid of the atoms with very high velocity, creating a smaller but colder ensemble. This could be more advantageous than trying to narrow the velocity distribution by cooling, as cooling the atoms proposes an experimental challenge. To simulate such a filter, we start with the same number of atoms as before, but exclude all atoms from the dynamics with velocities higher than the cutoff. We depict the results in Figure 3 for a cutoff at \(\sigma_{\mathrm{v}}\) (\(\approx 68\%\) slowest atoms) and at \(\sigma_{\mathrm{v}}/2\) (\(\approx 38\%\) slowest atoms). To compare, we also depict the previous results without cutoff. In Figure 3(a) the steady state photon number is shown for the different cutoffs. By cutting off the high velocity part of the atomic distribution, we lose only fast atoms, but it always leads to a decrease in photon number. A velocity filter therefore only reduces the photon number. It might still be useful with respect to the linewidth, which is not calculated at this point.
However, the photon number seems to be more stable with increasing temperature, which can be seen in Figure 3(b), where we normalize the graphs to their highest photon number (at zero temperature). While the photon number significantly drops already for low temperature (e.g. \(\sigma_{\mathrm{v}}\approx 0.1\)) for the uncut case, it stays relatively stable for the other cases, where we exclude the fast atoms.
In summary, including a velocity filter will negatively affect the photon number, but allow it to stay at a relatively stable value for low temperatures. However, it remains to be seen what effect such a velocity filter has on the linewidth of the atoms. In general, one would expect a velocity filter to be beneficial for the linewidth reduction, as the excluded, fast atoms increase the linewidth by means of the Doppler effect.
Figure 2: _Photon number dependence on temperature and atom number._ (a) steady state photon number (mean field, see Appendix for comparison to higher orders) as a function of the standard deviation of the velocity distribution \(\sigma_{\mathrm{v}}\) for different atom numbers. (b) Photon number normalized to \(\sigma_{\mathrm{v}}=0\) (maximum). The red dots indicate the value at which the curves have dropped to half of their initial value. After normalizing, the curves do not longer align, hinting to a nonlinear relation between atom and photon number. (c) Dependence of the threshold value of the velocity spread \(\sigma_{\mathrm{v}}^{2}\sim T\) on the number of atoms. Larger atom numbers allow for larger possible values of \(\sigma_{\mathrm{v}}\), while maintaining a lot of photons inside the cavity. The parameters are \(\nu=0.5\), \(\Delta=0\), \(g=0.00136\) and \(\kappa=1\). We neglect \(\gamma\) since it is much smaller than \(\nu\), see Eq. (6).
### Dynamics including an optical lattice potential
So far, we assumed that the atoms move freely without any potential. After some time at non-zero temperature, the atoms will be randomly distributed over the mode function. To compare that situation to the zero-temperature case with non-moving atoms, we always place the atoms equidistantly distributed over the mode function. Therefore, the differences in photon number stem from the movement of the atoms itself, not from the somewhat artificial location of atoms on the mode function. The coupling between atoms and cavity field can be controlled (atleast for low temperature) by including an optical lattice potential, as the atoms are trapped at the potential minimums. Aligning these potential minimums with the antinodes of the mode function leads to enhanced coupling compared to the random sampling of the mode function in the case where atoms move freely. The potential \(V(x)\) has the form \(V(x)=\Delta E\cdot\cos^{2}(\frac{\alpha x}{\lambda_{\mathrm{int}}}x)\) with spatial periodicity \(\lambda_{\mathrm{int}}\) and energy barrier \(\Delta E\). By using the Hamiltonian formalism one ends up with two linear differential equations of first order for each cluster. These additional equations can simply be added to the equations in mean field (6) for simulation purposes. To keep the computation time low, we now simulated only \(N_{\mathrm{CI}}=100\) clusters instead of 400 and only averaged over 10 trajectories instead of 50.
For the optimal case, we assume an optical potential with exactly half the wavelength of the mode function, that way the minima of the optical potential are situated at the maximum values of \(|f(x_{m})|\). The results of such a simulation are depicted in Figure 4(a). To compare to the previous results, we include the graph that we already calculated, which is the result for an energy barrier of \(\Delta E=0\). The energy is expressed in terms of the recoil energy \(E_{\mathrm{rec}}=\frac{\hbar^{2}k^{2}}{2m}\), where \(k=2\pi/\lambda\), \(\alpha_{\mathrm{rec}}\) is the associated recoil velocity and \(m\) is the mass of the atoms. To get exemplary values for these energy scales we use the \({}^{40}\)Ca intercombination line between \({}^{1}\mathrm{S}_{0}\) and \({}^{3}\mathrm{P}_{1}\) with \(\lambda=657\,\mathrm{nm}\) and \(\Gamma=2\pi\cdot 375\,\mathrm{Hz}\), the cavity linewidth is \(\kappa=2\pi\cdot 120\,\mathrm{kHz}\). To compare the two energy scales of the energy barrier and the velocity of the atoms, we color the data points according to the fraction of trapped atoms at each particular energy barrier and velocity. For high energy barriers and low velocities we have a lot of trapped atoms, these data points are colored blue, whereas for low energy barriers and high velocities almost no atoms are trapped, these points are colored yellow. As we can see from Figure 4, the results change significantly by including a barrier. Let us first look at the case of low velocities/low temperatures: When the barrier is sufficiently high, then almost all the atoms are trapped at the antinodes of the mode function, leading to high coupling between atoms and cavities and eventually to higher photon number. For higher velocities/temperature, the number of photons stays significantly above the comparison graph for no energy barrier. On the one hand, this is because there are still atoms, that are trapped near the antinode of the mode function. On the other hand, there is a different effect at play, of which we can get a better insight when choosing the wavelength of the lattice slightly different from the wavelength of the mode function. Such a system may be of interest, as there usually is a mismatch between the effective optical lattice periodicity and the optical wavelength.
The results can be seen in Figure 4(b), where we chose \(\lambda_{\mathrm{int}}=0.99\,\lambda\). Note, how the results converge for very low temperature, regardless of the value of the energy barrier \(\Delta E\). This means, that the coupling is the same as in the barrierless case, where we distributed the atoms equidistantly over the mode function. So by choosing the wavelength of the optical lattice only slightly different to the wavelength of the mode function, we once again sample an average over the mode function, as long as the atoms sit on sufficiently many sites. Therefore the whole advantage of keeping atoms near the antinode of the mode function disappears and the coupling between the atoms and the cavity is the same compared to the coupling in the barrierless case. However, increasing the temperature, we still see a significant difference: Even
Figure 3: _Photon number for different velocity cutoffs._ (a) Mean steady state photon number. (b) Normalized to maximum photon number. Cutting off the high velocity part of the distribution always yields a smaller steady state photon number, but it remains at a high level for increasing \(\sigma_{\mathrm{v}}\). We only depict the results in mean field, as the results in mixed order almost perfectly align. The parameters are the same as in Figure 2.
though the average coupling is the same, the photon number is much higher. The atoms are decelerated, whenever they go over a potential barrier, yielding a reduced average velocity, which in turn decreases the Doppler effect and increases the photon number.
In summary, for low velocities/temperatures, the change in photon number depends mostly on the change of coupling of the atoms, which again depends on the location of the minima of the optical potential compared to the antinodes of the mode function. For higher velocities/temperatures, the other effect of having a lower average velocity due to the barriers comes also into play. The lower average velocity leads to a decreased Doppler effect and therefore a significant increase of photon number.
## Conclusions
We studied incoherently pumped two-level atoms inside a single mode cavity. In particular, we investigated the influence of temperature on the photon number. For high temperatures, where the atoms are fast, the photon number is suppressed because of the Doppler effect. However, we saw that increasing the atom number, to obtain sufficient collective atom-cavity coupling, can compensate for this suppression. Including a velocity filter leads to a smaller number of atoms participating in the dynamics and therefore, as the number of atoms is a significant parameter, to a lower photon number. We plan to calculate the spectrum to see if excluding the fast atoms also reduces the linewidth.
With an optical lattice potential, the coupling between the atoms and the cavity field can be controlled. For low temperatures (compared to the energy barrier), most atoms are trapped and the atom-cavity coupling (and in consequence the photon number) can be engineered. However, for high temperatures, even though most of the atoms are not trapped, the photon number still increases significantly compared to the situation without a potential. In some cases where virtually no photons are generated anymore for a certain \(\sigma_{v}\) without the potential, including the optical potential still yields a macroscopic photon occupation number even if \(\sigma_{v}\) is 4 times as high. Therefore, including an optical potential becomes very beneficial at higher temperatures.
Figure 4: _Quasi-steady state photon number including an optical lattice potential with wavelength \(\lambda_{\text{int}}=0.5\,\lambda\) (a) and \(\lambda_{\text{int}}=0.99\,\lambda\) (b) with varying energy barriers._ The color of the data points indicates the fraction of trapped atoms: at blue most of the atoms are trapped, while at yellow most of the atoms can escape their trap. (a) \(\lambda_{\text{int}}=0.5\,\lambda\): Due to the choice of \(\lambda_{\text{int}}\), for low temperatures most of the atoms are trapped at the antinodes of the mode function, leading to a higher atom-cavity coupling and higher photon number. For higher temperatures, most of the atoms escape their trap, but the photon number stays significantly above the barrierless case. This is due to the energy barriers, which slow the atoms down, yielding a lower Doppler effect and therefore a higher photon number. (b) \(\lambda_{\text{int}}=0.99\,\lambda\): For low temperatures the results coincide, regardless of the chosen energy barrier. This means that the atom-cavity coupling is the same as in the barrierless case, where we distributed the atoms equidistantly over the mode function. For higher temperatures, most of the atoms escape their trap, but the photon number behaves similarly to the case with \(\lambda_{\text{int}}=0.5\,\lambda\). Once again, the atoms get slowed by the barrier, eventually leading to a significantly higher photon number compared to the barrierless case. The parameters are the same as in Figure 2.
## Appendix. Equations in higher orders
As mentioned in the main text, for our parameter regime the physics is accurately described by mean field. To verify this, we derive the equations for higher orders in the cumulant expansion and compare the numerical results with the mean field. For second order, one derives equations for two-operator averages. These will depend on other averages of two-operator and three-operator products. In order to close the set of differential equations one needs to derive equations for the two-operator averages and approximate the three-operator averages according to [43]
\[\langle abc\rangle\approx\langle ab\rangle\langle c\rangle+\langle ac\rangle \langle b\rangle+\langle cb\rangle\langle a\rangle-2\langle a\rangle\langle b \rangle\langle c\rangle. \tag{7}\]
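The truncation in Eq. (7) can be checked numerically. The short sketch below (ours, not part of the paper's code) applies it to jointly Gaussian random variables, for which the third cumulant vanishes and the factorization is exact, so the Monte Carlo estimate of \(\langle abc\rangle\) and the factorized expression agree closely.

```python
import numpy as np

def third_order_cumulant_approx(ab, ac, cb, a, b, c):
    """<abc> ~ <ab><c> + <ac><b> + <cb><a> - 2<a><b><c>  (Eq. 7)."""
    return ab * c + ac * b + cb * a - 2.0 * a * b * c

# For jointly Gaussian variables all cumulants above second order vanish,
# so the factorization is exact; compare against a Monte Carlo estimate.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(
    mean=[1.0, -0.5, 2.0],
    cov=[[1.0, 0.3, 0.1], [0.3, 1.0, 0.2], [0.1, 0.2, 1.0]],
    size=200_000,
)
a, b, c = samples.T
approx = third_order_cumulant_approx(
    np.mean(a * b), np.mean(a * c), np.mean(c * b),
    np.mean(a), np.mean(b), np.mean(c),
)
print(approx, np.mean(a * b * c))  # the two numbers agree closely
```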
Using the phase invariance of the system [44, 5] one ends up with the set of equations in second order of the cumulant expansion:
\[\langle a^{\dagger}\dot{a}\rangle= -\Bigl(\kappa-2i\Delta\Bigr)\langle a^{\dagger}a\rangle+igK\sum_{m}\!f(x_{m})\bigl(\langle a\sigma^{\text{eg}}_{m}\rangle-\langle a^{\dagger}\sigma^{\text{ge}}_{m}\rangle\bigr) \tag{8a}\]
\[\langle a\dot{\sigma}^{\text{eg}}_{m}\rangle= -\Bigl(\frac{\gamma+\kappa+\nu}{2}-i\Delta\Bigr)\langle a\sigma^{\text{eg}}_{m}\rangle+igf(x_{m})\bigl(\langle\sigma^{\text{ee}}_{m}\rangle-\langle a^{\dagger}a\rangle\bigr)+ig\sum_{j=1}^{N}\!f(x_{j})\langle\sigma^{\text{eg}}_{j}\sigma^{\text{ge}}_{m}\rangle \tag{8b}\]
\[\langle\dot{\sigma}^{\text{ee}}_{m}\rangle= igf(x_{m})\bigl(\langle a^{\dagger}\sigma^{\text{ge}}_{m}\rangle-\langle a\sigma^{\text{eg}}_{m}\rangle\bigr)-(\gamma+\nu)\langle\sigma^{\text{ee}}_{m}\rangle+\nu \tag{8c}\]
\[\langle\sigma^{\text{ge}}_{k}\dot{\sigma}^{\text{eg}}_{l}\rangle= -(\nu+\gamma)\langle\sigma^{\text{ge}}_{k}\sigma^{\text{eg}}_{l}\rangle-igf(x_{k})\langle a\sigma^{\text{eg}}_{l}\rangle+igf(x_{l})\langle a\sigma^{\text{eg}}_{k}\rangle+2igf(x_{k})\langle\sigma^{\text{ee}}_{k}\rangle\langle a\sigma^{\text{eg}}_{l}\rangle-2igf(x_{l})\langle\sigma^{\text{ee}}_{l}\rangle\langle a\sigma^{\text{eg}}_{k}\rangle. \tag{8d}\]
The advantage of this second order is that we can include atom-field and atom-atom correlations; however, we pay the price in the form of a higher computation time, as the number of equations now scales quadratically with the cluster number instead of linearly as above. The terms involving two atom operators \(\langle\sigma_{i}\sigma_{j}\rangle\) are responsible for the quadratic scaling, as both \(i\) and \(j\) run between \(1\) and \(N\). In an attempt to combine the upsides of both first order and second order, we approximate two-operator products by splitting them if both are atom operators, but keep the correlations between atom and field; we call this particular truncation "mixed order":
\[\dot{\langle a\rangle}= -\Bigl(\frac{\kappa}{2}-i\Delta\Bigr)\langle a\rangle-igK\sum_{m}\!f(x_{m})\langle\sigma^{\text{ge}}_{m}\rangle \tag{9a}\]
\[\langle a^{\dagger}\dot{a}\rangle= -\Bigl(\kappa-2i\Delta\Bigr)\langle a^{\dagger}a\rangle+igK\sum_{m}\!f(x_{m})\bigl(\langle a\sigma^{\text{eg}}_{m}\rangle-\langle a^{\dagger}\sigma^{\text{ge}}_{m}\rangle\bigr) \tag{9b}\]
\[\langle\dot{a}a\rangle= -\Bigl(\kappa-2i\Delta\Bigr)\langle aa\rangle-2igK\sum_{m}\!f(x_{m})\langle a\sigma^{\text{eg}}_{m}\rangle \tag{9c}\]
\[\langle a\dot{\sigma}^{\text{eg}}_{m}\rangle= -\Bigl(\frac{\gamma+\kappa+\nu}{2}-i\Delta\Bigr)\langle a\sigma^{\text{eg}}_{m}\rangle+igf(x_{m})\langle a^{\dagger}a\rangle-ig\sum_{j=1}^{N_{0}}\sum_{l=1}^{K}\!f(x_{j})\langle\sigma^{\text{eg}}_{m}\sigma^{\text{ge}}_{j,l}\rangle-2igf(x_{m})\langle a^{\dagger}a\sigma^{\text{eg}}_{m}\rangle \tag{9d}\]
\[\langle a\dot{\sigma}^{\text{ge}}_{m}\rangle= -\Bigl(\frac{\gamma+\kappa+\nu}{2}-i\Delta\Bigr)\langle a\sigma^{\text{ge}}_{m}\rangle-igf(x_{m})\langle aa\rangle-ig\sum_{j=1}^{N_{0}}\sum_{l=1}^{K}\!f(x_{j})\langle\sigma^{\text{ge}}_{m}\sigma^{\text{ge}}_{j,l}\rangle \tag{9e}\]
\[\langle\dot{\sigma}^{\text{ee}}_{m}\rangle= igf(x_{m})\bigl(\langle a^{\dagger}\sigma^{\text{ge}}_{m}\rangle-\langle a\sigma^{\text{eg}}_{m}\rangle\bigr)-(\gamma+\nu)\langle\sigma^{\text{ee}}_{m}\rangle+\nu \tag{9f}\]
\[\langle\dot{\sigma}^{\text{ge}}_{m}\rangle= -\frac{\gamma+\nu}{2}\langle\sigma^{\text{ge}}_{m}\rangle-igf(x_{m})\langle a\rangle+2igf(x_{m})\langle a\sigma^{\text{ee}}_{m}\rangle \tag{9g}\]
\[\langle a\dot{\sigma}^{\text{ee}}_{m}\rangle= -\Bigl(\gamma+\nu+\frac{\kappa}{2}\Bigr)\langle a\sigma^{\text{ee}}_{m}\rangle+\nu\langle a\rangle-igf(x_{m})\langle aa\sigma^{\text{eg}}_{m}\rangle-ig\sum_{j=1}^{N_{0}}\sum_{l=1}^{K}\!f(x_{j})\langle\sigma^{\text{ee}}_{m}\sigma^{\text{ge}}_{j,l}\rangle+igf(x_{m})\langle a^{\dagger}a\sigma^{\text{ge}}_{m}\rangle. \tag{9h}\]
For the sake of better readability we refrain from carrying out all the expansions of three-operator expectation values like \(\langle aa\sigma^{\text{ee}}_{m}\rangle\). In order to end up with a closed set of equations, one needs to approximate the three-operator expectation
values by two-operator expectation values according to Eq. (7). By neglecting the atom-atom correlations we get rid of the quadratic scaling, but we are still able to keep higher-order correlations, namely the ones between atom and field. Grouping the atoms into clusters is more involved in the second order and mixed order as opposed to first order; one has to carefully take into account, for example, the correlations within a cluster [17, 45].
### Determining the steady state photon number
In order to study how the photon number depends on different parameters, we initialize the set of differential equations with a certain parameter set and let the system evolve until it reaches its quasi-steady state. The velocities are drawn from a Gaussian distribution with width \(\sigma_{\mathrm{v}}\) corresponding to a temperature. We average the photon number from the time point at which the quasi-steady state has been reached, see Figure 5. Using more clusters leads to much smaller fluctuations; however, this also requires more computation time. Finding a compromise between high cluster numbers and reasonable computation time is discussed in the next section. Moreover, one can imagine that certain samples of the distribution exhibit special properties; in order to average them out we use 50 different samples (unless indicated otherwise) and also average over them.
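A schematic Python sketch of these two averaging steps (drawing one velocity sample per trajectory and time-averaging after the quasi-steady state) is given below; the photon-number trace is a toy stand-in for the actual solution of the mean-field equations, and all numbers are purely illustrative.

```python
import numpy as np

def sample_cluster_velocities(sigma_v, n_cl, rng):
    """One realization of the velocity distribution, divided into n_cl clusters."""
    return rng.normal(0.0, sigma_v, size=n_cl)

def steady_state_photon_number(times, n_phot, t_steady):
    """Average the photon-number trace over the quasi-steady-state window."""
    mask = times >= t_steady
    return np.mean(n_phot[mask])

rng = np.random.default_rng(1)
estimates = []
for _ in range(50):                       # average over 50 velocity samples
    v = sample_cluster_velocities(sigma_v=1.0, n_cl=400, rng=rng)
    # toy trace standing in for the simulated mean-field photon number:
    # relaxation towards a plateau plus sample-size-dependent noise
    t = np.linspace(0.0, 10.0, 2000)
    trace = 100.0 * (1 - np.exp(-t)) + rng.normal(0.0, 2.0 / np.sqrt(len(v)), size=t.size)
    estimates.append(steady_state_photon_number(t, trace, t_steady=5.0))
print(np.mean(estimates), np.std(estimates))
```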
### Comparison expansion orders and number of clusters
Limited computation time forces us to make compromises, as both higher expansion orders and higher number of clusters describe the physical reality better, but also require more computation time.
In the following we compare the results for different expansion orders and numbers of clusters. In Figure 6(a) the solid line depicts the first order, the dotted line the mixed order, and the dashed line represents the data in second order. We see that all three lines almost perfectly align. Therefore, using higher expansion orders does not lead to significant differences and the first-order approximation captures the essential physics in our parameter regime.
In Figure 6(b) we depict the results in first order for different cluster numbers ranging from \(N_{\mathrm{cl}}=50\) to \(N_{\mathrm{cl}}=1000\). Here we see that varying the number of clusters \(N_{\mathrm{cl}}\) does have an influence on the results. We chose \(N_{\mathrm{cl}}=400\), as the difference between choosing \(N_{\mathrm{cl}}=400\) and as high as \(N_{\mathrm{cl}}=1000\) is only marginal, while the computation time increases significantly.
### Acknowledgements
We acknowledge funding from the Austrian Science Fund (FWF) doctoral college No. DK-ALM W1259-N27 (M.F) and the European Union's Horizon 2020 research and innovation program under the Grant Agreement No. 820404 iqClock (C. H., D. P, H. R.).
## Data availability
Figure 5: _Determining the steady state photon number._ Mean photon number time evolution for different \(\sigma_{\mathrm{v}}\) with (a) 200 atom clusters and (b) 1000 atom clusters. After a certain time, the system reaches its quasi steady state. The fluctuations become smaller if we divide the velocity distribution into a larger number of atom clusters. The other parameters are the same as in Figure 2 for \(N=10^{6}\).
### Underlying data
Figshare: Superradiant_laser_Figures. [https://doi.org/10.6084/m9.figshare.c.6781920.v1](https://doi.org/10.6084/m9.figshare.c.6781920.v1) [46].
This project contains the following underlying data:
* Data used in Figures 2-6. All data are stored using DelimitedFiles.jl package in Julia.
* Data are stored together with the source code (see below), specific files to readout and plot the data are provided.
Data are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).
### Software availability
* Source code available from: [https://github.com/martinf97/SRL_paper](https://github.com/martinf97/SRL_paper)
* Archived source code at time of publication: [https://doi.org/10.5281/zenodo.8232295](https://doi.org/10.5281/zenodo.8232295) [47]
* License: MIT License
|
2305.04676 | Enhancing Knowledge Graph Construction Using Large Language Models | The growing trend of Large Language Models (LLM) development has attracted
significant attention, with models for various applications emerging
consistently. However, the combined application of Large Language Models with
semantic technologies for reasoning and inference is still a challenging task.
This paper analyzes how the current advances in foundational LLM, like ChatGPT,
can be compared with the specialized pretrained models, like REBEL, for joint
entity and relation extraction. To evaluate this approach, we conducted several
experiments using sustainability-related text as our use case. We created
pipelines for the automatic creation of Knowledge Graphs from raw texts, and
our findings indicate that using advanced LLM models can improve the accuracy
of the process of creating these graphs from unstructured text. Furthermore, we
explored the potential of automatic ontology creation using foundation LLM
models, which resulted in even more relevant and accurate knowledge graphs. | Milena Trajanoska, Riste Stojanov, Dimitar Trajanov | 2023-05-08T12:53:06Z | http://arxiv.org/abs/2305.04676v1 | # Enhancing Knowledge Graph Construction Using Large Language Models
###### Abstract
The growing trend of Large Language Models (LLM) development has attracted significant attention, with models for various applications emerging consistently. However, the combined application of Large Language Models with semantic technologies for reasoning and inference is still a challenging task. This paper analyzes how the current advances in foundational LLM, like ChatGPT, can be compared with the specialized pretrained models, like REBEL, for joint entity and relation extraction. To evaluate this approach, we conducted several experiments using sustainability-related text as our use case. We created pipelines for the automatic creation of Knowledge Graphs from raw texts, and our findings indicate that using advanced LLM models can improve the accuracy of the process of creating these graphs from unstructured text. Furthermore, we explored the potential of automatic ontology creation using foundation LLM models, which resulted in even more relevant and accurate knowledge graphs.
ChatGPT, REBEL, LLMs, Relation-extraction, NLP, Sustainability
## I Introduction
The technological advancements, together with the availability of Big Data, have led to a surge in the development of Large Language Models (LLMs) [1]. This trend has paved the way for a cascade of new models being released on a regular basis, each outperforming its predecessors. These models have started a revolution in the field with their capability to process massive amounts of unstructured text data and by achieving state-of-the-art results on multiple Natural Language Processing (NLP) tasks.
However, one of the aspects which have not yet taken over the spotlight is the combined application of these models with semantic technologies to enable reasoning and inference. This paper attempts to fill this gap by making a connection between the Deep Learning (DL) space and the semantic space, through the use of NLP for creating Knowledge Graphs [2].
Knowledge Graphs are structured representations of information that capture the relationships between entities in a particular domain. They are used extensively in various applications, such as search engines, recommendation systems, and question-answering systems.
On a related note, there is a significant amount of raw texts available on the Web which contain valuable information. Nevertheless, this information is unusable if it cannot be extracted from the texts and applied for intelligent reasoning. This fact has motivated us to use some of the state-of-the-art models in an attempt to extract information from text data on the Web.
Yet, creating Knowledge Graphs from raw text data is a complex task that requires advanced NLP techniques such as Named Entity Recognition [3], Relation Extraction [4], and Semantic Parsing [5]. Large language models such as GPT-3 [6], T5 [7], and BERT [8] have shown remarkable performance in these tasks, and their use has resulted in significant improvements in the quality and accuracy of knowledge graphs.
To evaluate our approach in connecting both fields, we chose to analyze the specific use case of sustainability. Sustainability is a topic of great importance for our future, and a lot of emphasis has been placed on identifying ways to create more sustainable practices in organizations. Sustainability has become the norm for organizations in developed countries, mainly due to the rising awareness of their consumers and employees. However, this situation is not reflected in developing and underdeveloped countries to this extent. Although the perception of sustainability has improved, progress toward sustainable development has been slower, indicating the need for more concrete guidance [9]. Moreover, theoretical research has attempted to link strategic management and sustainable development in corporations in order to encourage the integration of sustainability issues into corporate activities and strategies [10]. Even though research has set a basis for developing standards and policies in favor of sustainability, a more empirical approach is needed for policy definitions and analyzing an organization's sustainability level with respect to the defined policies.
In this study, the goal is to make a connection between LLMs and semantic reasoning to automatically generate a Knowledge Graph on the topic of sustainability and populate it with concrete instances using news articles available on the Web. For this purpose, we create multiple experiments where we utilize popular NLP models, namely Relation Extraction By End-to-end Language generation (REBEL) [11] and ChatGPT [12]. We show that although REBEL is specifically trained for relation extraction, ChatGPT, a conversational agent using a generative model, can streamline the process
of automatically creating accurate Knowledge Graphs from an unstructured text when provided with detailed instructions.
The rest of the paper is structured as follows: Section II presents a brief literature overview, Section III describes the methods and experimental setup, Section IV outlines the results of the information extraction process, and finally Section V concludes the paper and outlines directions for future work.
## II Literature Review
### _Algorithms_
Our study focuses on the task of information extraction from news and reports available on the Web. For this purpose, we compare the capabilities of NLP models to generate a useful Knowledge Base on the topic.
A Knowledge Base represents information stored in a structured format, ready to be used for analysis or inference. Often, Knowledge Bases are stored in the form of a graph and are then called Knowledge Graphs.
In order to create such a Knowledge Base, we need to extract information from the raw texts in a triplet format. An example of a triplet would be \(<\)Person, Location, City\(>\). In the triplet, we have a structure consisting of the following links Entity -\(>\) Relation -\(>\) Entity, where the first entity is referred to as the subject, the relation is a predicate, and the second entity represents the object. In order to achieve this structured information extraction, we need to identify entities in the raw texts, as well as the relations connecting these entities.
In the past, this process was implemented by leveraging multi-step pipelines, where one step included Named-entity Recognition (NER) [3], and another step was Relation classification (RC) [13]. However, these multi-step pipelines often prove to have unsatisfactory performance due to the propagation of errors from the steps. In order to tackle this problem, end-to-end approaches have been implemented, referred to as Relation-Extraction (RE) [4] methods.
One of the models utilized in this study is REBEL (Relation Extraction By End-to-end Language generation) [11], which is an auto-regressive seq2seq model based on BART [14] that performs end-to-end relation extraction for more than 200 different relation types. The model achieves 74 micro-F1 and 51 macro-F1 scores. It was created for the purpose of joint entity-relation extraction.
REBEL is a generative seq2seq model which attempts to "translate" the raw text into a triplet format. The REBEL model outputs additional tokens, which are used during its training to identify a triplet. These tokens include \(<\)triplet\(>\), which represents the beginning of a triplet, \(<\)subj\(>\), which represents the end of the subject and the start of the predicate, and \(<\)obj\(>\), which represents the end of the predicate and start of the object. The authors of the paper for REBEL provide a parsing function for extracting the triplet from the output of REBEL.
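A sketch of such a parsing step is shown below. It follows the token ordering described in this paragraph (subject, then predicate, then object); the released parsing function that accompanies REBEL may differ in detail, so this is our reimplementation of the idea rather than the authors' code.

```python
def extract_triplets(decoded_text):
    """Parse generated text of the form
    '<triplet> subject <subj> predicate <obj> object ...' into tuples."""
    triplets = []
    subject, predicate, obj, current = "", "", "", None
    cleaned = decoded_text.replace("<s>", "").replace("</s>", "").replace("<pad>", "")
    for token in cleaned.split():
        if token == "<triplet>":
            if subject and predicate and obj:
                triplets.append((subject.strip(), predicate.strip(), obj.strip()))
            subject, predicate, obj, current = "", "", "", "subject"
        elif token == "<subj>":
            current = "predicate"
        elif token == "<obj>":
            current = "object"
        elif current == "subject":
            subject += " " + token
        elif current == "predicate":
            predicate += " " + token
        elif current == "object":
            obj += " " + token
    if subject and predicate and obj:
        triplets.append((subject.strip(), predicate.strip(), obj.strip()))
    return triplets

print(extract_triplets("<triplet> recycling <subj> subclass of <obj> waste management"))
```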
The second approach we took was to use ChatGPT [12], as a conversational agent and compare the performance in the task of entity-relation extraction and creation of a common Knowledge Base. The agent consists of three steps, including separate models: a supervised fine-tuning (SFT) model based on GPT-3 [6], a reward model, and a reinforcement learning model.
ChatGPT was trained using Reinforcement Learning from Human Feedback (RLHF) [15], employing methods similar to InstructGPT with minor variations in data collection. An initial model is trained through supervised fine-tuning, with human AI trainers engaging in conversations, assuming both user and AI assistant roles. To aid in formulating responses, trainers were given access to model-generated suggestions. The newly created dialogue dataset was then combined with the InstructGPT dataset, which was transformed into a dialogue format. In order to establish a reward model for reinforcement learning, comparison data needed to be gathered, consisting of two or more model responses ranked by quality. This data was collected by taking conversations between AI trainers and the chatbot, randomly selecting a model-generated message, sampling multiple alternative completions, and having AI trainers rank them. The reward models enabled fine-tuning of ChatGPT using Proximal Policy Optimization [16], and several iterations of this procedure were executed.
### _Use case: Sustainability_
The Global sustainability study of 2022 has reported that 71% out of 11,500 surveyed consumers around the world are making changes to the way they live and the products they buy in an effort to live more sustainably [17]. This shows that corporations not only need to change their operations to be more sustainable for the sake of the environment but also to be able to stay competitive.
With the vast amount of unstructured data available on the Web, it is crucial to develop methods that can automatically identify sustainability-related information from news, reports, papers, and other forms of documents. One such study identifies this opportunity and attempts to create a method for directly extracting non-financial information generated by various media to provide objective ESG information [18]. The authors have trained an ESG classifier and recorded a classification accuracy of 86.66% on a 4-class task using texts which they manually labeled. On a related note, researchers have taken a step further to extract useful ESG information from texts. In this article [19], the authors have trained a joint entity and relation extraction model on a private dataset consisting of ESG and CSR reports annotated internally at Credit Agricole. They were able to identify entities such as coal activities and environmental or social issues. In [20], the authors presented an approach for knowledge graph generation based on ESG-related news and company official documents.
## III Methods
This section describes the methods used in this research, including the data collection process and the entity-relation extraction algorithms used to analyze the gathered data.
### _Data Collecting Process_
In order to conduct the experimental comparison of the two approaches for entity-relation extraction, news data was gathered from the Web on the topic of sustainability. For this purpose, the News API [21] system was used. News API is an HTTP REST API for searching and retrieving live articles from all over the Web. It provides the ability to search through the articles posted on the Web by specifying the following options: keyword or phrase, date of publication, source domain name, and language.
Using News API, 94 news articles published between 2023-02-15 and 2023-03-19 on the topic of sustainability were collected. The collected texts varied in length, ranging from 50 to over 4,200 words. Given the limit on the number of tokens that can be passed as input to a language model, additional pre-processing steps needed to be taken to account for the longer texts.
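A minimal sketch of such a query is given below; the keyword, dates, and key are placeholders, and the endpoint and parameter names follow the public News API documentation as we understand it, not code from this study.

```python
import requests

API_KEY = "YOUR_NEWSAPI_KEY"  # placeholder

def fetch_articles(query, date_from, date_to, language="en", page_size=100):
    """Query the News API 'everything' endpoint for articles matching a keyword."""
    response = requests.get(
        "https://newsapi.org/v2/everything",
        params={
            "q": query,
            "from": date_from,
            "to": date_to,
            "language": language,
            "pageSize": page_size,
            "apiKey": API_KEY,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("articles", [])

articles = fetch_articles("sustainability", "2023-02-15", "2023-03-19")
texts = [a.get("content") or a.get("description") or "" for a in articles]
```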
### _Relation-Extraction Methods_
Relation-extraction is a fundamental task in NLP that aims to identify the semantic relationships between entities in a sentence or document. The task is challenging because it requires understanding the context in which the entities appear and the types of relationships that exist between them.
In this subsection, we describe how we utilize REBEL and ChatGPT for the task of relation extraction.
#### Iii-B1 Rebel
Our first approach was to use REBEL in an attempt to extract relations from unstructured news articles. In order for REBEL to be able to use the provided texts, they need to be tokenized with the corresponding tokenizer function. Tokenization is the process of separating the raw text into smaller units called tokens. Tokens can refer to words, characters, or sub-words. The model has a token limitation of 512 tokens, which means that the collected articles which are longer need to be pre-processed before sending them to the model for triplets extraction.
To address this limitation, we tokenize the raw text and divide the tokens into 256-token batches. These batches are processed separately by the REBEL model, and the results are subsequently merged to extract relations for longer texts. Metadata is also added to the extracted relations, referencing the token batch from which the relation was derived. With this approach, some relations may not be extracted accurately because the batch of tokens might begin or end in the middle of the sentence. However, the number of cases where this happens is insignificant. Thus, we leave their handling for future work.
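A sketch of this batching step is shown below, assuming the publicly available "Babelscape/rebel-large" checkpoint on the Hugging Face hub; the model name, generation settings, and metadata structure are our assumptions for illustration rather than the exact configuration used in the experiments.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "Babelscape/rebel-large"   # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def rebel_batches(text, batch_tokens=256):
    """Split a long article into 256-token windows and run REBEL on each one."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    outputs = []
    for start in range(0, len(ids), batch_tokens):
        chunk = tokenizer.decode(ids[start:start + batch_tokens])
        enc = tokenizer(chunk, return_tensors="pt", truncation=True, max_length=512)
        gen = model.generate(**enc, max_length=512, num_beams=3)
        # keep special tokens so <triplet>/<subj>/<obj> survive for parsing
        decoded = tokenizer.batch_decode(gen, skip_special_tokens=False)[0]
        outputs.append({"batch_index": start // batch_tokens, "raw_output": decoded})
    return outputs
```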
Once the entity-relation extraction process is finished, the extracted information is stored in a triplet structure. To further normalize the extracted entities, we perform Entity Linking [22]. Entity Linking refers to the identification and association of entity mentions in raw text with their corresponding entities in a Knowledge Base. The process of Entity Linking is not part of the REBEL model, and it is an additional post-processing step that is used to refine the extracted relations. In this study, we utilize DBpedia as our Knowledge Base and consider two entities identical if they share the same DBpedia URL. This approach will not work for entities that are not present on DBpedia.
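A sketch of the deduplication step is given below, assuming entity mentions have already been resolved to DBpedia URLs (for example with a public entity-linking service); the data structures and example mappings are purely illustrative.

```python
def deduplicate_by_dbpedia_url(triplets, entity_to_url):
    """Merge entity mentions that share the same DBpedia URL; keep unlinked mentions as-is."""
    canonical = {}
    for mention, url in entity_to_url.items():
        canonical.setdefault(url, mention)          # first seen mention becomes the label

    def normalize(mention):
        url = entity_to_url.get(mention)
        return canonical[url] if url else mention

    return {(normalize(s), p, normalize(o)) for s, p, o in triplets}

links = {
    "Samsung": "http://dbpedia.org/resource/Samsung",
    "Samsung Electronics": "http://dbpedia.org/resource/Samsung",
}
triples = {("Samsung", "subclass of", "conglomerate"),
           ("Samsung Electronics", "practices", "recycling")}
print(deduplicate_by_dbpedia_url(triples, links))
```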
#### Iii-B2 ChatGPT
The second approach taken in this paper uses OpenAI's ChatGPT [12]. We have created two experiments using ChatGPT.
The first experiment prompts ChatGPT to extract relations from the collected news articles. After extracting the relations, we follow the same steps as with the REBEL model in order to create a comprehensive Knowledge Base.
The second experiment focuses on creating a prompt that would directly generate the entire Knowledge Base and write an ontology describing the concepts identified in the texts. This approach has the goal of reducing the number of manual steps which need to be performed in order to obtain the final Knowledge Graph.
For both experiments, we set the value of the parameter 'temperature' to 0 in order to get more deterministic outputs since OpenAI models are non-deterministic by nature.
**Experiment 1**. For the first experiment, we prompt ChatGPT to extract relations connected to sustainability. ChatGPT was able to successfully extract entities and connect them with relations, and return the results in a triple format. After the relations had been extracted, the same post-processing step of Entity Linking was implemented on the results from ChatGPT.
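A sketch of this first prompt is given below, using the chat-completions interface of the openai Python package that was current in early 2023 (newer library versions expose a different client API); the model name, prompt wording, and response parsing are our assumptions, not the exact prompt used in the study.

```python
import openai

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

PROMPT = (
    "Extract sustainability-related relations from the text below. "
    "Return one triple per line in the format: subject | relation | object.\n\n{article}"
)

def extract_relations(article_text):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,   # deterministic-ish output, as described above
        messages=[{"role": "user", "content": PROMPT.format(article=article_text)}],
    )
    lines = response["choices"][0]["message"]["content"].splitlines()
    return [tuple(p.strip() for p in line.split("|")) for line in lines if line.count("|") == 2]
```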
Although ChatGPT was able to extract entities from the articles and link them with relations, it was not successful at abstracting concepts. The entities and relations identified often represented whole phrases instead of concepts.
To overcome the obstacle, we prompted ChatGPT to map identified entities and relations to a suitable OWL ontology [23]. However, ChatGPT failed to identify relevant sustainability concepts or define their instances. The identified classes, such as _Company_, _Customer_, _MarketingEcosystem_, _Resource_, _CustomerExperience_, _Convenience_, and _DigitalMarketing_, had some potential relevance to sustainability, but ChatGPT did not identify any instances for these classes.
**Experiment 2**. In the second experiment, we refined the prompt to ask ChatGPT to explicitly generate an OWL ontology on sustainability, which includes concepts like _organizations, actions, practices, policies_, and related terms. We also allowed ChatGPT to create additional classes and properties if necessary. We explicitly requested the results to be returned in RDF Turtle format.
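Once ChatGPT returns Turtle, the output can be loaded into a graph and queried. A minimal sketch using rdflib is shown below; the snippet of Turtle is a made-up illustration of the kind of output described in this section, not an actual model response.

```python
from rdflib import Graph

turtle_from_chatgpt = """
@prefix ex: <http://example.org/sustainability#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
ex:Organization a owl:Class .
ex:Practice a owl:Class .
ex:Soluna a ex:Organization ;
    ex:utilizes ex:ExcessEnergy .
ex:ExcessEnergy a ex:Practice .
"""

g = Graph()
g.parse(data=turtle_from_chatgpt, format="turtle")

# Which organizations use which practices?
query = """
PREFIX ex: <http://example.org/sustainability#>
SELECT ?org ?practice WHERE {
    ?org a ex:Organization .
    ?org ex:utilizes ?practice .
}
"""
for org, practice in g.query(query):
    print(org, practice)
```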
Providing additional information to ChatGPT resulted in the creation of an improved Knowledge Base. ChatGPT was able to define concepts such as _organizations_, _actions_, _practices_, and _policies_, as well as identify suitable relations to connect them together. Moreover, it was able to create instances of the defined classes and properties and link them together. This shows that adding more specific instructions to the prompts for ChatGPT can produce drastically different results.
## IV Results
This section presents the results from the experiments described in Section III. A comparison of the created Knowledge Base from both methods is given, and the characteristics of the
generated Knowledge Bases are outlined. Table I represents the Knowledge Bases from the REBEL model and the first experiment with ChatGPT, respectively. The table shows the number of entities, relations, and triplets extracted from the raw texts on sustainability.
As is evident from the table, the number of triplets extracted by both algorithms is similar. However, the number of entities that ChatGPT extracts is larger than that from REBEL. Although this is true, a lot of the extracted entities are not connected to each other via any relation, thus defeating the purpose of creating a Knowledge Base. Moreover, the number of unique relations is far too large for ChatGPT to be able to produce an ontology that can be used for further experimentation.
The most frequent relation for the REBEL model is the _'subclass of'_ relation, being part of _120_ triplets. For ChatGPT, it's the _'has'_ relation, being identified in 29 triplets. In addition, ChatGPT often fails to generate standard relations and entities which represent abstract concepts and instead outputs an entire phrase, such as in the example 'has already surpassed a goal set in 2019 to install 100,000 heat pumps in homes and businesses', where it identifies this phrase as a relation.
The following subsections represent a visual display of a subset of the generated Knowledge Bases from both algorithms.
### _Rebel_
In order to analyze the Knowledge Base generated using the REBEL model more accurately, we have created a visualization in a graph format, where each entity represents a node in the graph and each relation represents an edge. Fig. 1 displays a subset of the extracted Knowledge Base.
It is visible from the figure that the model successfully identifies entities related to sustainability, such as _'sustainability'_, _'recycling'_, _'clean technology'_, _'business model'_, _'repurposing'_, and even links corporations such as _'Samsung'_ to these entities. We can notice that multiple entities are interlinked in a meaningful way.
### _ChatGPT_
The same visualization for the Knowledge Base generated by the first experiment with ChatGPT is presented in this subsection. Fig. 2 displays a subset of the extracted Knowledge Base.
We can see from the figure that ChatGPT is able to identify entities related to sustainability, but they are represented as phrases instead of concepts. For example, ChatGPT extracts _'small high-value items in jumbo packaging'_, _'steps and waste from its supply chain'_, and _'suppliers to use recycled and recyclable materials'_, as entities.
Although these phrases are related to sustainability, they do not represent specific entities. This happens as a result of the fact that ChatGPT is a conversational model trained on a task to generate responses to a provided prompt and not specifically trained to be able to recognize entities and relations. On the other hand, ChatGPT is able to identify some concepts that REBEL does not, and additionally, it is able to link corporations to specific sustainability-related phrases.
Prompt engineering [24] is of great importance when it comes to the results generated from ChatGPT [12]. Since it is a generative model, small variations in the input sequence can create large differences in the produced output.
Fig. 1: Subset of the Knowledge Base generated using the REBEL model. The Knowledge Base is displayed in a graph format where entities are represented as nodes and relations are represented as edges.
Fig. 2: Subset of the Knowledge Base generated using the first experiment with ChatGPT. The Knowledge Base is displayed in a graph format where entities are represented as nodes and relations are represented as edges.
Observing the full Knowledge Base generated using ChatGPT, most of the time, the extracted entities represent phrases or whole sentences, which is not beneficial for creating a Knowledge Base because it's hard to normalize the entities and relations and create a more general ontology consisting of the concepts represented in the graph.
For this reason, we conducted the second experiment with ChatGPT, where we defined a more detailed prompt and instructed ChatGPT to generate an ontology based on each article it sees and additionally define instances of the generated ontology based on the information present in each article.
Fig. 3 presents the results of the refined prompt, with the ontology and instances generated from a single article out of the 94 collected articles.
Not only does ChatGPT create an ontology using the concepts it was instructed to use, but it also defines classes on its own and is able to create instances of most of the classes accurately.
As an example, it identifies the entity _"Soluna"_ as an _"instanceOf"_ the class _"Organizations"_. Furthermore, it is able to identify the triplet \(<\)Soluna, utilizes, Excess Energy\(>\), and \(<\)Excess Energy, instanceOf, Practices\(>\).
These types of triplets already start representing an initial knowledge base, which can answer queries on companies that implement practices that use excess energy. Although the hierarchy of concepts can be better defined so that more complex queries can be answered, this method represents a solid start in building a shared Knowledge Base, using only unstructured texts.
Using another article, the ontology and instances given in Fig. 4 have been generated. Looking at this second example, we can see that ChatGPT links practices, actions, and policies to the organizations, which was not the case in the previous example.
Additionally, it identifies the triplets \(<\)Starbucks, instanceOf, Organization\(>\), and \(<\)Starbucks, has Practice, ResourceSharing\(>\). This also allows for answering complex queries in the sustainability domain.
While the consistency of the generated ontologies may be limited, our analysis reveals that there are significant similarities between them. Therefore, future research can explore methods for unifying these ontologies across all articles, which has the potential to enhance the overall definition of concepts and their interrelationships in the sustainability domain.
It is important to mention that due to the limitations of the length of the input prompt passed to ChatGPT, it was not possible to prompt the model first to define an ontology based on all articles on sustainability and then create instances from all the other articles using the same ontology.
### _Quality Evaluation_
Since the evaluation of a Knowledge Base cannot be performed in an automated way based on some metric when ground-truth data is not available, we need to utilize qualitative principles in order to evaluate the results. Based on the practical framework defined in the study [25], the following 18 principles are identified:
1. Triples should be concise
2. Contextual information of entities should be captured
3. Knowledge graph does not contain redundant triples
4. Knowledge graph can be updated dynamically
5. Entities should be densely connected
6. Relations among different types of entities should be included
7. Data source should be multi-field
8. Data for constructing a knowledge graph should be of different types and from different resources
9. Synonyms should be mapped, and ambiguities should be eliminated to ensure reconcilable expressions
10. Knowledge graph should be organized in structured triples for easily processed by machine
11. The scalability with respect to the KG size
12. The attributes of the entities should not be missed
13. Knowledge graph should be publicly available rather than proprietary
14. Knowledge graph should be an authority
15. Knowledge graph should be concentrated
16. The triples should not contradict each other
17. For domain-specific tasks, the knowledge graph should be related to that field
Fig. 4: Knowledge Base generated with ChatGPT for the second article. The identified concepts are represented as yellow rectangles, and the instances are represented with green rectangles.
Fig. 3: Knowledge Base generated with ChatGPT for the first article. The identified concepts are represented as yellow rectangles, and the instances are represented with green rectangles.
18. Knowledge graph should contain the latest resources to guarantee freshness

According to these principles, in our use case, we manually inspected the Knowledge Graphs generated with the proposed methods, and we can conclude that the second ChatGPT approach creates a Knowledge Graph of greater quality compared to the other two Knowledge Bases. However, it should be noted that to create these Knowledge Bases, a few steps of refining the answers from ChatGPT are needed. Sometimes the produced output is erroneous and needs to be corrected before proceeding. Thus, this calls for methods for automatically identifying incorrect OWL syntax and prompting the model to fix its previous output.
## V Conclusion
In this paper, we presented a Natural Language Processing-based method for constructing a Knowledge Graph on the topic of sustainability using raw documents available on the Web. The study demonstrated that meaningful information could be extracted from unstructured data through an automated process, which can subsequently be utilized for decision-making and process modeling. The focus on sustainability served as a concrete use case, illustrating the effectiveness and potential of the presented approach. Although the experiments were conducted on the use case of sustainability, the primary emphasis is on the methodology itself, which lays the foundation for empirical analysis of qualitative data derived from various sources. The construction of a Knowledge Base using the presented approach can serve as a first step for analyzing diverse aspects of any subject matter and answering complex queries based on the gathered information. In future research, first, we plan to adopt a more formal framework for assessing the quality of generated knowledge graphs. Such a framework will enable us to effectively evaluate the quality of KGs and provide a standardized means of assessing their overall quality. We also want to extend the presented methodology to other domains, unifying generated knowledge bases and employing graph-based modeling to predict missing links between concepts and relationships for a given domain.
|
2307.05311 | Mechanism and kinetics of bacteria-killing in a batch study in presence
of Ag impregnated activated carbon | A mathematical model based on classical species balance equation has been
developed to explain the experimental observation on E.Coli cell-killing in a
batch reactor and to extract information on cell-killing mechanism. Maximum
likelihood optimization method has been used to obtain the best fit to the
experimental data to extract ten unknown kinetic and thermodynamic parameters,
which is otherwise practically impossible. Bacteria killing kinetics is found
to be dominated by contact killing mechanism with approximately one order
difference between contact-and bulk-killing kinetics. The order of Ag-bacteria
interaction is highly non-linear and thus the interaction is highly complex in
manner. Concentration of Ag on the outer surface of AC controls the kinetics of
bacteria killing. Mechanism of release of Ag from the inner surface of pores
does has minimal impact on the bacteria killing kinetics. Activity of silver
seems to remain preserved after cell killing and the surface concentration of
Ag used in the experiment is so high that bacteria killing kinetics is
independent of whether Ag remains active or not. | Vivekananda Bal | 2023-07-11T14:58:46Z | http://arxiv.org/abs/2307.05311v1 | Mechanism and kinetics of bacteria-killing in a batch study in presence of Ag impregnated activated carbon
#### Abstract:
A mathematical model based on the classical species balance equation has been developed to explain the experimental observation on _E.Coli_ cell-killing in a batch reactor and to extract information on the cell-killing mechanism. A maximum likelihood optimization method has been used to obtain the best fit to the experimental data to extract ten unknown kinetic and thermodynamic parameters, which is otherwise practically impossible. Bacteria killing kinetics is found to be dominated by the contact killing mechanism, with approximately one order of magnitude difference between contact- and bulk-killing kinetics. The order of the Ag-bacteria interaction is highly non-linear and thus the interaction is highly complex in nature. The concentration of Ag on the outer surface of AC controls the kinetics of bacteria killing. The mechanism of release of Ag from the inner surface of pores has minimal impact on the bacteria killing kinetics. The activity of silver seems to remain preserved after cell killing, and the surface concentration of Ag used in the experiment is so high that the bacteria killing kinetics is independent of whether Ag remains active or not.
## 1 Introduction
Drinking water contains pathogens like bacteria which cause water borne disease (WHO, 2014; WHO, 2015). Removal of those bacteria from drinking water is a challenging task. One of the widely used methods of removal of these pathogens is allowing the contaminated water to pass through a packed bed filled with material having antibacterial property. One such material is Ag impregnated granular activated carbon (GAC) possessing a higher surface to volume ratio due to the highly porous nature of the carbon. Though there has been a plethora of experimental works to understand the bacteria removal efficiency of such a system, there are not many mathematical models to understand the mechanism of killing of bacteria in these systems.
The effect of Ag nanoparticle impregnated AC in removing bacteria from drinking water was studied by many researchers worldwide (Acevedo et al., 2014; Biswas and Bandyopadhyaya, 2016; Pal et al., 2006; Park et al., 2009; Srinivasan and Bandyopadhyaya, 2013; Zhao et al., 2013). The first study (Pal et al., 2006) of the antibacterial activity of Ag modified AC found that more than 90% of the bacteria are killed within 60 minutes and that leaching of Ag is low, close to 40 ppb. The antibacterial activity of Ag was found to be due to Ag-mediated production of reactive oxygen (Park et al., 2009). Impregnated Ag particles were found to be present on the outer surface as well as the inner surface of the AC particles, and impregnation of silver in the particle form reduces the release of silver into water (Zhao et al., 2013). Zhao et al., 2013 reported an Ag release of more than 25% of the deposited amount within 5-6 days. Acevedo et al., 2014 also found that Ag nanoparticles were present on the outer as well as inner surface of AC and that treatment of AC with KOH before Ag nanoparticle impregnation gives a higher nanoparticle density and higher antibacterial activity, with complete killing of bacteria within 15 minutes. However, they did not report any Ag release statistics. Later, it was found that plasma treatment of AC allows the deposition of Ag particles mostly on the outer surface of AC (Biswas and Bandyopadhyaya, 2016; Srinivasan and Bandyopadhyaya, 2013) and that this reduces the Ag release to 4 wt% within 10 days (Biswas and Bandyopadhyaya, 2016). This suggests that the Ag release and bacteria killing efficiency are the two challenging problems, and there have been quite a few experimental works addressing those problems. Understanding whether the bacteria killing mechanism is via contact killing or leaching of Ag followed by bulk killing, and obtaining the killing kinetics, will help experimentalists in designing experiments to obtain better killing efficiency. Determination of killing kinetics or mechanism experimentally can be a challenging task. In this regard, mathematical models can be useful in obtaining information on the killing mechanism as well as the kinetics.
In order to understand bacterial random motion and chemotaxis in confined geometry, several discrete and continuum models have been developed over the years (Berg, 1983; Keller and Segel, 1971). The most popular one was the continuum model developed by Keller and Segel, 1971 based on the classical species balance equation. Later, many improved models have been proposed (Dahlquist et al., 1976; Ford and Cummings, 1992; Ford and Lauffenburger, 1991; Rivero et al., 1989), incorporating some modifications in the existing models to find the diffusion and chemotactic coefficients. Though modeling of packed bed reactors has been widely used (Cordero-Lanzac et al., 2019; Fogler, 2005; Zhu et al., 2020) to extract reaction kinetics and concentration profiles of species in many different problems across many disciplines and is a classical method, there are not many rigorous and robust mathematical models to understand the mechanism of killing of bacteria or to extract the killing kinetics.
Thus, it is clear that there have been many attempts either to understand the bacterial transport mechanism in a restricted environment or to extract chemical reaction kinetics in porous media or on catalyst particle surfaces, but no attempts have been made to understand the mechanism of killing of bacteria or its kinetics, specifically in an environment where the antibacterial agent is present on the outer surface as well as on the inner surface of the complex tortuous pores of the AC. This is mainly because of the difficulty associated with performing such experiments or the unavailability of experimental tools to track the killing mechanism. There are many theories on the mechanism of interaction of silver with bacteria. According to most of the literature, silver penetrates the cell wall of the bacteria, binds with sulphur- and phosphorus-containing proteins, and thus prevents cell division (Park et al., 2009; Yin et al., 2020). Therefore, knowledge of the amount of Ag required to kill one CFU of bacteria and of the availability of Ag after killing is extremely important to design an optimum and efficient Ag-AC composite. In this work, to address the above questions, a rigorous and robust mathematical model based on simple species balance equations has been presented by including in the model the effects of (i) Ag impregnation on both the inner and outer surface of AC, (ii) leaching of Ag into the bulk water from both the outer and inner surface of AC, (iii) leach killing kinetics, and (iv) contact killing kinetics.
## 2 Experiment
The experimental data presented in this article were obtained from different sources (Biswas and Bandyopadhyaya, 2016; Pal et al., 2006; Zhao et al., 2013). The experimental procedure was described in detail in their work; here we present a brief description for the convenience of the readers. Ag nanoparticles were synthesized from AgNO\({}_{3}\) in the presence of trisodium citrate dihydrate under UV light exposure. AC particles with sizes ranging from 420-840 \(\mu\)m were used as a supporting medium for Ag. The AC pore size distribution was measured using a mercury porosimeter, and the pores are micro- and mesoporous in nature. To impregnate the Ag particles on the AC particles, the AC was first functionalized using plasma treatment in an oxygen atmosphere and then crushed gently and sieved using a \(20\times 40\) mesh sieve. The sieved AC particles were then added to the Ag nanoparticle suspension and stirred overnight at room temperature. Thereafter, the Ag-deposited AC particles were separated by filtration and dried at room temperature.
For studying the antibacterial activity of Ag-AC, _E.Coli_ K12 was used as a model bacterium. For the experiments, an _E.Coli_ culture was prepared at a concentration of \(10^{4}\) CFU/mL. Ag-AC granules at concentrations of 2, 4, 6, and 8 mg/mL were added to the cell culture and incubated at 150 rpm in a rotating shaker. Samples were collected every 5 minutes for a period of 35 minutes, plated on agar, incubated for 24 h, and the number of colonies formed was then counted.
In the present work, both the AC size and AC pore size distribution are assumed to have a Gaussian distribution.
## 3 Model development
As we discussed, AC possesses a complex tortuous porous structure. In the presence of AC particles, bacteria will reach the surface of the carbon granules by a self-diffusion process and come in contact with Ag nanoparticles. Since the AC granules used in this study were micro- and mesoporous in nature, the pore size remains below 50 nm (Biswas and Bandyopadhyaya, 2016; Pal et al., 2006; Zhao et al., 2013). The bacterium used in this study, _E.Coli_ K12, is greater than 1 \(\mu\)m in length and about half a micrometer in diameter. This will prevent the bacteria from entering the pores of the AC. Thus, bacteria will interact with the Ag present on the outer surface of the AC and the Ag present in the bulk liquid. The rate of change of the bacteria concentration in the bulk liquid phase is given by
\[\frac{\partial c_{b}}{\partial t}=-\tau_{lb}-\tau_{sb} \tag{1}\]
Here, \(c_{b}\) is the bulk concentration of bacteria. \(\tau_{lb}\) is the rate of killing of bacteria in the liquid phase. \(\tau_{lb}=k_{l}c_{Ag,b}^{m}c_{b}^{n}\), here \(k_{l}\) is the rate constant for killing of bacteria in liquid phase and \(c_{Ag,b}\)is the concentration of Ag in the bulk liquid phase. \(m\) and \(n\) are the orders of the reaction with respect to bulk Ag and bacteria, respectively. \(\tau_{sb}\)is the rate of killing of bacteria in the liquid phase adjacent to the surface of AC. \(\tau_{sb}=k_{s}c_{Ag,sfo}^{m^{\prime}}c_{b}^{n}\), here \(k_{s}\) is the rate constant for killing of bacteria on the surface of GAC, \(m^{\prime}\)is the order of
surface reaction with respect to surface silver. \(c_{Ag,sfo}\) is the concentration of Ag on the outer surface. Since the system is well mixed and assuming that there is no diffusion limitation adjacent to the particle surface, the bulk concentration of bacteria will be the same as that in the liquid phase adjacent to the solid surface. In developing this model, it is assumed that the bulk solution is well mixed due to continuous stirring of the mixture.
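As a simple illustration of Eq. (1), the sketch below integrates the bulk bacteria balance for the limiting case in which the bulk and outer-surface Ag concentrations are held constant; all numerical values are invented placeholders for illustration only, not the parameters extracted in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameters
k_l, k_s = 1e-3, 1e-2      # bulk (leach) and contact killing rate constants
m, m_s, n = 1.0, 1.0, 1.0  # reaction orders (m' written as m_s here)
c_ag_bulk = 0.04           # assumed constant bulk Ag concentration
c_ag_surf = 5.0            # assumed constant outer-surface Ag concentration

def dcb_dt(t, cb):
    r_lb = k_l * c_ag_bulk**m * cb**n      # bulk (leach) killing rate
    r_sb = k_s * c_ag_surf**m_s * cb**n    # contact killing rate at the AC surface
    return -(r_lb + r_sb)

sol = solve_ivp(dcb_dt, (0.0, 35 * 60), y0=[1e4], t_eval=np.linspace(0, 35 * 60, 8))
print(sol.t / 60, sol.y[0])   # CFU/mL every 5 minutes
```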
In the literature, there are mixed reports on the release of silver from the surface (Acevedo et al., 2014; Biswas and Bandyopadhyaya, 2016; Pal et al., 2006; Zhao et al., 2013). Some claim a controlled and low release of Ag within the permissible limit, whereas some others report a significant release of silver into the water. This sparks the debate regarding the mechanism of killing of bacteria, leach killing vs. contact killing. Another important aspect is the loading of Ag on AC. In some work, Ag loading is mostly inside the pores (Pal et al., 2006; Zhao et al., 2013), whereas others (Biswas and Bandyopadhyaya, 2016)claim a selective impregnation of Ag on the outer surface of AC.
Here, two situations have been considered,
\[CaseA: Bacteria+jAg=dead\ bacteria+jAg\]
when activity of Ag is not lost, and
\[CaseB: Bacteria+jAg=dead\ bacteria\]
when Ag activity is lost.
From the stoichiometry balance (Fogler, 2005), we get,
\[r_{LAB}=j\tau_{lb}\]
and
\[r_{SAg}=j\tau_{sb}\]
Here, \(r_{LAB}\) is the rate of loss of Ag due to interaction with bacteria in the bulk liquid and \(r_{SAg}\) is the rate of loss of Ag due to interaction with bacteria on the outer surface of AC.
The equation 1 needs tracking of Ag concentration in the bulk liquid phase, which is given by
\[\frac{dc_{Ag,b}}{dt}=\frac{1}{V}\sum_{i=1}^{N}2\pi R_{i}^{2}N_{pi}D_{Ag}\frac{ dc_{Ag,pi}}{dz}\Big{|}_{z=L/2}\]
\[+\frac{A_{GAC,o}k_{Ag}(c_{Ag,s}-c_{Ag,b})}{V}-r_{LAB} \tag{2}\]
\[r_{LAB}=\begin{cases}0,&\text{for no loss of Ag activity, case A}\\ jk_{l}c_{Ag,b}^{m}c_{b}^{n},&\text{for loss of Ag activity after killing, case B}\end{cases}\]
To take into account the effect of loss of activity of Ag after killing of bacteria, one needs information on the stoichiometry, which is practically impossible to determine. Thus, the stoichiometry of the reaction is assumed to follow the ratio \(bacteria\): \(Ag=1\): \(j\). \(j\) is a positive integer and the value of \(j\) used in the simulation varies between 1 and 20. \(V\) is the volume of the bacteria-containing water, \(N_{pi}\) is the total number of pores of specific size \(i\) in the AC added in that experiment, and \(R_{i}\) is the radius of a pore. \(D_{Ag}\) is the silver diffusion coefficient in the tortuous interconnected pores in the AC. \(c_{Ag,s}\) is the solubility of Ag in the liquid phase. It also represents the maximum attainable concentration of Ag in the liquid phase adjacent to the solid surface due to dissolution of the Ag coated on the AC. \(c_{Ag,pi}\) is the concentration of Ag inside a pore. The AC used in this study has a Gaussian pore size distribution. \(A_{GAC,o}\) is the total outer surface area of the AC particles used in each experiment and \(k_{Ag}\) represents the rate constant/mass transfer coefficient for release/dissolution of Ag into the liquid phase from the outer surface of AC.
Ag present in the bulk liquid comes from dissolution of Ag/Ag nanoparticles present on the outer surface of the AC particles and inside the pores on the AC. Thus, there is diffusion transport of Ag from the inside of the pores/AC particles towards the pore mouth/AC particle surface. \(D_{Ag}\frac{dc_{Ag,pi}}{dz}\Big{|}_{z=L/2}\) is the diffusion flux of Ag at the mouth of pore \(i\) and the corresponding species balance equation for the transport (Fogler, 2005)of the Ag inside the pore \(i\) is given as
\[\frac{\partial c_{Ag,pi}}{\partial t}=D_{Ag}\left[\frac{1}{r}\frac{\partial} {\partial r}\left(r\frac{\partial c_{Ag,pi}}{\partial r}\right)+\frac{ \partial^{2}c_{Ag,pi}}{\partial z^{2}}\right] \tag{3}\]
Here, \(r_{Ag}\) is the rate of loss of Ag from the liquid phase due to the killing of bacteria, with \(r_{Ag}=r_{l}\), where it is assumed that the activity of silver is lost after the killing of bacteria. In developing the above model for the transport of Ag, it is assumed that the pores are straight cylindrical channels open to the medium at both ends. The initial and boundary conditions for solving eq. 3 are given below
\[IC:\ t=0,\ c_{Ag}=0\]
\[BC1:\ z=0,\ c_{Ag}=c_{Ag,b}\]
\[BC2:\ z=L/2,\ \frac{dc_{Ag}}{dz}=0\]
\[BC3:\ r=0,\ \frac{dc_{Ag}}{dr}=0\]
\[BC4:\ r=R,\ flux=k^{\prime}_{Ag}\big(c_{Ag,s}-c_{Ag,pi}\big)\]
Here, the pores are assumed to be cylindrical in shape, \(D_{Ag}\) is the effective diffusion coefficient of Ag in the pore, \(r\) is the radial co-ordinate, \(z\) is the axial co-ordinate, \(k^{\prime}_{Ag}\) is the mass transfer coefficient for Ag release from the pore walls and \(L\) is the length of the pore. The pore radii and the GAC granule sizes are found to follow Gaussian distributions.
Change in the surface concentration of Ag is given by
\[A_{tot}\frac{dc_{Ag,sf}}{dt}=A_{GAC,o}\frac{dc_{Ag,sf,0}}{dt}+A_{GAC,in}\frac{dc_{Ag,sf,in}}{dt}=-A_{GAC,o}k_{Ag}\big(c_{Ag,s}-c_{Ag,b}\big)-Vr_{SAg}-\sum_{i=1}^{N}\int_{0}^{L}2\pi R_{i}\,N_{pi}k^{\prime}_{Ag}\big(c_{Ag,s}-c_{Ag,pi}\big)\,dL \tag{4}\]
\[r_{SAg}=\begin{cases}0,&\text{for no loss of Ag activity, case A}\\ jk_{5}c^{m^{\prime}}_{Ag,sf}c^{n}_{b},&\text{for loss of Ag activity after killing, case B}\end{cases}\]
In actual GAC particles, pores are tortuous, interconnected, and of varying cross-sectional area along the porous path. In addition, some pores are open at both ends and most of them have dead ends. To incorporate these effects, we define an effective diffusion coefficient \(D_{e}\) in the porous GAC particle and a concentration of Ag, \(c_{Ag}\), defined per unit volume of the GAC particle, and rewrite the species balance equation (Fogler, 2005) for Ag in the GAC as given by
\[\frac{\partial c_{Ag}}{\partial t}+\frac{1}{r^{2}}\frac{\partial}{\partial r}\bigg(-r^{2}D_{e}\frac{\partial c_{Ag}}{\partial r}\bigg)-S_{v}r_{n}=0 \tag{5}\]
Here, \(D_{e}=\frac{D_{Ag}\varepsilon\sigma}{\tau}\) (Fogler, 2005), where \(\varepsilon\) is the porosity of the GAC particle, \(\sigma\) is the constriction factor, and \(\tau\) is the tortuosity. The pore constriction factor takes into account the variation in the cross-sectional area that is normal to the diffusion path. Tortuosity is defined as the ratio of the actual distance a molecule travels between two points to the shortest distance between those two points. \(S_{v}\) is the surface area of AC per unit volume of AC. \(r_{n}=k_{n}\big(c_{Ag,s}-c_{Ag}\big)\), where \(k_{n}\) is the rate constant for release/dissolution of Ag from the GAC surface. Thus, the bulk concentration of Ag is tracked by the same equation 2 with a modified flux term. Thus, equation 2 becomes
\[\frac{dc_{Ag,b}}{dt}=\frac{1}{V}\sum_{i=1}^{N}4\pi R_{i}^{2}N_{di}D_{e}\frac{dc_{Ag}}{dr}\bigg|_{r=d_{i}/2}+\frac{A_{GAC,o}k_{Ag}\big(c_{Ag,s}-c_{Ag,b}\big)}{V}-r_{LAB} \tag{6}\]
Here,
\[r_{LAB}=\begin{cases}0,&\text{for no loss of Ag activity, case A}\\ jk_{l}c^{m}_{Ag,b}c^{n}_{b},&\text{for loss of Ag activity after killing, case B}\end{cases}\]
Here, \(R_{i}\) and \(d_{i}\) are the radius and diameter of GAC granules/particles of size class \(i\), and \(N_{di}\) is the number of GAC granules of size class \(i\).
Change in the surface concentration of Ag is given by
\[A_{tot}\frac{dc_{Ag,sf}}{dt}=A_{GAC,o}\frac{dc_{Ag,sf,0}}{dt}+A_{GAC,in}\frac{dc_{Ag,sf,in}}{dt}=-A_{GAC,o}k_{Ag}\big(c_{Ag,s}-c_{Ag,b}\big)-Vr_{SAg}-\sum_{i=1}^{N}\int_{0}^{d_{i}/2}4\pi r^{2}N_{di}\,S_{v}\,r_{n}\,dr \tag{7}\]

The unknown model parameters were estimated by minimizing the weighted least-squares objective
\[\min_{p}\big{(}C_{e}-C_{s}(p,p_{0})\big{)}^{\mathsf{T}}V_{e}^{-1}\big{(}C_{e}-C_{ s}(p,p_{0})\big{)} \tag{8}\]
Here, \(p\) is the vector of parameters to be estimated, \(p_{0}\) is the vector of predetermined parameters, \(C_{e}\) is the vector of experimental observations, \(C_{s}\) is the vector of model predictions, and \(V_{e}\) is the matrix of experimental error variances.
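As an illustration, the minimization in eq. 8 can be set up numerically along the following lines (a minimal Python sketch; the `model` wrapper around the PDE solver, the optimizer choice, and the variable names are assumptions made here for illustration only):

```python
import numpy as np
from scipy.optimize import minimize

def objective(p, p0, c_exp, V_e, model):
    """Weighted least-squares objective (C_e - C_s)^T V_e^{-1} (C_e - C_s)."""
    residual = c_exp - model(p, p0)            # C_e - C_s(p, p0)
    return residual @ np.linalg.solve(V_e, residual)

# Usage sketch: `model` is a hypothetical wrapper around the PDE solver that
# returns predicted concentrations at the experimental sampling times.
# result = minimize(objective, x0=p_guess,
#                   args=(p0, c_exp, V_e, model), method="Nelder-Mead")
```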
## 4 Method of computation
The governing PDEs were discretized in the spatial domain using the finite difference method (backward and central difference schemes) to obtain a set of ODEs in the time domain (Gupta, 2013; Patankar, 1980). The ODEs were then discretized in the time domain using the implicit Euler scheme to obtain a set of nonlinear algebraic equations, which were solved using the Newton-Raphson method along with the Gauss-Seidel and successive over-relaxation methods. The value of the relaxation factor was maintained between 0 and 1 (Gupta, 2013; Patankar, 1980). Around 50-120 grid points were used in the radial direction depending on the pore size and 140-250 grid points in the axial direction depending on the pore length, making a total of 7,000-30,000 grid points, which represents the optimum number of grids minimizing the total error (truncation error + round-off error). Dense grids were used near the wall of a pore to capture the sharp variation of concentration, whereas sparse grids were used in the central region. The mass balance for Ag as well as bacteria was checked at every time step, and the mass balance error was well below 2%. An adaptive time stepping method was used, where the time step size was varied between \(10^{-5}\) and \(10^{-2}\) s. A relative tolerance of \(10^{-5}\) was used to declare convergence.
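A minimal sketch of this solution strategy, reduced to a linear 1-D diffusion problem for illustration (Python; the grid, the zero-flux boundary treatment, and the omission of the Newton-Raphson linearization and adaptive time stepping are simplifying assumptions, not the full scheme used in this work):

```python
import numpy as np

def sor_sweep(A, b, x, omega=0.8):
    """One Gauss-Seidel sweep with relaxation factor omega in (0, 1]."""
    n = len(b)
    for i in range(n):
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x_gs = (b[i] - sigma) / A[i, i]
        x[i] = (1.0 - omega) * x[i] + omega * x_gs
    return x

def implicit_euler_diffusion(c0, D, dz, dt, n_steps, omega=0.8, tol=1e-5):
    """Backward-Euler time stepping of dc/dt = D d2c/dz2 with zero-flux ends,
    each step solved iteratively by relaxed Gauss-Seidel sweeps."""
    n = len(c0)
    lam = D * dt / dz ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * lam
        if i > 0:
            A[i, i - 1] = -lam
        if i < n - 1:
            A[i, i + 1] = -lam
    A[0, 0] = A[-1, -1] = 1.0 + lam        # zero-flux (Neumann) boundaries
    c = np.array(c0, dtype=float)
    for _ in range(n_steps):
        b = c.copy()
        x = c.copy()
        for _ in range(10000):
            x_prev = x.copy()
            x = sor_sweep(A, b, x, omega)
            if np.max(np.abs(x - x_prev)) < tol * (np.max(np.abs(x)) + 1e-30):
                break
        c = x
    return c
```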
The model and simulation results were validated against existing simulation results for species balance equations reported in the literature for different systems. The species balance equation for Ag inside the pore (eq. 4) was validated against the model presented in the textbook (Fogler, 2005) for the conversion of propylene oxide to propylene glycol. Similarly, model eq. 6 was validated against the model presented in the textbook (Fogler, 2005) for the concentration profile in a spherical catalyst.
## 5 Results and discussion
A total of five data sets were taken from Biswas and Bandyopadhyaya (Biswas and Bandyopadhyaya, 2016): four batch mode experimental data sets for bacteria killing and one batch data set for silver release. Two batch experimental data sets for bacteria killing were taken from Pal et al. (Pal et al., 2006). Three experimental data sets were taken from Zhao et al. (Zhao et al., 2013): two batch experimental data sets for bacteria killing and one batch data set for Ag release. Thus, there are 10 unknown parameters and 10 experimental data sets to be fitted with. Parameters used in this simulation were taken from the respective articles (Biswas and Bandyopadhyaya, 2019, 2016; Pal et al., 2006; Zhao et al., 2013). The surface concentration of Ag is calculated from the BET/mercury porosimeter result and the Ag loading data reported in the respective articles (Biswas and Bandyopadhyaya, 2019; Pal et al., 2006; Zhao et al., 2013). The distribution of silver between the inner and outer surface of AC is extremely important in this case as it varies with the methods used by different research groups to impregnate the Ag. In the case of the Ag-AC composites prepared by Pal et al. (Pal et al., 2006) and Zhao et al. (Zhao et al., 2013), Ag is assumed to be homogeneously distributed between the inner and outer surfaces of AC, whereas for Biswas and Bandyopadhyaya (Biswas and Bandyopadhyaya, 2016), 50-80% of the total Ag loading is assumed to be distributed homogeneously on the outer surface of AC and the rest of the amount is assumed to be distributed homogeneously over the inner surface of the AC. In developing this model, it is assumed that Ag formed a continuous film on the AC surface. Since the Ag particles used in their antibacterial activity study were larger in size compared to the micro- and mesopores, micro- and mesopores are omitted from the Ag surface concentration calculation as it is less likely that those pores would be accessible to Ag particles.
Initial concentration of _E.Coli_ in the antibacterial activity study was \(10^{4}\) CFU/mL in case of Biswas and Bandyopadhyaya (Biswas and Bandyopadhyaya, 2016) and \(10^{7}\) CFU/mL in case of Zhao et al. (Zhao et al., 2013) and Pal et al. (Pal et al., 2006). Similarly, initial volume of bacteria suspension and amount of Ag-AC used in the experiment were 100 mL and 2, 4, 6, 8 mg/mL, respectively, in case of Biswas and Bandyopadhyaya (Biswas and Bandyopadhyaya, 2016), 100 mL and 2 g, respectively, in case of Zhao et al. (Zhao et al., 2013), and 10 mL and 2 g, respectively, in case of Pal et al. (Pal et al., 2006). Initial loading of Ag on AC was 0.8 wt% in case of Biswas and Bandyopadhyaya (Biswas and Bandyopadhyaya, 2016), 1.65wt% in case of Zhao et al. (Zhao et al., 2013), and 0.2 wt% in case of Pal et al. (Pal et al., 2006).
Fitted simulation results shown in **Figs. 2-4** are for case A, where the activity of Ag remains preserved even after interaction with bacterial cells. The corresponding fitted parameters are shown in **Table 1** for two sets of models, set I and set II, and for two different cases, case
A and case B. It is observed that model simulation results explain the experimental variation very well in all cases. In case of **Fig.2**, faster killing of cell is achieved in case of batch study with higher Ag-AC concentration, which is obvious, as higher Ag-AC concentration increases the outer surface area density available for surface/contact killing per unit volume of the system. Both 6 and 8 mg/mL concentration shows the complete killing within 15 minutes suggesting that outer surface area limitation to the contact killing is not present in higher concentration. At low concentration of Ag-AC, low outer surface area density per unit volume of the system available for contact killing increases the killing time due to crowding of bacteria. Thus, killing time is higher in case of 2 mg/mL system as compared to the system with 4 mg/mL. Zhao et al. (2013); **Fig. 4**) found a time scale of 25 minutes for complete killing of bacteria for an Ag loading of 1.65wt%, which is twice than that used by Biswas and Bandyopadhyaya (Biswas and Bandyopadhyaya, 2016) for complete killing within similar time scale. This is probably because of the low surface concentration of Ag on the outer surface of AC in case of Zhao et al. (2013) as their method of Ag impregnation is non-selective. In case of Pal et al. (Pal et al., 2006; **Fig. 4**), time scale of complete killing is much higher than others because of the low surface Ag loading due to non-selective impregnation.
From **Table 1**, it is observed that the value of the rate constant for bulk killing of bacteria is more than one order of magnitude lower than that for surface/contact killing. Thus, the bacteria killing mechanism is dominated by the contact killing. One possible reason for this is that the mechanism of bacteria-surface interaction in contact killing is different from that in the bulk killing. Another possible reason may be that bacteria can interact with more Ag atoms at the surface than in the bulk. The killing kinetics in both surface and bulk killing are not truly first order with respect to either Ag or bacteria, as opposed to the claims made in most of the previous works (Biswas and Bandyopadhyaya, 2019; Pal et al., 2006; Zhao et al., 2013) in this area. The nonlinearity of the reaction order suggests that the interaction between bacteria and silver is quite complex.
There are many theories on the mechanism of interaction of silver with bacteria. According to most of the literature (Park et al., 2009), silver penetrates the cell wall of the bacteria, binds with sulphur and phosphorous containing proteins, and thus prevents cell division. In order to calculate the amount of Ag required to kill one CFU of bacteria and to understand whether Ag remains available after killing bacteria (i.e., active Ag or inactive Ag), simulations were carried out with different stoichiometries (\(j\)) of the Ag-bacteria interaction and without the loss of Ag activity, as shown in section 4. Overall, the calculated rate constants and orders of the Ag-AC interactions do not show any significant variation from each other. Thus, it is impossible to draw any conclusion about the Ag activity. This is probably because the amount of Ag used in the experiments is so high that even if there is a loss of a small amount of Ag due to inactivity, it does not affect the killing rate. It may be that the variation in the stoichiometric amount of Ag from 1 to 100 is not sufficient and probably an even higher order of variation is necessary to see any significant variation in killing rate between case A and case B. However, numerical convergence problems above a stoichiometric value of 100 made it impossible to explore higher stoichiometric interactions. The little variation in rate constants and order of the reaction on variation of the stoichiometric amount of Ag from 1 to 100 may also suggest that the Ag-AC interaction pathways or mechanism of interaction remains the same and is independent of the amount of Ag involved in the interaction. Both the models as represented by set I and set II do not show a significant difference in parameter values. This indicates that the mechanism of Ag release from the inner pore surface of AC does not impact the killing kinetics.
The value of the Ag diffusion coefficient obtained from this analysis is close to that reported in the literature for other similar metal atoms, ions, or molecules (Bilbao et al., 2016; Green and Southard, 2018). At least the calculated value is within the same order of magnitude. The equivalent diffusivity calculated from the second model is slightly lower as compared to that obtained from the first model. This is because the second model considers the tortuous nature of the actual pores in the AC. The literature reports that Ag is sparingly soluble. The solubility value obtained in this analysis seems to be reasonable for sparingly soluble metal ions or atoms (Green and Southard, 2018). The mass transfer coefficient calculated in this analysis is within the range of that reported in the literature for other materials or small molecules (Bilbao et al., 2016; Green and Southard, 2018).
Figure 5: Comparison of simulation results with the batch experimental data of Biswas and Bandyopadhyaya (2016) for Ag release. Fitted parameters are shown in **Table 1**.
Figure 6: Comparison of simulation results with the batch experimental data of Zhao et al. (2013) for Ag release. Fitted parameters are shown in **Table 1**.
## 6 Conclusion
The model developed based on species balance equations, including the effect of the distribution of Ag on the outer as well as the inner surface, with and without the loss of Ag activity, the stoichiometry of the Ag-bacteria interaction, and Ag-bacteria interaction on the surface as well as in the bulk, nicely captured the experimental data for the variation in bacteria killing and Ag release over time. It is found that the mechanism of release of Ag from the internal pores of AC does not influence the bacteria killing kinetics. Interestingly, it is found that the contact killing of bacteria dominates over the killing in the bulk liquid (leach killing). Within the limits of this model, it is also found that the activity of Ag remains preserved after interaction with bacteria (i.e., Ag becomes available again after the interaction) and that the Ag-bacteria interaction is not limited by the amount of Ag. The amount of Ag interacting with a bacterial CFU does not significantly influence the killing kinetics. Thankfully, the low solubility of Ag prevents the release/dissolution of surface Ag. The model suggests that the concentration of Ag on the outer surface controls the overall killing kinetics. The mathematical model presented herein can be further improved by incorporating the bacteria-Ag surface interaction mechanism, surface diffusion, the resistance offered by the liquid film around the AC surface to bacteria diffusion, the actual spatial distribution of Ag-AC composite particles in the liquid, mixing in the system, and discrete Ag particles on the AC surface instead of the continuous-film assumption.
## Acknowledgements
The author sincerely acknowledges the Department of Chemical Engineering at the Indian Institute of Technology Bombay for making this work possible.
|
2307.13573 | Scalar propagator in a background gluon field beyond the eikonal
approximation | We investigate the path integral representation of the scalar propagator in a
background gluon field, extending beyond the eikonal approximation by
considering all gauge field components and incorporating its $x^-$ dependence.
Utilizing the worldline formalism, we integrate the Schwinger proper time to
express the scalar propagator in light-cone coordinates, facilitating a direct
comparison with known results in the literature. The derived propagator
captures the change of longitudinal momentum of the projectile within the
medium. In the high-energy limit, our result simplifies to the effective gluon
propagator employed in the BDMPS-Z formalism. Hence, we propose that our
outcome serves as a foundational point for investigating corrections to the
BDMPS-Z spectrum arising from the longitudinal momentum transfer of the
radiated gluon with the medium, as well as for studying collisional energy loss
phenomena. Lastly, by employing an expansion around the classical saddle point
solution, we systematically derive an eikonal expansion in inverse powers of
the boost parameter, encompassing corrections related to longitudinal momentum
transfer and interactions of the projectile with the transverse component of
the field. | Pedro Agostini | 2023-07-25T15:26:48Z | http://arxiv.org/abs/2307.13573v2 | # Scalar propagator in a background gluon field beyond the eikonal approximation
###### Abstract
We investigate the path integral representation of the scalar propagator in a background gluon field, extending beyond the eikonal approximation by considering all gauge field components and incorporating its \(x^{-}\) dependence. Utilizing the worldline formalism, we integrate the Schwinger proper time to express the scalar propagator in light-cone coordinates, facilitating a direct comparison with known results in the literature. The derived propagator captures the change of longitudinal momentum of the projectile within the medium. In the high-energy limit, our result simplifies to the effective gluon propagator employed in the BDMPS-Z formalism. Hence, we propose that our outcome serves as a foundational point for investigating corrections to the BDMPS-Z spectrum arising from the longitudinal momentum transfer of the radiated gluon with the medium, as well as for studying collisional energy loss phenomena. Lastly, by employing an expansion around the classical saddle point solution, we systematically derive an eikonal expansion in inverse powers of the boost parameter, encompassing corrections related to longitudinal momentum transfer and interactions of the projectile with the transverse component of the field.
## 1 Introduction
The propagation of fast-moving particles in a classical background field, denoted by \(A^{\mu}(x)\), generated by an external source, is a crucial element in describing high-energy scattering processes. In high-energy Quantum Chromodynamics (QCD), the classical field is generated by a dense system that can come from two main scenarios: (i) a nucleus (cold nuclear matter) in proton-nucleus (\(p\)A) collisions or in nuclear deep inelastic scattering (DIS), and (ii) a hot medium produced after a heavy-ion collision.
In high-energy \(p\)A collisions, the projectile is typically treated as a dilute system composed of collinear partons, which behave as quasi-free particles. This is possible because the scale for partonic interactions is significantly larger than the interaction time due to
Lorentz dilation, allowing us to describe the collision in terms of the partonic structure of the projectile. However, the situation is different for the target, where gluons occupy a substantial portion of the phase space at high energies. This is a consequence of the rapid growth of gluon density at small values of \(x\) (higher energies) and higher values of the nucleus mass number \(A\). Nevertheless, this growth is expected to be tamed by non-linear effects, leading to a _saturation regime_[1; 2]. In this regime, the target can be effectively described by a classical field, and the Color Glass Condensate (CGC) emerges as a semi-classical effective field theory (EFT) [3; 4; 5; 6; 7] to describe this phenomenon.
On the other hand, numerous studies have been conducted to investigate final state interactions in heavy-ion collisions, with a key focus on understanding the significant suppression observed in large transverse momentum hadron spectra when compared to proton-proton collisions. This phenomenon, known as _jet quenching_, is attributed to the energy loss experienced by the initially produced hard parton. As the hard parton traverses through the dense hot medium formed after the collision, which can be described by a classical field, it emits gluons, leading to a partial depletion of its energy. The theory that describes this process of radiative parton energy loss is referred to as the Baier, Dokshitzer, Mueller, Peigne, Schiff, and Zakharov (BDMPS-Z) formalism [8; 9; 10; 11; 12].
In the study of initial stage effects using the Color Glass Condensate (CGC) effective field theory and final stage effects using the BDMPS-Z formalism, the propagation of partons in a classical background field is of utmost interest. Both approaches rely on the _eikonal approximation_, which exploits the high rapidity difference between the projectile and the target. Under this approximation, the field equations lead to a Weizsacker-Williams-like field, where only the longitudinal component is significant and independent of \(x^{-}\). Consequently, the longitudinal momentum of the partons remains conserved during the interaction due to the \(x^{-}\) independence of the field. The Lorentz contraction causes the color charge density of the nucleus to be concentrated around the light-cone \(x^{+}=0\), giving rise to the so-called shockwave approximation. In this scenario, the interaction between the projectile and the nucleus is described by the Wilson line along the nucleus's longitudinal direction, with only the longitudinal component of the field contributing. However, in the BDMPS-Z formalism, the shockwave approximation is relaxed to account for a finite extent of the medium. As a result, the propagation of the projectile within the medium becomes more intricate, characterized by transverse Brownian motion inside the medium together with a color rotation. Finally, the spin structure of the partonic current becomes negligible in the eikonal limit since it is suppressed by an inverse power of the longitudinal momentum.
Despite the eikonal approximation being well-justified and having yielded numerous successful phenomenological results in high-energy collisions, there is a growing interest in relaxing its strict application. This interest stems from the realization that while the eikonal approximation simplifies the structure of scattering amplitudes, it may also obscure important phenomena present in the scattering processes, as we will discuss further below. Furthermore, with the upcoming Electron-Ion Collider (EIC) at BNL, which will explore DIS at high but moderate energies, a regime where low-\(x\) physics is still relevant but non-eikonal effects may become significant, there has been a notable surge in the scientific
community's interest in studying the implications of non-eikonal corrections.
Various methods exist for relaxing the eikonal approximation. One such approach involves relaxing the assumption of infinite boost. In the context of the CGC framework, pioneering studies in this direction were conducted in [13; 14]. These investigations, similar to the analysis presented in this manuscript, focused on computing corrections to the finite width of the target by expanding the field at next-to-leading order in the saddle-point approximation. Subsequently, the study was extended to include improvements for the quark propagator by incorporating the transverse component of the background field [15] and accounting for its \(x^{-}\) dependence [16]. This approach has been employed to compute various observables beyond the eikonal approximation [17; 18; 19; 20; 21].
On the other hand, a notable challenge in QCD, known as the _proton spin puzzle_[22], arises from the discrepancy between theoretical predictions and experimental results showing that quarks carry a smaller fraction of the proton's spin than expected. The eikonal observables, being spin-independent, might not account for the "missing spin," which may reside in the low-\(x\) region of the proton's wave function. Consequently, there has been a growing interest in the low-\(x\) physics community [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] to explore the inclusion of (anti)quark exchanges in the background field of the target, leading to spin-dependent observables. In a series of papers [34; 35; 36; 37; 38; 39; 40; 41; 42], it has been investigated the effects of relaxing the eikonal approximation on the helicity and orbital angular momentum parton distributions at low-\(x\). Furthermore, recent works [43; 44; 45] have taken a step further by extending the non-linear evolution equations and the McLerran-Venugopalan (MV) model at next-to-eikonal order. In a related development, an effective Hamiltonian approach incorporating both quark and gluon fields within the background field has been presented in [46]. Additionally, some notable progress has been made in understanding the proton's spin structure using the worldline formalism [47; 48], as presented in [49; 50].
Finally, studies on the corrections to the eikonal approximation have also been relevant in the BDMPS-Z formalism. Calculations of parton energy loss using the eikonal approximation suffer from significant uncertainties when extrapolated to the full kinematic range and neglect collisional energy loss, those deficiencies were analyzed in [51; 52; 53; 54; 55]. Moreover, in [56; 57], the eikonal approximation is relaxed by incorporating a transverse flow model in the nuclear matter due to a finite transverse component of the gauge field, resulting in an anisotropic jet broadening distribution.
The aim of this manuscript is to expand upon the existing literature by relaxing the eikonal approximation in high-energy scattering amplitudes. To achieve this, we comprehensively analyze all components of the gauge field, including \(A^{+}\) and \(\mathbf{A}\), which are omitted in the eikonal approximation, along with the dynamical \(x^{-}\) dependence of the field. The novelty of our approach lies in presenting the result as a path integral in light-cone coordinates, enabling us to compute the scattering amplitude at arbitrary powers of the inverse of the energy. While the spin of the projectile is currently disregarded, we intend to incorporate it in future investigations. Therefore, our focus centers on studying the scalar propagator in the presence of a classical background field, which is the primary object in the calculation of scattering amplitudes when the spin is neglected. The path integral representation of the scalar propagator provides an initial framework for evaluating non
eikonal corrections at arbitrary orders and serves as the starting point for analyzing more complex quantities, such as the quark or gluon propagator.
The results presented within this manuscript are derived through a methodology analogous to that of [58, 59] and the references therein. This derivation is grounded in the framework of the worldline formalism, to be introduced in Section 2.1. Notably, a distinguishing feature of our approach lies in the utilization of light-cone coordinates. Specifically, the retarded scalar propagator obtained in this work, which is explained in detail in Section 2.2, can be expressed as follows:
\[\Delta_{R}(x,y) =\Theta(x^{+}-y^{+})\int\mathcal{D}p^{+}(\tau^{+})\frac{\Theta( \langle p^{+}\rangle)}{2\langle p^{+}\rangle}\int_{\vec{y}}^{\vec{x}}\mathcal{ D}^{3}\vec{z}(\tau^{+})\] \[\times\exp\left\{i\int_{y^{+}}^{x^{+}}d\tau^{+}\left[-\frac{m^{2 }}{2\langle p^{+}\rangle}+\frac{\langle p^{+}\rangle}{2}\dot{\mathbf{z}}^{2}-p ^{+}\dot{z}^{-}\right]\right\}\mathcal{U}_{[x^{+},y^{+}]}[z^{+},\vec{z}], \tag{1}\]
with the Wilson line given by
\[\mathcal{U}_{[x^{+},y^{+}]}[z^{+}(\tau^{+}),\vec{z}(\tau^{+})]=\mathcal{P}_{+ }\exp\left\{-ig\int_{y^{+}}^{x^{+}}d\tau^{+}\left[A^{-}(z^{+},\vec{z})\frac{p^ {+}}{\langle p^{+}\rangle}+\vec{A}\big{(}z^{+},\vec{z}\big{)}\cdot\dot{\vec{z}} \right]\right\}, \tag{2}\]
where \(\vec{A}\equiv(A^{+},\mathbf{A})\).
The expression in Eq. (1) describes the propagation of a particle with mass \(m\), following a trajectory \(\vec{z}(\tau^{+})\) within the medium, where \(\vec{z}\equiv(z^{-},\mathbf{z})\). As the particle undergoes multiple transverse scatterings inside the medium, its transverse path \(\mathbf{z}(\tau^{+})\) resembles a Brownian motion. On the other hand, due to the dynamical \(z^{-}\) dependence of the gauge field, the medium imparts longitudinal momentum to the projectile, resulting in a non-constant longitudinal momentum \(p^{+}(\tau^{+})\) along the path. The quantity \(\langle p^{+}\rangle\) represents the mean longitudinal momentum of the particle within the medium. Consequently, the particle follows a finite and non-constant longitudinal trajectory \(z^{-}(\tau^{+})\) within the medium. In the case where the field is time (\(z^{-}\)) independent and solely longitudinal (\(\vec{A}=0\)), Eq. (1) simplifies to the effective propagator utilized in the BDMPS-Z formalism.
While Eq. (1) is formulated for a generic field, specifying a model for the gauge field enables the calculation of corrections to the BDMPS-Z spectrum arising from the longitudinal momentum exchange between the radiated gluon and the non-eikonal field. Furthermore, by considering the non-constancy of the projectile's longitudinal momentum, this propagator facilitates the investigation of collisional energy loss, assuming a vanishing parton's spin. Lastly, an eikonal expansion in powers of inverse energy can be systematically obtained by expanding Eq. (1) around its saddle point solution.
This manuscript is organized as follows: In Section 2.1, we introduce the well established result of the Feynman propagator for a scalar field in the presence of a background field using the worldline formalism. In Section 2.2, we present the Feynman scalar propagator expressed in terms of light-cone coordinates, establishing a clearer connection with known expressions used in the existing literature. Furthermore, we compare our result with the effective scalar propagator utilized in the jet quenching formalism. Additionally, we compute the propagator when the medium has a finite extent, which is relevant for
the study of \(p\)A collisions, as well as DIS. Moving forward, in Section 3, we perform an expansion of the path integral around its classical saddle point solution and introduce a systematic approach to calculate eikonal corrections to the scalar propagator. Lastly, in Section 4, we provide a summary of our findings and present prospects for future research.
## 2 The scalar propagator in a background non-Abelian field
Although the scalar propagator does not directly influence QCD observables, it plays a crucial role in processes where the spin of quarks is negligible, as observed in ultra high-energy collisions. Moreover, its derivation lays the groundwork for computing more complex objects, such as the quark propagator, which involves the incorporation of the so-called spin factor [60; 61] or gluon propagator. In this section, we will comprehensively examine the path integral representation of the scalar propagator within a classical field using the worldline formalism and subsequently express it in light-cone coordinates. This latter simplification will enable us to establish a more lucid connection between the worldline formalism and the conventional notation commonly employed in small-\(x\) physics.
### 2.1 The scalar propagator in the worldline formalism
Let us consider a scalar field \(\phi(x)\) with mass \(m\) interacting with a non-abelian classical field \(A^{\mu}(x)=T^{a}_{\mathcal{R}}A^{\mu}_{a}(x)\), where \(T^{a}_{\mathcal{R}}\) are the generators of the gauge group in the representation \(\mathcal{R}\). The Lagrangian describing this theory is given by
\[\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+|D_{\mu}\phi|^{2}-m^{2}|\phi|^{2}, \tag{1}\]
where \(F_{\mu\nu}\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+ig[A_{\mu},A_{ \nu}]\) is the field strength tensor, and \(D_{\mu}\equiv\partial_{\mu}+igA_{\mu}\) represents the covariant derivative.
The Feynman propagator of the scalar field in a 4-dimensional space-time1, \(\Delta_{F}(x,y)\), is given by the Klein-Gordon equation in the presence of a gauge field:
Footnote 1: In this manuscript we use the metric \(g^{\mu\nu}=\text{diag}(1,-1,-1,-1)\).
\[(D^{\mu}D_{\mu}+m^{2})\Delta_{F}(x,y)=-i\delta^{(4)}(x-y). \tag{2}\]
To represent this equation in integral form, we introduce the operator \(\hat{H}=D^{\mu}D_{\mu}+m^{2}\), which acts on the Hilbert space of square integrable functions. The Green's function operator is defined as
\[\hat{\Delta}_{F}=-i\hat{H}^{-1}. \tag{3}\]
Therefore, the inverse of the operator \(i\hat{H}\) can be expressed using the Schwinger representation:
\[\hat{\Delta}_{F}=\int_{0}^{\infty}dTe^{-iT(\hat{H}-i\epsilon)}, \tag{4}\]
where the inclusion of \(i\epsilon\) ensures the convergence of the integral. Here, \(e^{-iT\hat{H}}\) is a unitary operator satisfying the Schrodinger equation, allowing us to interpret \(\hat{H}\) as the Hamiltonian
of a quantum system, and \(T\) as an internal time coordinate known as the Schwinger proper time.
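This representation can be verified directly: because the spectrum of \(\hat{H}-i\epsilon\) lies just below the real axis, the proper-time integral converges and reproduces the inverse,

\[\int_{0}^{\infty}dT\,e^{-iT(\hat{H}-i\epsilon)}=\frac{-i}{\hat{H}-i\epsilon}\ \xrightarrow{\ \epsilon\to 0\ }\ -i\hat{H}^{-1}=\hat{\Delta}_{F}.\]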
By introducing the complete set of single-particle states \(\left|x\right\rangle\) in the Hilbert space, we can express the scalar propagator as the matrix element
\[\Delta_{F}(x,y)=\left\langle x|\hat{\Delta}_{F}|y\right\rangle. \tag{5}\]
Consequently, the propagator describes the evolution of a scalar particle from the space-time point \(y\) at proper time \(t=0\) to a point \(x\) at \(t=T\), accounting for all possible interactions with the background field \(A^{\mu}\) across different values of \(T\):
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dT\left\langle x|e^{-iT(\hat{H}-i\epsilon)}|y \right\rangle. \tag{6}\]
Analogously to non-relativistic quantum mechanics, we can represent the propagator as a path integral by introducing the complete set of momentum states \(\left|p\right\rangle\) satisfying
\[\left\langle x|p\right\rangle=e^{ix\cdot p},\qquad\int d^{4}x\left|x\right\rangle \left\langle x\right|=\int\frac{d^{4}p}{(2\pi)^{4}}\left|p\right\rangle\left \langle p\right|=1, \tag{7}\]
as well as slicing the Schwinger proper time into \(N\) steps of length \(\Delta t=\frac{T}{N}\) such that
\[e^{-i(\hat{H}-i\epsilon)T}=e^{-i(\hat{H}-i\epsilon)\sum_{n=1}^{N}\Delta t}= \prod_{n=1}^{N}e^{-i(\hat{H}-i\epsilon)\Delta t+\mathcal{O}(\Delta t^{2})}. \tag{8}\]
Thus, inserting \(N\) and \(N-1\) spectral decomposition in terms of momentum and coordinate eigenstates, respectively, in Eq. (6) we can write the scalar propagator as
\[\Delta_{F}(x,y) =\int_{0}^{\infty}dT\left\langle x|e^{-iT(\hat{H}-i\epsilon)}|y\right\rangle\] \[=\int_{0}^{\infty}dT\lim_{N\rightarrow\infty}\int\prod_{k=1}^{N-1 }d^{4}z_{k}\int\prod_{n=1}^{N}\frac{d^{4}p_{n}}{(2\pi)^{4}}\left\langle z_{n} |e^{-i\Delta t(\hat{H}-i\epsilon)}|p_{n}\right\rangle\left\langle p_{n}|z_{n -1}\right\rangle\] \[=\int_{0}^{\infty}dT\lim_{N\rightarrow\infty}\int\prod_{k=1}^{N-1 }d^{4}z_{k}\int\prod_{n=1}^{N}\frac{d^{4}p_{n}}{(2\pi)^{4}}\] \[\qquad\qquad\times\mathcal{P}\exp\left\{i\sum_{l=1}^{N}\Delta t \left[\left(p_{l}^{\mu}+gA^{\mu}(z_{l})\right)^{2}-m_{\epsilon}^{2}+ip_{l} \cdot\frac{z_{l}-z_{l-1}}{\Delta t}\right]\right\}, \tag{9}\]
where we have defined \(z_{0}\equiv y\), \(z_{N}\equiv x\) and \(m_{\epsilon}^{2}\equiv m^{2}-i\epsilon\). The path ordering operator \(\mathcal{P}\) is introduced to account for the fact that \(A_{\mu}(z_{l})\) does not commute at different values of \(z\), or equivalently, of the proper time. We have also used the fact that the matrix element of the Weyl ordered Hamiltonian is \(\left\langle x|\hat{H}(\hat{x},\hat{p})|p\right\rangle=H(x,p)\left\langle x|p\right\rangle\), where \(H(x,p)\) is a c-number obtained by replacing the operators with their corresponding variables, such that
\[\left\langle x|e^{-i(\hat{H}-i\epsilon)\Delta t}|p\right\rangle=e^{i\left[(p_{ \mu}+gA_{\mu}(x))^{2}-m^{2}+i\epsilon\right]\Delta t+\mathcal{O}(\Delta t^{2} )}e^{ix\cdot p}. \tag{10}\]
Finally, taking the continuous \(N\to\infty\) limit, the propagator can be written in its phase space path integral representation:
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dTe^{-im_{\epsilon}^{2}T}\int_{z(0)=y}^{z(T)=x}\mathcal{D}^{4}z(t)\int\mathcal{D}^{4}p(t)\ e^{i\int_{0}^{T}dt(p^{2}+p\cdot\dot{z})}\ \mathcal{P}e^{-ig\int_{0}^{T}dtA^{\mu}(z)\dot{z}_{\mu}}, \tag{11}\]
where we have shifted the 4-momentum, defined \(\dot{z}^{\mu}=dz^{\mu}/dt\) and introduced the path integral measure:
\[\mathcal{D}^{4}z(t)\equiv\lim_{N\to\infty}\prod_{n=1}^{N-1}d^{4}z_{n},\qquad\mathcal{D}^{4}p(t)\equiv\lim_{N\to\infty}\prod_{n=1}^{N}\frac{d^{4}p_{n}}{(2\pi)^{4}}, \tag{12}\]
which accounts for all possible configurations of the trajectories \(z(t)\) and \(p(t)\). We also have omitted the dependence of \(p^{\mu}(t)\) and \(z^{\mu}(t)\) on the path parameter \(t\) in the integrand of Eq. (11). From now on, unless necessary for clarification, we will not write the dependence on the curve parameter explicitly.
Alternatively, the 4-momenta in Eq. (11) can be integrated out as they are analytic continuation of gaussian integrals. The result after integrating over \(p\) reads
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dT\lim_{N\to\infty}\left(\frac{i }{4\pi\Delta t}\right)^{2N}\int\prod_{k=1}^{N-1}d^{4}z_{k}\\ \times\mathcal{P}\exp\left\{-i\sum_{l=1}^{N}\Delta t\left[\frac{ \left(z_{l}-z_{l-1}\right)^{2}}{4\Delta t^{2}}+m_{\epsilon}^{2}+gA(z_{l}) \cdot\frac{z_{l}-z_{l-1}}{\Delta t}\right]\right\}. \tag{13}\]
Taking the continuous limit, we obtain the configuration space path integral representation of the scalar propagator in a background field in terms of the worldline \(z(t)\):
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dTe^{-im_{\epsilon}^{2}T}\int_{z(0)=y}^{z(T)=x}\mathcal{D}^{4}z\ e^{-i\int_{0}^{T}dt\frac{\dot{z}^{2}}{4}}\ \mathcal{P}e^{-ig\int_{0}^{T}dtA^{\mu}(z)\dot{z}_{\mu}}, \tag{14}\]
where now, as usual in the path integral formalism, the path integral measure when the momentum is integrated out is normalized by a factor \(\left(-4\pi i\Delta t\right)^{-2N}\) which regulates the integral in its discrete representation.
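For reference, the step leading from Eq. (11) to Eqs. (13)-(14) uses, slice by slice, the analytically continued Gaussian integral

\[\int\frac{d^{4}p_{n}}{(2\pi)^{4}}\,\exp\Big\{i\Delta t\,p_{n}^{2}+ip_{n}\cdot(z_{n}-z_{n-1})\Big\}\ \propto\ \exp\Big\{-i\,\frac{(z_{n}-z_{n-1})^{2}}{4\Delta t}\Big\},\]

with the \(\Delta t\)-dependent normalization absorbed into the prefactor quoted above.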
Equations (11) and (14) are the worldline path integral representation of the propagator of a scalar particle coupled to a classical background field. This approach is the basis of the worldline formalism [47; 48] where one replaces the Quantum Field Theory (QFT) path integral with a path integral over the worldline of a single particle, coupled to a background field.2 This simplifies the calculation of QFT amplitudes by reducing the problem to a one-dimensional problem in the worldline parameter. The main advantage of Eqs. (11) and (14) in scattering processes with respect to the usual perturbative
approach is that instead of summing over all possible Feynman diagrams that contribute to the process we just represent the scattering amplitude as a resummed and exponentiated quantity.
The worldline path integral is written in terms of the particle's position and momentum as a function of the worldline parameter. By integrating over all possible paths of the particle in the presence of the background field, we can compute the scattering amplitude for the process of interest. Moreover, we can read from Eq. (14) that the part of the propagator that depends on the background gauge field is given by the Wilson line along the path \(z(t)\):
\[\mathcal{U}_{[T,0]}[z(t)]=\mathcal{P}\exp\left\{-ig\int_{0}^{T}dtA^{\mu}(z) \dot{z}_{\mu}\right\}=\mathcal{P}\exp\left\{-ig\int_{y}^{x}dz_{\mu}A^{\mu}(z) \right\}, \tag{15}\]
which represents the phase accumulated by the particle's wave function as it travels through the medium. Indeed, by expanding the path ordered exponential, and using the shorthand notation \(z_{i}\equiv z(t_{i})\):
\[\mathcal{U}_{[b,a]}[z]=\sum_{n=0}^{\infty}(-ig)^{n}\int_{a}^{b} dt_{n}\int_{a}^{t_{n}}dt_{n-1}\cdots\int_{a}^{t_{2}}dt_{1}\] \[\times\dot{z}^{\mu_{n}}(t_{n})A_{\mu_{n}}(z_{n})\dot{z}^{\mu_{n-1 }}(t_{n-1})A_{\mu_{n-1}}(z_{n-1})\cdots\dot{z}^{\mu_{1}}(t_{1})A_{\mu_{1}}(z_{ 1}), \tag{16}\]
we see that the \(n^{\rm th}\) order term in the expansion represents the interaction of the worldline with \(n\) gluons at position \(z(t_{i})\) (\(i=1,\ldots,n\)), see Fig. 1. Therefore the exponential is a resummation of all possible interactions of the path with the gluons emitted by the external source. When considering a Wilson line in the fundamental (adjoint) representation, it can also describe a fast-moving quark (gluon) interacting with the medium. In such cases, we assume that the longitudinal momentum transfer arising from the particle's interaction with the background gluons is negligible, which corresponds to the eikonal approximation discussed in Section 3.
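For numerical estimates, the path-ordered exponential in Eq. (15) is typically approximated by an ordered product of short-time exponentials. A minimal sketch is given below (Python; the matrix-valued function `M_of_t`, the midpoint discretization, and the ordering convention are illustrative assumptions rather than prescriptions from this work):

```python
import numpy as np
from scipy.linalg import expm

def wilson_line(M_of_t, t_grid, g=1.0):
    """Discretized path-ordered exponential U = P exp{-i g \int dt M(t)},
    with M(t) = A_mu(z(t)) zdot^mu(t) a colour-matrix-valued function supplied
    by the caller.  Later times act from the left, matching the ordering
    implied by dU/dt = -i g M(t) U(t) with U(0) = 1."""
    U = np.eye(M_of_t(t_grid[0]).shape[0], dtype=complex)
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        t_mid = 0.5 * (t_grid[k] + t_grid[k + 1])   # midpoint rule per slice
        U = expm(-1j * g * dt * M_of_t(t_mid)) @ U
    return U

# usage sketch with an illustrative 2x2 (SU(2)-like) field along the path:
# sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
# M = lambda t: np.cos(t) * sigma3 / 2
# U = wilson_line(M, np.linspace(0.0, 1.0, 200))
```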
### 2.2 The scalar propagator in light-cone coordinates
In the preceding section, we derived the path integral representation of the scalar propagator using Lorentz-invariant equations. Now, we take a different approach by expressing
Figure 1: Picture of the \(n^{\rm th}\) term in the expansion of the Wilson line \(U_{[b,a]}[z]\) over a path \(z^{\mu}(t)\) (\(t\in[a,b]\)). The worldline interacts with gluons generated by the medium at the coordinates \(z(t_{i})\) (abbreviated as \(z_{i}^{+}\)).
both the worldline and gauge field in terms of light-cone (LC) coordinates. LC coordinates are particularly advantageous in high-energy scattering processes where longitudinal momenta play a dominant role. By separating the 4-momentum components into longitudinal and transverse parts, we can identify the leading terms in the scattering amplitude. In LC coordinates, we define \(x^{\mu}=(x^{+},x^{-},{\bf x})\), where \(x^{\pm}=(x^{0}\pm x^{3})/\sqrt{2}\) and \({\bf x}=(x^{1},x^{2})\). The dot product in these coordinates is given by \(x\cdot y=x^{+}y^{-}+x^{-}y^{+}-{\bf x}\cdot{\bf y}\).3
Footnote 3: In this manuscript, we adopt the convention that the dot product, when applied to transverse coordinates, is Euclidean: \({\bf x}\cdot{\bf y}={\bf x}^{i}{\bf y}^{i}\neq{\bf x}^{i}{\bf y}_{i}\).
Using LC coordinates, we can rewrite the scalar propagator given in Eq. (11) as follows:
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dT\int_{z(0)=y}^{z(1)=x}{\cal D} ^{4}z\int{\cal D}^{4}p\] \[\quad\times\exp\left\{i\int_{0}^{1}d\tau\left[p^{-}(2Tp^{+}-\dot{ z}^{+})-T({\bf p}^{2}+m_{\epsilon}^{2})-p^{+}\dot{z}^{-}+{\bf p}\cdot\dot{{ \bf z}}\right]\right\}{\cal U}_{[1,0]}[z(\tau)], \tag{17}\]
where we have made the change of variables \(p^{\mu}\to-p^{\mu}\) for convenience, and we re-parametrized the worldline using a dimensionless parameter. Specifically, we performed the reparametrization4\(\tau=t/T\), with \(\tau\in[0,1]\).
Footnote 4: We should note that the Wilson line remains invariant under reparametrizations. Thus, we adopt the convention that the boundaries specified in its sub-index have the same dimensions as the path parameter. In other words,
\[{\cal U}_{[T,0]}[z(t)] ={\cal P}_{t}\exp\left\{-ig\int_{0}^{T}dt\frac{dz^{\mu}(t)}{dt}A_ {\mu}[z(t)]\right\}={\cal P}_{\tau}\exp\left\{-ig\int_{0}^{1}d\tau\frac{dz^{ \mu}(\tau)}{d\tau}A_{\mu}[z(\tau)]\right\}\] \[={\cal U}_{[1,0]}[z(\tau)]. \tag{18}\]
Footnote 5: In this manuscript, we have chosen to fix the \(z^{+}\) trajectory by integrating over \(p^{-}\). However, it is important to note that the argument of the exponential in Eq. (19) is symmetric in the \(+\) and \(-\) components. Hence, an alternative approach would be to fix the \(z^{-}\) trajectory by integrating over \(p^{+}\). The reason for adopting our current convention is that we later assume the particle is boosted in the right direction, allowing us to interpret \(p^{+}\) and \(p^{-}\) as the longitudinal momentum and light-cone energy of the particle, respectively, while also regarding \(z^{+}\) as the light-cone time.
As the argument of the exponential in Eq. (17) is linear in \(p^{-}\), the \(p^{-}\) path integral yields a Dirac delta function that fixes \(\dot{z}^{+}(\tau)=2Tp^{+}(\tau)\). This constraint determines the trajectory5\(z^{+}(\tau)\), which we refer to as the _light-cone time_:
Footnote 5: In this manuscript, we have chosen to fix the \(z^{+}\) trajectory by integrating over \(p^{-}\). However, it is important to note that the argument of the exponential in Eq. (19) is symmetric in the \(+\) and \(-\) components. Hence, an alternative approach would be to fix the \(z^{-}\) trajectory by integrating over \(p^{+}\). The reason for adopting our current convention is that we later assume the particle is boosted in the right direction, allowing us to interpret \(p^{+}\) and \(p^{-}\) as the longitudinal momentum and light-cone energy of the particle, respectively, while also regarding \(z^{+}\) as the light-cone time.
\[z^{+}(\tau)=y^{+}+2T\int_{0}^{\tau}d\tau^{\prime}p^{+}(\tau^{\prime}), \tag{20}\]
where we have used the fact that \(z^{+}(0)=y^{+}\). Furthermore, since \(z^{+}(1)=x^{+}\) is fixed, Eq. (20) also introduces the constraint \(x^{+}-y^{+}=2T\langle p^{+}\rangle\), where
\[\langle p^{+}\rangle=\int_{0}^{1}d\tau p^{+}(\tau), \tag{21}\]
is the average longitudinal momentum over the particle path. Hence, the Schwinger proper time can be written in terms of the longitudinal separation and \(\langle p^{+}\rangle\) as \(T=(x^{+}-y^{+})/2\langle p^{+}\rangle\).
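Schematically, the \(p^{-}\) integration generates a functional delta,

\[\int\mathcal{D}p^{-}(\tau)\,\exp\left\{i\int_{0}^{1}d\tau\,p^{-}\big(2Tp^{+}-\dot{z}^{+}\big)\right\}\ \propto\ \prod_{\tau}\delta\big(2Tp^{+}(\tau)-\dot{z}^{+}(\tau)\big),\]

which enforces \(\dot{z}^{+}(\tau)=2Tp^{+}(\tau)\) at every \(\tau\) and, together with the fixed endpoint \(z^{+}(1)=x^{+}\), reproduces the constraint quoted above.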
Indeed, in Appendix A, we explicitly perform the \(p^{-}\) and \(z^{+}\) path integrals, which leads to the following result:
\[\Delta_{F}(x,y) =\int_{0}^{\infty}dT\int_{\vec{z}(0)=\vec{y}}^{\vec{z}(1)=\vec{x}} \mathcal{D}^{3}\vec{z}(\tau)\int\mathcal{D}^{3}\vec{p}(\tau)\ \delta\left(2\langle p^{+}\rangle T-(x^{+}-y^{+})\right)\] \[\qquad\qquad\qquad\times\exp\left\{-i\int_{0}^{1}d\tau\left[\hat{p }^{-}\dot{z}^{+}+\vec{p}\cdot\dot{\vec{z}}\right]\right\}\mathcal{U}_{[1,0]}[z^ {+}(\tau),\vec{z}(\tau)], \tag{21}\]
where we have defined the on-shell light-cone energy \(\hat{p}^{-}=(\mathbf{p}^{2}+m_{\epsilon}^{2})/2p^{+}\), \(\vec{z}\equiv(z^{-},\mathbf{z})\), \(\vec{p}\equiv(p^{+},\mathbf{p})\) and the 3-dimensional dot product as \(\vec{p}\cdot\vec{z}=p^{+}z^{-}-\mathbf{p}\cdot\mathbf{z}\). We emphasize that in Eq. (21), the LC time path \(z^{+}(\tau)\) is fixed by Eq. (20) and, therefore, \(\dot{z}^{+}(\tau)=2Tp^{+}(\tau)\).
Thanks to the Dirac delta, when the average longitudinal momentum is finite (\(\langle p^{+}\rangle\neq 0\)), the integral over \(T\) becomes straightforward. On the other hand, configurations with \(\langle p^{+}\rangle=0\) correspond to cases where the particle does not propagate in the \(z^{+}\) direction, i.e., \(x^{+}=y^{+}\). Generally, when using the propagator as an external line in a physical process, we can justify the assumption \(x^{+}\neq y^{+}\) since at least one of the propagator legs extends to infinity once we amputate it. However, when the propagator is an internal line, as in loop calculations within the medium, there exists a region in the phase space of integration where \(x^{+}=y^{+}\) (or equivalently, \(\langle p^{+}\rangle=0\)). In such cases, the expressions presented in this manuscript are incomplete.
To proceed with our analysis, we adopt the convention followed in [15; 16; 23; 25] and examine the propagator in the case where \(x^{+}\neq y^{+}\). Thus, we can perform the integration over \(T\), and the scalar propagator becomes:
\[\Delta_{F}(x,y)=\int_{\vec{z}(0)=\vec{y}}^{\vec{z}(1)=\vec{x}} \mathcal{D}^{3}\vec{z}(\tau)\int\mathcal{D}^{3}\vec{p}(\tau)\ \frac{\Theta(x^{+}-y^{+})\Theta(\langle p^{+}\rangle)-\Theta(y^{+}-x^{+}) \Theta(-\langle p^{+}\rangle)}{2\langle p^{+}\rangle}\\ \times\exp\left\{-i\int_{0}^{1}d\tau\left[\frac{\mathbf{p}^{2}+m_ {\epsilon}^{2}}{2\langle p^{+}\rangle}(x^{+}-y^{+})+\vec{p}\cdot\dot{\vec{z}} \right]\right\}\mathcal{U}_{[1,0]}[z^{+}(\tau),\vec{z}(\tau)]. \tag{22}\]
To express the path parameter in units of time, as is customary in high-energy QCD, we perform the reparametrization of the dimensionless proper time \(\tau\) as follows:6\(\tau\to\tau^{+}(\tau)=y^{+}+\tau(x^{+}-y^{+})\), with \(\tau^{+}(0)=y^{+}\) and \(\tau^{+}(1)=x^{+}\). Under this reparametrization, the path-ordered operator in the definition of the Wilson line becomes \(\mathcal{P}_{\tau}\to\Theta(x^{+}-y^{+})\mathcal{P}_{+}+\Theta(y^{+}-x^{+}) \bar{\mathcal{P}}_{+}\), where \(\mathcal{P}_{+}\) represents the path-ordered operator along the direction \(\tau^{+}\), and \(\bar{\mathcal{P}}_{+}\) denotes the anti path-ordered operator. Thus, Eq. (22) takes the form:
Footnote 6: Although it might seem tempting to define the light-cone time, \(z^{+}\), as the path parameter, as is often done in the high-energy limit of QCD, the reparametrization \(\tau\to z^{+}(\tau)=y^{+}+(x^{+}-y^{+})\int_{0}^{\tau}d\tau^{\prime}\frac{p^{ +}(\tau^{\prime})}{(p^{+})}\) is not well-defined. This is because \(\dot{z}^{+}(\tau)\propto p^{+}(\tau)\), and the resulting transformation is not a diffeomorphism; the sign of \(\dot{z}^{+}(\tau)\) may change in the domain \([y^{+},x^{+}]\).
\[\Delta_{F}(x,y) =\int_{\vec{z}(y^{+})=\vec{y}}^{\vec{z}(x^{+})=\vec{x}}\mathcal{D} ^{3}\vec{z}(\tau^{+})\int\mathcal{D}^{3}\vec{p}(\tau^{+})\ \frac{1}{2\langle p^{+}\rangle}\exp\left\{-i\int_{y^{+}}^{x^{+}}d\tau^{+} \left[\hat{p}^{-}\dot{z}^{+}+\vec{p}\cdot\dot{\vec{z}}\right]\right\}\] \[\times\left[\Theta(\langle p^{+}\rangle)\Theta(x^{+}-y^{+}) \mathcal{P}_{+}-\Theta(-\langle p^{+}\rangle)\Theta(y^{+}-x^{+})\bar{ \mathcal{P}}_{+}\right]\] \[\times\exp\left\{-ig\int_{y^{+}}^{x^{+}}d\tau^{+}\left[A^{-} \big{(}z^{+},\vec{z}\big{)}\dot{z}^{+}+\vec{A}\big{(}z^{+},\vec{z}\big{)} \cdot\dot{\vec{z}}\right]\right\}, \tag{23}\]
where the LC time, in terms of \(\tau^{+}\), is given by
\[z^{+}(\tau^{+})=y^{+}+\int_{y^{+}}^{\tau^{+}}\frac{p^{+}(\bar{\tau}^{+})}{\langle p ^{+}\rangle}d\bar{\tau}^{+}=x^{+}-\int_{\tau^{+}}^{x^{+}}\frac{p^{+}(\bar{\tau}^ {+})}{\langle p^{+}\rangle}d\bar{\tau}^{+}, \tag{24}\]
with \(\dot{z}^{+}(\tau^{+})=p^{+}(\tau^{+})/\langle p^{+}\rangle\), \(z^{+}(y^{+})=y^{+}\), \(z^{+}(x^{+})=x^{+}\), and the average longitudinal momentum by
\[\langle p^{+}\rangle=\frac{1}{x^{+}-y^{+}}\int_{y^{+}}^{x^{+}}d\tau^{+}p^{+}( \tau^{+}). \tag{25}\]
Since the increment \(d\tau^{+}\) of the reparametrized variable \(\tau^{+}\) has a well-defined sign, we refer to \(\tau^{+}\) as the _light-cone proper time_. On the other hand, we have \(dz^{+}=d\tau^{+}p^{+}(\tau^{+})/\langle p^{+}\rangle\), where \(d\tau^{+}/\langle p^{+}\rangle>0\). Hence, we encounter two scenarios: if \(p^{+}(\tau^{+})>0\), the particle travels forward in light-cone time, while if \(p^{+}(\tau^{+})<0\), it travels backward in light-cone time. Furthermore, in the case where \(p^{+}(\tau^{+})\) remains constant along the path, as is commonly assumed in high-energy QCD, we find that the light-cone time and proper time coincide, \(z^{+}=\tau^{+}\).
Finally, we use the fact that, given a path \(z^{\mu}(\lambda)\), the Hermitian conjugate of the Wilson line is
\[\mathcal{U}^{\dagger}_{[b,a]}=\bar{\mathcal{P}}\exp\left\{ig\int_{a}^{b}d \lambda A\cdot\dot{z}\right\}. \tag{26}\]
Thus, we can rewrite Eq. (23) as
\[\Delta_{F}(x,y)=\int\frac{\mathcal{D}^{3}\vec{p}(\tau^{+})}{2 \langle p^{+}\rangle}\int_{\vec{y}}^{\vec{x}}\mathcal{D}^{3}\vec{z}(\tau^{+}) \ \exp\Bigg{\{}-i\int_{y^{+}}^{x^{+}}d\tau^{+}\left[\hat{p}^{-}\dot{z}^{+}+\vec{p} \cdot\dot{\vec{z}}\right]\Bigg{\}}\\ \times\left[\Theta(\langle p^{+}\rangle)\Theta(x^{+}-y^{+}) \mathcal{U}_{[x^{+},y^{+}]}[z^{+},\vec{z}]-\Theta(y^{+}-x^{+})\Theta(-\langle p ^{+}\rangle)\mathcal{U}^{\dagger}_{[y^{+},x^{+}]}[z^{+},\vec{z}]\right], \tag{27}\]
where
\[\mathcal{U}_{[x^{+},y^{+}]}[z^{+}(\tau^{+}),\vec{z}(\tau^{+})]\\ =\mathcal{P}_{+}\exp\left\{-ig\int_{y^{+}}^{x^{+}}d\tau^{+}\left[ A^{-}(z^{+},\vec{z})\frac{p^{+}}{\langle p^{+}\rangle}+\vec{A}\big{(}z^{+}, \vec{z}\big{)}\cdot\dot{\vec{z}}\right]\right\}. \tag{28}\]
Analogously to Eq. (14), we can solve the Gaussian path integral in \(\mathbf{p}\) with the help of Eq. (101). By doing that, we obtain the scalar propagator in the _configuration space path integral representation_:
\[\Delta_{F}(x,y)=\int\frac{\mathcal{D}p^{+}(\tau^{+})}{2\langle p^ {+}\rangle}\int_{\vec{y}}^{\vec{x}}\mathcal{D}^{3}\vec{z}(\tau^{+})\ \exp\Bigg{\{}i\int_{y^{+}}^{x^{+}}d\tau^{+}\left[-\frac{m_{\epsilon}^{2}}{2 \langle p^{+}\rangle}+\frac{\langle p^{+}\rangle}{2}\dot{\mathbf{z}}^{2}-p^{+} \dot{z}^{-}\right]\Bigg{\}}\\ \times\left[\Theta(\langle p^{+}\rangle)\Theta(x^{+}-y^{+}) \mathcal{U}_{[x^{+},y^{+}]}[z^{+},\vec{z}]-\Theta(y^{+}-x^{+})\Theta(-\langle p ^{+}\rangle)\mathcal{U}^{\dagger}_{[y^{+},x^{+}]}[z^{+},\vec{z}]\right]. \tag{29}\]
Equations (27) and (29) are the main results of this section and serve as the starting point for the subsequent analysis. In comparison to the worldline representation given by Eqs. (11) and (14), this new representation offers two key advantages: (i) the integration
over the Schwinger proper time \(T\) has been performed, simplifying the computation of the propagator, and (ii) the explicit dependence on the longitudinal momentum \(p^{+}\) is revealed, making it well-suited for studying high-energy scattering processes. However, it is important to note that this representation is not applicable when \(x^{+}=y^{+}\).
In summary, Eq. (29) describes the propagation of a scalar particle from \(y^{\mu}\) to \(x^{\mu}\) in the presence of a background medium. As a result of multiple transverse scatterings, the particle follows a Brownian path, denoted as \({\bf z}(\tau^{+})\), in the transverse plane. Moreover, the presence of a \(z^{-}\) dependence in the gauge field causes the particle's longitudinal momentum inside the medium, denoted as \(p^{+}(\tau^{+})\), to vary along its trajectory, leading to a non-trivial \(z^{-}(\tau^{+})\) path. Additionally, due to longitudinal interactions with the medium, the LC time \(z^{+}(\tau^{+})\) of the particle may not necessarily increase monotonically and can even exhibit backward motion when the longitudinal momentum transfer with the medium is negative.
In general, we are interested in processes where \(x^{+}>y^{+}\) so that only the retarded component of the Feynman propagator contributes. For this reason, in order to simplify our analysis in the subsequent section we will study the retarded propagator:
\[\Delta_{R}(x,y) =\Theta(x^{+}-y^{+})\int{\cal D}p^{+}(\tau^{+})\frac{\Theta( \langle p^{+}\rangle)}{2\langle p^{+}\rangle}\int_{\vec{y}}^{\vec{x}}{\cal D} ^{3}\vec{z}(\tau^{+})\] \[\times\exp\left\{i\int_{y^{+}}^{x^{+}}d\tau^{+}\left[-\frac{m_{ \epsilon}^{2}}{2\langle p^{+}\rangle}+\frac{\langle p^{+}\rangle}{2}\dot{\bf z }^{2}-p^{+}\dot{z}^{-}\right]\right\}{\cal U}_{[x^{+},y^{+}]}[z^{+},\vec{z}]. \tag{30}\]
#### 2.2.1 Connection with the BDMPS-Z effective propagator
In high-energy scattering processes, the right-moving projectile possesses a significantly large longitudinal momentum, surpassing all other scales involved in the process. Consequently, the interactions with the medium are primarily governed by the exchange of soft gluons. In the particle's reference frame, the background field sources are boosted towards the left direction, resulting in the formation of a Weizsacker-Williams field with the following expression: \(\hat{A}^{\mu}(q)=\delta^{\mu-}\delta(q^{+})a(q^{-},{\bf q})\), where \(a\) represents the distribution of the \(q^{-},{\bf q}\) modes of the gauge field. As a result, the projectile interacts exclusively with the \(-\)-component of the field, which remains independent of the coordinate \(z^{-}\):7
Footnote 7: In coordinate space, this is seen as a high boost in the \(z^{-}\) direction so that \(A^{\mu}(z)\to\Lambda_{\nu}^{\mu}A^{\nu}(\Lambda^{-1}z)\), where \(\Lambda\in SO(1,3)\) is a Lorentz transformation [14]. Thus, the \(z^{-}\) dependence of the field is suppressed at high rapidity, \(\omega\), boosts, \(\Lambda^{-1}z=(e^{\omega}z^{+},e^{-\omega}z^{-},{\bf z})\), and the longitudinal component of the field is enhanced, \(\Lambda_{\nu}^{\mu}A^{\nu}=(e^{-\omega}A^{+},e^{\omega}A^{-},{\bf A})\).
\[A^{\mu}(z)=\int_{q}e^{iq\cdot z}\hat{A}^{\mu}(q)=\delta^{\mu-}\int_{q^{-},{ \bf q}}e^{iq^{-}z^{+}-i{\bf q}\cdot{\bf z}}a(q^{-},{\bf q})\equiv\delta^{\mu -}A^{-}(z^{+},{\bf z}). \tag{31}\]
In this case, the Wilson line is \(z^{-}\) independent and is given by
\[{\cal U}_{[x^{+},y^{+}]}[z^{+}(\tau^{+}),\vec{z}(\tau^{+})]={\cal P}_{+}\exp \left\{-ig\int_{y^{+}}^{x^{+}}d\tau^{+}\left[A^{-}(z^{+},{\bf z})\frac{p^{+}} {\langle p^{+}\rangle}\right]\right\}. \tag{32}\]
From Eq. (2.30), it can be observed that the \(z^{-}\) dependence solely arises from the kinetic term \(p^{+}\dot{z}^{-}\), allowing us to solve the \(z^{-}\) and \(p^{+}\) path integrals. The \(z^{-}\) path integral
determines \(p^{+}(\tau^{+})=\text{const.}\equiv k^{+}\), thereby establishing a correspondence between the light-cone proper time and the light-cone time, such that \(\tau^{+}=z^{+}\). By employing Eqs. (B.8) and (B.10), we can express Eq. (2.30) as follows:
\[\Delta_{R}(x,y) =\Theta(x^{+}-y^{+})\int\frac{dk^{+}}{2\pi}\frac{\Theta(k^{+})}{2 k^{+}}e^{-ik^{+}(x^{-}-y^{-})}\] \[\times\int_{\mathbf{y}}^{\mathbf{x}}\mathcal{D}^{2}\mathbf{z}(z^ {+})\ \mathcal{P}_{+}\exp\left\{i\int_{y^{+}}^{x^{+}}dz^{+}\left[\frac{k^{+}}{2} \dot{\mathbf{z}}^{2}-gA^{-}(z^{+},\mathbf{z})\right]\right\}, \tag{2.33}\]
where we have neglected the mass term since the longitudinal momentum is scaled by a large parameter, \(k^{+}\gg m\).
Equation (2.33) represents the well-known result for the (retarded) in-medium scalar propagator when the momentum transfer between the particle and the medium is purely transverse and only the longitudinal component of the field is taken into account. The transverse path integral
\[\mathcal{G}_{k^{+}}(x^{+},\mathbf{x};y^{+},\mathbf{y})=\int_{ \mathbf{y}}^{\mathbf{x}}\mathcal{D}^{2}\mathbf{z}(z^{+})\ \mathcal{P}_{+}\exp\left\{i\int_{y^{+}}^{x^{+}}dz^{+}\left[\frac{k^{+}}{2} \dot{\mathbf{z}}^{2}-gA^{-}(z^{+},\mathbf{z})\right]\right\}, \tag{2.34}\]
is the propagator of a Schrodinger equation for a non-relativistic particle of mass \(k^{+}\) moving in a time-dependent potential \(A^{-}(z^{+},\mathbf{z})\). Equation (2.34) acts as an effective gluon propagator in the high-energy limit and is one of the key ingredients in the calculation of the BDMPS-Z spectrum, which describes the energy spectrum of a radiated gluon due to the interaction of the parton projectile with the medium [65].
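Since Eq. (2.34) is the path-integral solution of a (2+1)-dimensional Schrodinger problem, it can also be evaluated numerically once a field configuration is specified. The following minimal sketch (an illustration added here, not part of the derivation) propagates a test wave packet with a standard split-step Fourier method for an abelian toy potential; the field model and all parameter values are assumptions chosen only for demonstration.

```python
import numpy as np

# Minimal split-step sketch of the quantum-mechanical problem behind Eq. (2.34):
#   i d/dz^+ psi = [ -grad_perp^2 / (2 k^+) + g A^-(z^+, x) ] psi,
# i.e. a non-relativistic particle of "mass" k^+ in a time-dependent potential.
# Toy abelian potential and illustrative parameters only.
kplus, g = 10.0, 1.0
N, L, dz = 256, 20.0, 0.01
x = (np.arange(N) - N / 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")
kf = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kf, kf, indexing="ij")
K2 = KX**2 + KY**2

def A_minus(zp):
    # Toy potential: a Gaussian lump drifting slowly in the transverse plane.
    return np.exp(-((X - np.sin(zp))**2 + Y**2))

psi = np.exp(-(X**2 + Y**2)).astype(complex)          # initial wave packet
norm0 = np.sum(np.abs(psi)**2)

half_kin = np.exp(-1j * K2 / (2 * kplus) * dz / 2)    # kinetic half-step (momentum space)
for n in range(500):                                   # evolve in z^+
    psi = np.fft.ifft2(half_kin * np.fft.fft2(psi))
    psi *= np.exp(-1j * g * A_minus(n * dz) * dz)      # potential step (position space)
    psi = np.fft.ifft2(half_kin * np.fft.fft2(psi))

print(np.sum(np.abs(psi)**2) / norm0)                  # ~1: the evolution is unitary
```

Each step multiplies the wave function by unimodular phases in position and momentum space, so the norm is conserved, mirroring the unitarity of the propagator in Eq. (2.34).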
Thus, Eq. (2.30) can be used to compute corrections to the effective gluon propagator, by including longitudinal momentum transfer with the medium as well as interactions with the \(A^{+}\) and \(\mathbf{A}\) components of the field. With a model for the gauge field \(A^{\mu}(z)\), one can compute non-eikonal corrections to the BDMPS-Z spectrum resulting from the interaction of the radiated gluon with the non-eikonal gauge field.
Furthermore, in the BDMPS-Z formalism, the parent parton is typically treated as an eikonal object that traverses a straight transverse trajectory within the medium, experiencing energy loss solely due to gluon radiation. Considering Eq. (2.30) as the quark propagator when the parton spin can be neglected, it offers opportunities to investigate the energy loss of the projectile parton resulting from longitudinal scatterings with the dynamic medium. This extension allows for the exploration of collisional energy loss effects beyond the eikonal approximation. To achieve this, a model for the gauge field with an explicit dependence on \(z^{-}\) is required. Under certain conditions, the \(z^{-}\) and \(p^{+}\) path integrals can be exactly solved, leading to an energy spectrum for elastic projectile-medium scatterings. However, we acknowledge that a comprehensive non-eikonal treatment should incorporate the quark spin, which we plan to explore in a future project.
#### 2.2.2 The propagator in the case of a finite medium
To finalize this section, we introduce the case in which the particle travels outside the medium, which is relevant for the analysis of proton-nucleus (\(p\)A) collisions8. In the hybrid formalism, this situation is represented as a projectile parton propagating from \(z^{+}=-\infty\) to \(z^{+}=\infty\), interacting with a classical field generated by the cold nuclear matter. Following standard field theory conventions, we assume that the gauge field \(A^{\mu}(z)\) rapidly decreases, meaning it vanishes faster than any power of \(z^{+}\) as \(z^{+}\to\pm\infty\). To account for this behavior, we introduce longitudinal points \(z_{i}^{+}\) and \(z_{f}^{+}\), where \(z_{f}^{+}>z_{i}^{+}\), such that the background field becomes negligible for \(z^{+}<z_{i}^{+}\) and \(z^{+}>z_{f}^{+}\). In practical terms, \(z_{i}^{+}\sim-R^{+}\) and \(z_{f}^{+}\sim R^{+}\), where \(R^{+}\) is the longitudinal light-cone radius of the nucleus. Consequently, the parton only interacts with the medium within the region \(z^{+}\in[z_{i}^{+},z_{f}^{+}]\).
Footnote 8: The case presented in this section is also relevant for DIS, where the virtual photon may split into a quark-antiquark pair outside the medium.
In order to simplify our analysis, we choose to work in the light-cone gauge \(A^{+}=0\), which leads to a simpler solution for the \(z^{-}\) path integral. However, it is important to note that the following analysis can be extended to a generic gauge if needed. From Eq. (2.30), we observe that if \(A^{\mu}(z)\) is nonzero only in the region \(z_{i}^{+}<z^{+}<z_{f}^{+}\), the quantity \(p^{+}(\tau^{+})\) remains constant for \(\tau^{+}<\tau_{i}^{+}\) or \(\tau^{+}>\tau_{f}^{+}\). Here, the boundaries of the medium in the LC proper time are defined by \(z^{+}(\tau_{i}^{+})=z_{i}^{+}\) and \(z^{+}(\tau_{f}^{+})=z_{f}^{+}\). Consequently, the configuration for \(p^{+}(\tau^{+})\) is given by
\[p^{+}(\tau^{+})=\begin{cases}k_{i}^{+}&\text{if }y^{+}<\tau^{+}<\tau_{i}^{+}\\ p_{\Delta}^{+}(\tau^{+})&\text{if }\tau_{i}^{+}<\tau^{+}<\tau_{f}^{+}\\ k_{f}^{+}&\text{if }\tau_{f}^{+}<\tau^{+}<x^{+}\end{cases}, \tag{2.35}\]
where we define \(p_{\Delta}^{+}(\tau^{+})\) as the longitudinal momentum of the particle inside the medium, which is not constant due to multiple longitudinal scatterings with the nuclear matter. Additionally, we use the shorthand notation \(\vec{k}_{i}\equiv\vec{p}(y^{+})\) and \(\vec{k}_{f}\equiv\vec{p}(x^{+})\). From Eq. (2.25), it can be observed that in this case, the average longitudinal momentum is given by
\[\langle p^{+}\rangle=k_{i}^{+}\frac{\tau_{i}^{+}-y^{+}}{x^{+}-y^{+}}+\langle p _{\Delta}^{+}\rangle\frac{\tau_{f}^{+}-\tau_{i}^{+}}{x^{+}-y^{+}}+k_{f}^{+} \frac{x^{+}-\tau_{f}^{+}}{x^{+}-y^{+}}, \tag{2.36}\]
where we have defined the average in-medium longitudinal momentum as
\[\langle p_{\Delta}^{+}\rangle=\frac{1}{\tau_{f}^{+}-\tau_{i}^{+}}\int_{\tau_{ i}^{+}}^{\tau_{f}^{+}}p_{\Delta}^{+}(\tau^{+})d\tau^{+}. \tag{2.37}\]
From Eq. (2.24), we can observe that the relation between \(z^{+}\) and \(\tau^{+}\) is linear outside the medium, as illustrated in Fig. 2, where the slopes are given by the following relation:
\[\frac{z_{i}^{+}-y^{+}}{\tau_{i}^{+}-y^{+}}=\frac{k_{i}^{+}}{\langle p^{+} \rangle},\qquad\frac{x^{+}-z_{f}^{+}}{x^{+}-\tau_{f}^{+}}=\frac{k_{f}^{+}}{ \langle p^{+}\rangle}. \tag{2.38}\]
These equations allow us to relate \(z^{+}\) and \(\tau^{+}\) in the boundary of the medium.
In this case, we can easily solve Eq. (2.30) with the help of the path-integral identities collected in Appendix B and using the relationship given in Eq. (2.38). The result is the following:
\[\Delta_{R}(x,y)=\Theta(x^{+}-y^{+})\int_{\vec{k}_{i},\vec{k}_{f}} \Theta(k_{i}^{+})\Theta(k_{f}^{+})e^{-i\hat{k}_{f}\cdot x+i\hat{k}_{i}\cdot y} \int_{\vec{z}_{i},\vec{z}_{f}}e^{i\hat{k}_{f}\cdot z_{f}-i\hat{k}_{i}\cdot z_{i} }\int\mathcal{D}p_{\Delta}^{+}(\tau^{+})\] \[\times\frac{\Theta(\langle p^{+}\rangle)}{2\langle p^{+}\rangle }\int_{\vec{z}(\tau_{i}^{+})=\vec{z}_{f}}^{\vec{z}(\tau_{f}^{+})=\vec{z}_{f}} \mathcal{D}^{3}\vec{z}(\tau^{+})\ e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau ^{+}\left[-\frac{m_{\Delta}^{2}}{2\langle p^{+}\rangle}+\frac{\langle p^{+} \rangle}{2}\hat{x}^{2}-p^{+}z^{-}\right]}\mathcal{U}_{[\tau_{f}^{+},\tau_{i}^ {+}]}[z^{+},\vec{z}], \tag{39}\]
where \(\hat{k}\cdot x=\hat{k}^{-}x^{+}+\vec{k}\cdot\vec{x}\). Equation (2.39) is more generic than Eq. (2.30), and for this reason, in the analysis of the next section, we will employ Eq. (2.39) rather than Eq. (2.30). Furthermore, due to the presence of the "LC energy" term \(\hat{k}_{f}^{-}x^{+}-\hat{k}_{i}^{-}y^{+}\) in the exponent, Eq. (2.39) facilitates a straightforward amputation of the propagator using an LSZ-like approach, as detailed in Appendix E.
## 3 Eikonal expansion of the scalar propagator
In Section 1, we introduced the importance of the eikonal approximation in high-energy QCD, as well as its limitations. In this section, we will review the eikonal approximation, which corresponds to the saddle-point approximation of Eqs. (2.27), (2.29) and (2.30), where the paths follow their classical trajectory, and the interaction of the particle with the field is manifested by a change of phase in the projectile wave function. We will also present a systematic expansion in terms of powers of the inverse of the energy. Before delving into our analysis, let us introduce the eikonal approximation in the context of scattering theory.
Figure 2: The light cone time, \(z^{+}(\tau^{+})\), when the background field is finite only in \(z^{+}\in[z_{i}^{+},z_{f}^{+}]\), at a given configuration of the in-medium longitudinal momentum \(p_{\Delta}^{+}(\tau^{+})\). The shaded region represents the area in which the field is non-zero.

The eikonal approximation assumes that in a scattering process, the energy of the collision is much higher than the momentum transfer, as in the Gribov-Regge limit. In
this scenario, the interaction of the particle with the background field is achieved by the exchange of soft gluons. In terms of Lorentz invariant variables, this means that the center-of-mass energy squared, \(s\), of the collisions is much higher than the momentum transfer squared, \(t\):
\[s\gg|t|\text{ at fixed }t. \tag{3.1}\]
Working in the target rest frame, the right-moving projectile (\(p_{i}\)) and target (\(k_{i}\)) momenta before the scattering are (in Minkowski coordinates)
\[p_{i}=(\gamma m,\gamma\beta m,\mathbf{0}),\qquad k_{i}=(M,0,\mathbf{0}), \tag{3.2}\]
where \(\gamma=\cosh\omega\approx e^{\omega}/2\) is the Lorentz gamma, \(\omega\) is the rapidity of the projectile, \(\beta=\sqrt{1-\gamma^{-2}}\), \(m\) is the mass of the projectile, and \(M\) is the mass of the target. After the scattering, the projectile exchanges momentum \(Q^{\mu}=p_{f}^{\mu}-p_{i}^{\mu}\) with the target such that
\[s\approx 2mM\gamma,\qquad t=Q^{2}. \tag{3.3}\]
In LC coordinates, the initial projectile longitudinal momentum is given by \(p_{i}^{+}=\gamma m=s/2M\). Thus, assuming that the momentum exchange is much smaller than the initial momentum of the projectile, we have the following relations at high energy:
\[P^{+}=\frac{p_{i}^{+}+p_{f}^{+}}{2}\approx\frac{s}{2M}\to\infty,\qquad Q^{-}= -M\frac{t}{s}\to 0,\qquad Q^{+}=\frac{t}{\sqrt{2}M}+Q^{-}\ll P^{+}, \tag{3.4}\]
where we have written the eikonal limit given by Eq. (3.1). Since \(P^{+}\sim s\) and \(Q^{+}\sim t\), in the following sections, we are going to study the corrections to the eikonal approximation as an expansion in powers of \(1/P^{+}\) and \(Q^{+}/P^{+}\). We work in the rest frame of the target and use the fact that the projectile is boosted to the right direction so that the longitudinal momentum \(P^{+}\sim me^{\omega}\) is scaled by a large parameter. Because the eikonal approximation is better described in momentum space, it is convenient to write the scalar propagator in terms of its Fourier modes:
\[\tilde{\Delta}_{R}(x^{+},\vec{p}_{f};y^{+},\vec{p}_{i})=\int_{\vec{x},\vec{y}} e^{i\vec{p}_{f}\cdot\vec{x}-i\vec{p}_{i}\cdot\vec{y}}\ \Delta_{R}(x,y), \tag{3.5}\]
where we just Fourier transform the \(x^{-}\) and \(\mathbf{x}\) coordinates for convenience. We note that since we are working in a mixed representation, we also have to take into account that, apart from the LC momentum \(P^{+}\), LC time intervals, \(\Delta x^{+}\to e^{\omega}\Delta x^{+}\), are also going to be parametrically large under projectile boosts due to Lorentz dilation. Moreover, we use the representation derived in Section 2.2.2, which we remember was derived in the LC gauge \(A^{+}=0\), because it is more general, and in many practical problems, we are interested in the case in which the particle also propagates outside the medium. The Fourier transform of Eq. (2.39) is thus given by
\[\tilde{\Delta}_{R}(x^{+},\vec{p}_{f};y^{+},\vec{p}_{i})=\Theta(p_{i}^{+}) \Theta(p_{f}^{+})\Theta(x^{+}-y^{+})e^{-i\vec{p}_{f}^{-}x^{+}+i\vec{p}_{i}^{-} y^{+}}\Delta_{\text{m}}\left(\frac{\vec{p}_{f}+\vec{p}_{i}}{2},\vec{p}_{f}-\vec{p}_{i }\right), \tag{3.6}\]
where we have defined the in-medium scalar propagator as9
Footnote 9: We note that \(\hat{Q}^{-}=\hat{p}_{f}^{-}-\hat{p}_{i}^{-}\) and \(\hat{P}^{-}=\frac{\hat{p}_{f}^{-}+\hat{p}_{i}^{-}}{2}\).
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =\int_{\vec{B},\vec{\Delta}}e^{i\Delta\cdot\hat{P}+iB\cdot\hat{Q}} \int{\cal D}p_{\Delta}^{+}(\tau^{+})\frac{\Theta(\langle p^{+}\rangle)}{2 \langle p^{+}\rangle}\] \[\times\int_{\vec{B}-\vec{\Delta}/2}^{\vec{B}+\vec{\Delta}/2}{ \cal D}^{3}\vec{z}(\tau^{+})\ e^{i\int_{\tau_{i}^{+}}^{\tau_{i}^{+}}d\tau^{+} \left[-\frac{m_{\epsilon}^{2}}{2\langle p^{+}\rangle}+\frac{\langle p^{+} \rangle}{2}\hat{\bf z}^{2}-p_{\Delta}^{+}\hat{z}^{-}\right]}{\cal U}_{[\tau_{f }^{+},\tau_{i}^{+}]}[z^{+},\vec{z}], \tag{10}\]
and
\[P^{\mu} =\frac{p_{i}^{\mu}+p_{f}^{\mu}}{2}, Q^{\mu} =p_{f}^{\mu}-p_{i}^{\mu}, \tag{11}\] \[B^{\mu} =\frac{z_{i}^{\mu}+z_{f}^{\mu}}{2}, \Delta^{\mu} =z_{f}^{\mu}-z_{i}^{\mu}, \tag{12}\]
where \(P\) is the average 4-momentum of the projectile and \(Q\) the 4-momentum transfer by the medium.
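To get a feeling for the hierarchy in Eq. (3.4), it is useful to plug in some representative numbers; the values below are purely illustrative and are not taken from any specific process discussed in this manuscript.

```python
import math

# Illustrative kinematics for the eikonal hierarchy of Eq. (3.4); numbers chosen
# only for orientation.
s, M, t = 100.0, 1.0, -1.0                       # GeV^2, GeV, GeV^2
P_plus = s / (2 * M)                             # ~ 50 GeV
Q_minus = -M * t / s                             # ~ 0.01 GeV
Q_plus = t / (math.sqrt(2) * M) + Q_minus        # ~ -0.70 GeV
print(P_plus, Q_minus, Q_plus, abs(Q_plus) / P_plus)   # Q+/P+ ~ 1.4%
```

Even at moderate collision energies the longitudinal momentum transfer is a percent-level fraction of \(P^{+}\), which is what justifies organizing the calculation as an expansion in \(Q^{+}/P^{+}\) and \(1/P^{+}\).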
Since the eikonal approximation neglects the longitudinal momentum transfer in the scattering, it is convenient to write the path integral in terms of the local longitudinal momentum transfer \(q^{+}(\tau^{+})=\dot{p}_{\Delta}^{+}(\tau^{+})\,(\tau_{f}^{+}-\tau_{i}^{+})\), which can be achieved by performing the change of variables:
\[p_{\Delta}^{+}(\tau^{+})=P^{+}+\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}\frac{d\bar{\tau}^{+}q^{+}(\bar{\tau}^{+})}{\tau_{f}^{+}-\tau_{i}^{+}}\frac{\epsilon(\tau^{+}-\bar{\tau}^{+})}{2}, \tag{3.10}\]
where \(\epsilon(x)=\Theta(x)-\Theta(-x)\) is the sign function. We note that \(q^{+}(\tau^{+})\) is the longitudinal momentum transfer from the nuclear matter to the particle at a given proper time \(\tau^{+}\).
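As a quick consistency check, evaluating Eq. (3.10) at the medium boundaries, where \(\epsilon(\tau_{i}^{+}-\bar{\tau}^{+})=-1\) and \(\epsilon(\tau_{f}^{+}-\bar{\tau}^{+})=+1\) for \(\bar{\tau}^{+}\in(\tau_{i}^{+},\tau_{f}^{+})\), gives

\[p_{\Delta}^{+}(\tau_{i}^{+})=P^{+}-\frac{1}{2}\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}\frac{d\bar{\tau}^{+}}{\tau_{f}^{+}-\tau_{i}^{+}}\,q^{+}(\bar{\tau}^{+}),\qquad p_{\Delta}^{+}(\tau_{f}^{+})=P^{+}+\frac{1}{2}\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}\frac{d\bar{\tau}^{+}}{\tau_{f}^{+}-\tau_{i}^{+}}\,q^{+}(\bar{\tau}^{+}),\]

which reduce to \(p_{i}^{+}=P^{+}-Q^{+}/2\) and \(p_{f}^{+}=P^{+}+Q^{+}/2\) once the constraint on the average momentum transfer introduced below is imposed, so the change of variables correctly reproduces the longitudinal momenta at the edges of the medium.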
In Appendix C, we derive the expression for Eq. (3.7) written in terms of \(q^{+}(\tau^{+})\) using the discrete representation of the path integral. The continuous limit, after performing this change of variables, is given by
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =e^{i\hat{Q}^{-}B^{+}+i\hat{P}^{-}\Delta^{+}}\int_{\bf B,\Delta}e ^{-i{\bf B}\cdot{\bf Q}-i{\bf\Delta}\cdot{\bf P}}\] \[\times\int^{q^{+}(\tau_{f}^{+})=0}{\cal D}q^{+}(\tau^{+})\delta \left(\langle q^{+}\rangle-Q^{+}\right)\frac{\Theta(\langle p^{+}\rangle)}{2 \langle p^{+}\rangle}\int{\cal D}z^{-}(\tau^{+})\int_{\bf B-\Delta/2}^{\bf B+ \Delta/2}{\cal D}^{2}{\bf z}(\tau^{+})\] \[\times\exp\left\{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+} \left[-\frac{m_{\epsilon}^{2}}{2\langle p^{+}\rangle}+\frac{\langle p^{+} \rangle}{2}\hat{\bf z}^{2}+\frac{q^{+}z^{-}}{\tau_{f}^{+}-\tau_{i}^{+}}\right] \right\}{\cal U}_{[\tau_{f}^{+},\tau_{i}^{+}]}[z^{+},\vec{z}], \tag{14}\]
where \(\bar{\delta}(x)\equiv 2\pi\delta(x)\) (denoted simply by \(\delta\) in the displayed equations), and the Dirac delta comes from the fact that the change of variables given in Eq. (3.10) introduces the constraint that the average longitudinal momentum transfer is equal to the total longitudinal momentum transfer:
\[\langle q^{+}\rangle\equiv\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}\frac{d\tau^{+}}{\tau_{f}^{+}-\tau_{i}^{+}}q^{+}(\tau^{+})=Q^{+}. \tag{3.12}\]
Finally, the average longitudinal momentum, written in terms of the local momentum transfer, can be computed by using Eqs. (2.36) and (3.10), and reads
\[\langle p^{+}\rangle=P^{+} +\frac{Q^{+}}{2}\frac{(x^{+}-\tau_{f}^{+})-(\tau_{i}^{+}-y^{+})}{x^{+}-y^{+}}\] \[+\frac{1}{x^{+}-y^{+}}\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+}\frac{q^{+}(\tau^{+})}{2}\frac{(\tau_{f}^{+}-\tau^{+})-(\tau^{+}-\tau_{i}^{+})}{\tau_{f}^{+}-\tau_{i}^{+}}. \tag{3.13}\]
### 3.1 The eikonal approximation
In the eikonal \(P^{+}\to\infty\) limit, the path integral's action becomes dominated by the free Lagrangian:
\[\mathcal{L}_{0}=-\frac{m_{\epsilon}^{2}}{2\langle p^{+}\rangle}+\frac{\langle p^{+}\rangle}{2}\dot{\mathbf{z}}^{2}+\frac{q^{+}z^{-}}{\Delta\tau^{+}}, \tag{3.14}\]
where in this section, we introduce \(\Delta\tau^{+}=\tau_{f}^{+}-\tau_{i}^{+}\) and \(B_{\tau}^{+}=(\tau_{i}^{+}+\tau_{f}^{+})/2\). This implies that the particle interacts weakly with the gauge field along its trajectory, and one can approximate its trajectory by the classical one. Thus, the eikonal approximation is equivalent to the saddle-point approximation in the functional approach. The effect of the field, as we are going to see below, is just to introduce an Aharonov-Bohm-like phase in the particle's wave function through the eikonal Wilson line, to be defined below.
Thus, by solving the Euler-Lagrange equations of Eq. (3.14), we obtain a set of equations for the classical trajectories:
\[z_{\rm cl}^{-}=-\left(\frac{m^{2}}{2(P^{+})^{2}}+\frac{\dot{\mathbf{z}}^{2}}{2}\right)(\tau^{+}-B_{\tau}^{+}),\qquad\ddot{\mathbf{z}}_{\rm cl}=0,\qquad q_{\rm cl}^{+}=0. \tag{3.15}\]
As one would expect, the particle's classical trajectory is given by a straight line where the longitudinal momentum is constant \(p_{\rm cl}^{+}(\tau^{+})=P^{+}\). We can express the slope of the transverse trajectory either in momentum space or in coordinate space (by making use of the boundary conditions \(\mathbf{z}(\tau_{i}^{+})=\mathbf{z}_{i}\) and \(\mathbf{z}(\tau_{f}^{+})=\mathbf{z}_{f}\)). For the discussion performed in this chapter, it is convenient to express the slope in terms of momentum coordinates which is possible by making the action stationary with respect to the external points. In order to make the final result explicitly translational invariant, we expand the transverse trajectory around the impact parameter \(\mathbf{B}\) so that:
\[z_{\rm cl}^{-}=-\frac{m^{2}+\mathbf{P}^{2}}{2(P^{+})^{2}}(\tau^{+}-B_{\tau}^{+}),\qquad\mathbf{z}_{\rm cl}=\mathbf{B}+\frac{\mathbf{P}}{P^{+}}(\tau^{+}-B_{\tau}^{+}),\qquad z_{\rm cl}^{+}=\tau^{+}. \tag{3.16}\]
In the functional approach presented in this manuscript, the eikonal approximation consists of two steps. First, we use the saddle-point approximation to evaluate the field around the classical straight-line path: \(A^{\mu}(z)\approx A^{\mu}(z_{\rm cl})\). Second, we use the small-angle approximation [13] in the \(P^{+}\to\infty\) limit to neglect the slope of the \(\vec{z}_{\rm cl}\) trajectory, so that \(A^{\mu}(z)\approx A^{\mu}(\tau^{+},0,\mathbf{B})\). Hence, using this approximation for the gauge field in Eq. (3.11), we can solve the \(\mathbf{z}\), \(z^{-}\) and \(q^{+}\) path integrals with the help of the path-integral identities collected in Appendix B,
and we obtain:
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =e^{i\hat{Q}^{-}B^{+}+i\hat{P}^{-}\Delta^{+}}\delta(Q^{+})\frac{ \Theta(P^{+})}{2P^{+}}\frac{P^{+}}{2\pi i\Delta^{+}}\int_{\mathbf{B},\mathbf{ \Delta}}e^{-i\mathbf{B}\cdot\mathbf{Q}-i\mathbf{\Delta}\cdot\mathbf{P}+i\frac{ P^{+}}{2\Delta^{+}}\mathbf{\Delta}^{2}}U_{[z_{f}^{+},z_{i}^{+}]}(0,\mathbf{B})\] \[=e^{i\frac{\mathbf{P}\cdot\mathbf{Q}}{P^{+}}B^{+}+i\frac{\mathbf{ Q}^{2}}{8P^{+}}\Delta^{+}}\delta(Q^{+})\frac{\Theta(P^{+})}{2P^{+}}\int_{ \mathbf{B}}e^{-i\mathbf{B}\cdot\mathbf{Q}}\ U_{[z_{f}^{+},z_{i}^{+}]}(0, \mathbf{B}),, \tag{3.17}\]
where we have used the fact that
\[\hat{Q}^{-}B^{+}+\hat{P}^{-}\Delta^{+}-\frac{\mathbf{P}^{2}+m^{2}}{2P^{+}} \Delta^{+}=\frac{\mathbf{P}\cdot\mathbf{Q}}{P^{+}}B^{+}+\frac{\mathbf{Q}^{2}} {8P^{+}}\Delta^{+}, \tag{3.18}\]
when \(Q^{+}=0\). Neglecting the phases, which are subleading in the \(P^{+}\to\infty\) limit, we obtain the eikonal in-medium propagator:
\[\Delta_{\rm m}(\vec{P},\vec{Q})=\delta(Q^{+})\frac{\Theta(P^{+})}{2P^{+}}\int_ {\mathbf{B}}e^{-i\mathbf{B}\cdot\mathbf{Q}}\ U_{[z_{f}^{+},z_{i}^{+}]}(0, \mathbf{B}). \tag{3.19}\]
Note that this propagator is invariant under transverse Galilean transformations, i.e., it only depends on the transverse momentum transfer \(\mathbf{Q}\) but not on \(\mathbf{P}\). This implies that, when written in coordinate space, it is going to be diagonal in the transverse coordinates. It describes the propagation of a fast-moving scalar particle in a classical background field where the recoil of the medium is neglected. The only effect of the interaction with the medium is a color rotation in the projectile's wave function that is described by the _eikonal Wilson line_:10
Footnote 10: Note that the transverse component of the background field does not contribute to the eikonal Wilson line since it is suppressed by \(\dot{\mathbf{z}}\sim 1/P^{+}\).
\[U_{[x^{+},y^{+}]}(\vec{z})=\mathcal{P}_{+}e^{-ig\int_{y^{+}}^{x^{+}}dz^{+}A^{ -}(z^{+},\vec{z})}. \tag{3.20}\]
Strictly speaking, the eikonal Wilson line is defined with \(z^{-}=0\) since, in the eikonal limit, the particle only probes the field in this region. However, we define it with \(z^{-}\neq 0\) for future convenience.
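As an aside, the path-ordered exponential in Eq. (3.20) has a simple constructive meaning: it is the ordered product of the infinitesimal color rotations accumulated along the light-cone time direction. The sketch below builds such a product on a light-cone time lattice for a toy su(3)-valued field; the random field model, the coupling, and the ordering convention (later times to the left) are illustrative assumptions rather than ingredients of the manuscript.

```python
import numpy as np
from scipy.linalg import expm

# Lattice approximation of the eikonal Wilson line of Eq. (3.20),
#   U_[x+,y+](B) = P_+ exp{ -i g \int dz^+ A^-(z^+, B) },
# at a single transverse point B, with a toy su(3)-valued field.
rng = np.random.default_rng(0)
Nc, Nsteps, g = 3, 200, 1.0
zp = np.linspace(-1.0, 1.0, Nsteps + 1)          # light-cone time grid
dz = zp[1] - zp[0]

def A_minus(_z):
    """Toy su(Nc)-valued A^-(z^+, B): random traceless Hermitian matrix per slice."""
    H = rng.normal(size=(Nc, Nc)) + 1j * rng.normal(size=(Nc, Nc))
    H = 0.5 * (H + H.conj().T)
    return H - np.trace(H) / Nc * np.eye(Nc)

# Path ordering P_+: later light-cone times placed to the left (assumed convention).
U = np.eye(Nc, dtype=complex)
for z in zp[:-1]:
    U = expm(-1j * g * dz * A_minus(z)) @ U

print(np.allclose(U.conj().T @ U, np.eye(Nc)))    # True: the Wilson line is unitary
```

The printed check confirms that the resulting lattice Wilson line is unitary, as it must be for any Hermitian \(A^{-}\).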
### 3.2 The eikonal expansion
By employing the saddle point approximation, \(A^{\mu}(z)\approx A^{\mu}(z_{\rm cl})\), and the small-angle approximation, \(P^{+}\to\infty\), we can obtain the eikonal scalar propagator. However, in this section, we aim to go beyond these approximations and systematically expand the scalar propagator in terms of \(1/P^{+}\) by computing the finite boost corrections. To achieve this, we adopt a strategy similar to the one employed in [13], which can be outlined as follows:
1. We relax the saddle point approximation by Taylor expanding the background field around the classical trajectory. To achieve this, we perform the following change of variables: \[z^{-}=B^{-}+v^{-},\qquad\mathbf{z}=\mathbf{z}_{\rm cl}+\mathbf{v},\qquad \mathbf{z}_{\rm cl}\equiv\mathbf{B}+\frac{\mathbf{\Delta}}{\Delta\tau^{+}}( \tau^{+}-B_{\tau}^{+}),\] (3.21)
where we represent the classical transverse trajectory slope in coordinate space for convenience, and \(B^{-}\) is the conjugate of \(Q^{+}\) and can be identified as the one defined in Eq. (3.9). We adopt the convention of expanding around \(z^{-}=B^{-}\) instead of \(z_{\rm cl}^{-}\) because it simplifies the analysis, and at the end of the calculation, we revert to the convention of [15] and expand around \(B^{-}=0\). The Taylor expansion of the background field is then given by \[A^{\mu}(z^{+},z^{-},{\bf z})=\exp\big{\{}v^{-}\partial^{+}+{\bf v}^{i}\cdot\partial_{{\bf z}_{\rm cl}^{i}}\big{\}}A^{\mu}(z^{+},B^{-},{\bf z}_{\rm cl}).\] (3.22)
2. We perform the small angle limit by assuming that the transverse slope is small and we Taylor expand again the background field around \({\bf z}_{\rm cl}={\bf B}\): \[A^{\mu}(z^{+},z^{-},{\bf z})=\exp\Bigg{\{}v^{-}\partial^{+}+\Bigg{[}{\bf v}^{i}+\frac{\mathbf{\Delta}}{\Delta\tau^{+}}(\tau^{+}-B_{\tau}^{+})\Bigg{]}\cdot\partial_{{\bf B}^{i}}\Bigg{\}}A^{\mu}(z^{+},B^{-},{\bf B}).\] (3.23)
The main advantage of the change of variables given in Eq. (3.21) is that it allows us to write the scalar propagator, Eq. (3.11), as
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =e^{i\hat{Q}^{-}B^{+}+i\hat{P}^{-}\Delta^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\int_{\mathbf{\Delta}}e^{-i\mathbf{\Delta}\cdot{\bf P}+i\frac{\langle p^{+}\rangle}{2\Delta\tau^{+}}\mathbf{\Delta}^{2}-i\frac{m_{\epsilon}^{2}}{2\langle p^{+}\rangle}\Delta\tau^{+}}\] \[\times\int^{q^{+}(\tau_{f}^{+})=0}{\cal D}q^{+}(\tau^{+})\frac{\Theta(\langle p^{+}\rangle)}{2\langle p^{+}\rangle}\int{\cal D}v^{-}(\tau^{+})e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+}\frac{v^{-}q^{+}}{\Delta\tau^{+}}}\] \[\times\int_{{\bf v}(\tau_{i}^{+})=0}^{{\bf v}(\tau_{f}^{+})=0}{\cal D}^{2}{\bf v}(\tau^{+})\ e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+}\frac{\langle p^{+}\rangle}{2}\dot{\bf v}^{2}}\ {\cal U}_{[\tau_{f}^{+},\tau_{i}^{+}]}\big{[}z^{+},B^{-}+v^{-},{\bf z}_{\rm cl}+{\bf v}\big{]}, \tag{3.24}\]
so that the transverse path integral has periodic boundary conditions: \({\bf v}(\tau_{i}^{+})={\bf v}(\tau_{f}^{+})=0\). In order to get Eq. (3.24), we have written the Dirac delta that constrains the mean longitudinal momentum transfer as
\[\delta\left(\langle q^{+}\rangle-Q^{+}\right)=\int_{B^{-}}e^{iB^{-}(Q^{+}-\langle q^{+}\rangle)}, \tag{3.25}\]
thus, the second term in the exponent cancels out after the change of variables.
We can solve Eq. (3.24) by expanding the Wilson line in terms of \({\bf v}\), \(\mathbf{\Delta}\), and \(v^{-}\). As \(\langle p^{+}\rangle={\cal O}(P^{+})\), it is clear from Eq. (3.24) that the (path) integral in \({\bf v}\) (\(\mathbf{\Delta}\)) behaves like a Gaussian-like (path) integral. Consequently, terms of order \({\bf v}^{2n}\) (\(\mathbf{\Delta}^{2n}\)) in the Wilson line expansion will contribute corrections of \({\cal O}(1/(P^{+})^{n})\). On the other hand, as we shall see in the subsequent discussion, terms of order \((v^{-})^{n}\) will introduce an \(n^{\rm th}\) order derivative in the longitudinal momentum transfer, resulting in a correction of order11\({\cal O}(1/(P^{+})^{2n})\).
Footnote 11: In fact, corrections arising from the \(v^{-}\) expansion yield powers of \(1/P^{+}(x^{+}-y^{+})\) that, as previously discussed, scale with the boost rapidity as \(e^{-2\omega}\).
In order to make the notation cleaner, let us define
\[\tilde{A}^{\mu}_{z^{+}}\equiv-igA^{\mu}(z^{+},B^{-},{\bf B}). \tag{3.26}\]
It is also convenient to introduce the _insertion_ tensor, \({\cal I}\), which represents the part in the exponent of the Wilson line that vanishes in the eikonal limit and can be expressed as \({\cal I}=-ig[A^{-}(z^{+},\vec{z})\dot{z}^{+}-{\bf A}(z^{+},\vec{z})\cdot\dot{\bf z}]-\tilde{A}^{-}_{z^{+}}\dot{z}^{+}\). Utilizing Eq. (3.23), it can be further written as
\[{\cal I}_{\tau^{+}}\left[v^{-},{\bf v},{\bf\Delta}\right]\] \[\equiv\exp\left\{v^{-}\partial^{+}+\left[{\bf v}^{i}+\frac{{\bf \Delta}^{i}}{\Delta\tau^{+}}(\tau^{+}-B_{\tau}^{+})\right]\partial_{{\bf B}^{i }}\right\}\left[\tilde{A}^{-}_{z^{+}}\dot{z}^{+}-\tilde{\bf A}_{z^{+}}\cdot \left(\dot{\bf v}+\frac{{\bf\Delta}}{\Delta\tau^{+}}\right)\right]-\tilde{A}^ {-}_{z^{+}}\dot{z}^{+}\] \[=-\tilde{\bf A}_{z^{+}}\cdot\left(\dot{\bf v}+\frac{{\bf\Delta}} {\Delta\tau^{+}}\right)+\left(v^{-}\partial^{+}+\left[{\bf v}^{i}+\frac{{\bf \Delta}^{i}}{\Delta\tau^{+}}(\tau^{+}-B_{\tau}^{+})\right]\partial_{{\bf B}^{i }}\right)\tilde{A}^{-}_{z^{+}}\dot{z}^{+}+\cdots. \tag{3.27}\]
Thus, the Wilson line can be expanded in terms of the insertions as
\[{\cal U}_{[\tau_{f}^{+},\tau_{i}^{+}]}\big{[}z^{+},B^{-}+v^{-}, {\bf z}_{\rm cl}+{\bf v}\big{]}={\cal P}_{+}\exp\left\{\,\int_{\tau_{i}^{+}}^ {\tau_{f}^{+}}d\tau^{+}\Big{(}{\cal I}_{\tau^{+}}\left[v^{-},{\bf v},{\bf \Delta}\right]+\tilde{A}^{-}_{z^{+}}\dot{z}^{+}\right)\right\}\] \[={\cal P}_{+}e^{\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+} \tilde{A}^{-}_{z^{+}}\dot{z}^{+}}\] \[\qquad\qquad\qquad+{\cal P}_{+}\int_{\tau_{i}^{+}}^{\tau_{f}^{+}} d\tau_{1}^{+}e^{\int_{\tau_{1}^{+}}^{\tau_{f}^{+}}d\tau^{+}\tilde{A}^{-}_{z^{+}} \dot{z}^{+}}{\cal I}_{\tau_{1}^{+}}\left[v^{-},{\bf v},{\bf\Delta}\right]e^{ \int_{\tau_{i}^{+}}^{\tau_{1}^{+}}d\tau^{+}\tilde{A}^{-}_{z^{+}}\dot{z}^{+}}+\cdots, \tag{3.28}\]
where the \(m^{\rm th}\) order of the expansion is obtained by the path-ordered product of \(m\) insertions. The expansion of the Wilson line can be written in a generalized way as
\[{\cal U}_{[\tau_{f}^{+},\tau_{i}^{+}]}\big{[}z^{+},B^{-}+v^{-}, {\bf z}_{\rm cl}+{\bf v}\big{]}\\ =\sum_{m=0}^{\infty}\Bigg{(}\prod_{n=m}^{1}{\cal P}_{+}\int_{\tau _{i}^{+}}^{\tau_{n+1}^{+}}d\tau_{n}^{+}e^{\int_{\tau_{n}^{+}}^{\tau_{n+1}^{+}} d\tau^{+}\tilde{A}^{-}_{z^{+}}\dot{z}^{+}}{\cal I}_{\tau_{n}^{+}}\left[v^{-},{\bf v},{ \bf\Delta}\right]\Bigg{)}e^{\int_{\tau_{i}^{+}}^{\tau_{1}^{+}}d\tau^{+} \tilde{A}^{-}_{z^{+}}\dot{z}^{+}}, \tag{3.29}\]
where \(\tau_{m+1}^{+}\equiv\tau_{f}^{+}\) and \(\tau_{0}^{+}\equiv\tau_{i}^{+}\).
Given the expansions in Eqs. (3.27) and (3.29), the next step is to solve the integrals over \({\bf\Delta}\), \({\bf v}(\tau^{+})\), and \(v^{-}(\tau^{+})\). The first one is the most straightforward, as for an infinitely differentiable function \(f({\bf\Delta})\), the Gaussian integral is given by
\[\int_{{\bf\Delta}}e^{-i{\bf\Delta}\cdot{\bf P}-\frac{(p^{+})}{2i \Delta\tau^{+}}{\bf\Delta}^{2}-i\frac{m_{0}^{2}}{2(p^{+})}\Delta\tau^{+}}f({ \bf\Delta})=\frac{2\pi i\Delta\tau^{+}}{\langle p^{+}\rangle}f(i\partial_{\bf P })e^{-i\frac{{\bf P}^{2}+m_{0}^{2}}{2(p^{+})}\Delta\tau^{+}}. \tag{3.30}\]
In the case of the path integral in \({\bf v}(\tau^{+})\), we are interested in solving a path integral of the following type:
\[\int_{{\bf v}(\tau_{i}^{+})=0}^{{\bf v}(\tau_{f}^{+})=0}{\cal D}^{ 2}{\bf v}(\tau^{+})\ e^{\ i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau+\frac{(p^{+} )}{2}\dot{\bf v}^{2}}F[{\bf v}]\\ =\int_{{\bf v}(\tau_{i}^{+})=0}^{{\bf v}(\tau_{f}^{+})=0}{\cal D} ^{2}{\bf v}(\tau^{+})\ e^{-\frac{1}{2}\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau+{ \bf v}\left(i\langle p^{+}\rangle\frac{d^{2}}{(dr^{+})^{2}}\right){\bf v}}F[{ \bf v}], \tag{3.31}\]
where \(F[{\bf v}]\) is a generic functional that can be Taylor expanded, and in the last step, we have integrated by parts. This integral can be solved analogously to Eq. (3.30) by defining the generating functional:
\[Z[{\bf J}]=\int_{{\bf v}(\tau_{i}^{+})=0}^{{\bf v}(\tau_{f}^{+})=0} \mathcal{D}^{2}{\bf v}(\tau^{+})e^{\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+} \left[i\frac{(p^{+})}{2}\dot{\bf v}^{2}+{\bf J}(\tau^{+}).{\bf v}\right]}\] \[=Z[0]\exp\Bigg{\{}\frac{1}{2}\int_{\tau_{1}^{+}\tau_{2}^{+}}{\bf J }^{i}(\tau_{1}^{+})G^{ij}(\tau_{1}^{+},\tau_{2}^{+}){\bf J}^{j}(\tau_{2}^{+}) \Bigg{\}}, \tag{3.32}\]
where \(\tau_{1}^{+},\tau_{2}^{+}\in[\tau_{i}^{+},\tau_{f}^{+}]\). The generating functional allows us to write the solution of the Gaussian path integral as
\[\int_{{\bf v}(\tau_{i}^{+})=0}^{{\bf v}(\tau_{f}^{+})=0}\mathcal{D}^{2}{\bf v }(\tau^{+})\ e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+}\frac{(p^{+})}{2} \dot{\bf v}^{2}}F[{\bf v}]=F\left[\frac{\delta}{\delta{\bf J}}\right]Z[{\bf J }]\Bigg{|}_{{\bf J}=0}. \tag{3.33}\]
Here, \(G^{ij}(\tau_{1}^{+},\tau_{2}^{+})\) is the inverse of the Gaussian coefficient and is the Green's function of the following one dimensional Poisson equation:
\[i\langle p^{+}\rangle\frac{d^{2}}{(d\tau_{1}^{+})^{2}}G^{ij}(\tau_{1}^{+}, \tau_{2}^{+})=\delta^{ij}\delta(\tau_{1}^{+}-\tau_{2}^{+}), \tag{3.34}\]
with the Dirichlet boundary conditions \(G^{ij}(\tau_{f}^{+},\tau^{+})=G^{ij}(\tau^{+},\tau_{i}^{+})=0\) for every \(\tau^{+}\in[\tau_{i}^{+},\tau_{f}^{+}]\). The solution for this equation is well known and reads
\[G^{ij}(\tau_{1}^{+},\tau_{2}^{+})\\ =\frac{i\delta^{ij}}{\langle p^{+}\rangle}\left[\frac{(\tau_{f}^{ +}-\tau_{1}^{+})(\tau_{2}^{+}-\tau_{i}^{+})}{\tau_{f}^{+}-\tau_{i}^{+}}\Theta( \tau_{1}^{+}-\tau_{2}^{+})+\frac{(\tau_{f}^{+}-\tau_{2}^{+})(\tau_{1}^{+}-\tau _{i}^{+})}{\tau_{f}^{+}-\tau_{i}^{+}}\Theta(\tau_{2}^{+}-\tau_{1}^{+})\right]. \tag{3.35}\]
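Equation (3.35) can be verified directly: for a single transverse component it is the Dirichlet Green's function of the operator \(i\langle p^{+}\rangle\,d^{2}/(d\tau_{1}^{+})^{2}\) appearing in Eq. (3.34). A minimal numerical check, with arbitrary illustrative values for the interval and for \(\langle p^{+}\rangle\), is:

```python
import numpy as np

# Numerical check of the Dirichlet Green's function in Eq. (3.35).
# Illustrative values only: tau_i = 0, tau_f = 1, <p^+> = 1.
tau_i, tau_f, p_plus = 0.0, 1.0, 1.0
N = 400                                  # interior grid points
h = (tau_f - tau_i) / (N + 1)
tau = tau_i + h * np.arange(1, N + 1)    # interior nodes

# Discrete operator i<p^+> d^2/dtau^2 with Dirichlet boundary conditions.
T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
A = 1j * p_plus * T / h**2

# The discrete delta is 1/h on the diagonal, so the Green's matrix is inv(A)/h.
G_num = np.linalg.inv(A) / h

# Analytic solution, Eq. (3.35), for one transverse component (delta^{ij} -> 1).
t1, t2 = np.meshgrid(tau, tau, indexing="ij")
G_ana = (1j / p_plus) * np.where(
    t1 > t2,
    (tau_f - t1) * (t2 - tau_i),
    (tau_f - t2) * (t1 - tau_i),
) / (tau_f - tau_i)

print(np.max(np.abs(G_num - G_ana)))     # ~0: agreement up to round-off
```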
On the other hand, the "free" generating functional is given by
\[Z[0]=\int_{{\bf v}(\tau_{i}^{+})=0}^{{\bf v}(\tau_{f}^{+})=0}\mathcal{D}^{2}{ \bf v}(\tau^{+})\ e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}\frac{(p^{+})}{2} \dot{\bf v}^{2}}=\frac{\langle p^{+}\rangle}{2\pi i\Delta\tau^{+}}. \tag{3.36}\]
Since the generating functional is Gaussian, it implies that any odd power of the Taylor expansion of \(F[{\bf v}]\) will vanish, and even powers higher than 2 can be computed just by means of the Wick theorem, i.e., by summing over all possible permutations of the products of the 2-point function given in Eq. (3.35). Moreover, because of the \(\dot{\bf v}\) term multiplying the transverse field in Eq. (3.27), apart from the 2-point function, we are also going to need its time derivatives whenever a transverse field is inserted in the expansion of the Wilson line. Thus, we define the four building blocks for the expansion over \({\bf v}\) as
follows:
\[G^{ij}(\tau_{1}^{+},\tau_{2}^{+}) \equiv\langle\mathbf{v}^{i}(\tau_{1}^{+})\mathbf{v}^{j}(\tau_{2}^{+ })\rangle, \tag{3.37}\] \[G^{ij}(\tau_{1}^{+},\tau_{2}^{+})^{\bullet} \equiv\langle\mathbf{v}^{i}(\tau_{1}^{+})\dot{\mathbf{v}}^{j}( \tau_{2}^{+})\rangle\] \[=i\frac{\delta^{ij}}{\langle p^{+}\rangle}\left[\frac{\tau_{f}^{ +}-\tau_{1}^{+}}{\tau_{f}^{+}-\tau_{i}^{+}}\Theta(\tau_{1}^{+}-\tau_{2}^{+})- \frac{\tau_{1}^{+}-\tau_{i}^{+}}{\tau_{f}^{+}-\tau_{i}^{+}}\Theta(\tau_{2}^{+} -\tau_{1}^{+})\right],\] (3.38) \[{}^{\bullet}G^{ij}(\tau_{1}^{+},\tau_{2}^{+}) \equiv\langle\dot{\mathbf{v}}^{i}(\tau_{1}^{+})\mathbf{v}^{j}( \tau_{2}^{+})\rangle=G^{ij}(\tau_{2}^{+},\tau_{1}^{+})^{\bullet},\] (3.39) \[{}^{\bullet}G^{ij}(\tau_{1}^{+},\tau_{2}^{+})^{\bullet} \equiv\langle\dot{\mathbf{v}}^{i}(\tau_{1}^{+})\dot{\mathbf{v}}^{j }(\tau_{2}^{+})\rangle=i\frac{\delta^{ij}}{\langle p^{+}\rangle}\left[\delta( \tau_{1}^{+}-\tau_{2}^{+})-\frac{1}{\tau_{f}^{+}-\tau_{i}^{+}}\right], \tag{3.40}\]
where we have defined
\[\langle F[\mathbf{v}]\rangle=Z[0]^{-1}\int_{\mathbf{v}(\tau_{i}^{+})=0}^{ \mathbf{v}(\tau_{f}^{+})=0}\mathcal{D}^{2}\mathbf{v}(\tau^{+})\ e^{i\int_{\tau_{i}^{+}}^{ \tau_{i}^{+}}\frac{(p^{+})}{2}\dot{\mathbf{v}}^{2}}F[\mathbf{v}]. \tag{3.41}\]
Finally, the last path integral that we have to deal with is the one over \(v^{-}(\tau^{+})\). This one can be easily solved by noting that
\[\int\mathcal{D}v^{-}(\tau^{+})e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^ {+}}d\tau^{+}\frac{v^{-}q^{+}}{\Delta\tau^{+}}}v^{-}(\tau_{1}^{+})\cdots v^{- }(\tau_{n}^{+})\] \[=(\Delta\tau^{+})^{n}\frac{\delta}{i\delta q^{+}(\tau_{1}^{+})} \cdots\frac{\delta}{i\delta q^{+}(\tau_{n}^{+})}\int\mathcal{D}v^{-}e^{i\int_ {\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+}\frac{v^{-}q^{+}}{\Delta\tau^{+}}}\] \[\equiv\frac{\delta}{i\delta q^{+}(\tau_{1}^{+})}\cdots\frac{ \delta}{i\delta q^{+}(\tau_{n}^{+})}\delta[q^{+}(\tau^{+})], \tag{3.42}\]
where the functional derivative of the Dirac delta, given in the last step, is defined in Eq. (C.13). It implies that the result of the integration over \(n\) insertions \(v^{-}(\tau_{1}^{+})\cdots v^{-}(\tau_{n}^{+})\) is to set \(q^{+}(\tau^{+})=0\) in between \(\tau^{+}\in(\tau_{m}^{+},\tau_{m+1}^{+})\) (\(m=0,\ldots,n\)). Hence, for a generic functional \(G[v^{-}]\) that can be Taylor expanded, we obtain the following result:
\[\int\mathcal{D}v^{-}e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^ {+}\frac{v^{-}q^{+}}{\Delta\tau^{+}}}G[v^{-}]=G\left[\frac{\delta}{i\delta q^ {+}}\right]\delta[q^{+}(\tau^{+})]. \tag{3.43}\]
Before deriving the final expression, let us examine the impact of the \(v^{-}\) insertions in Eqs. (3.24) and (3.29). We assume that there are \(n\) insertions of \(v^{-}(\tau^{+})\) at light-cone proper times \(\tau_{1}^{+},\ldots,\tau_{n}^{+}\). Due to Eq. (3.42), the longitudinal momentum transfer vanishes within the intervals \(\tau^{+}\in(\tau_{m}^{+},\tau_{m+1}^{+})\). We introduce the shorthand notation \(q_{m}^{+}\equiv q^{+}(\tau_{m}^{+})\), which allows us to express it as \(q^{+}(\tau^{+})=\sum_{m=1}^{n}q_{m}^{+}\delta([\tau^{+}-\tau_{m}^{+}]/[\tau_{f} ^{+}-\tau_{i}^{+}])\). Hence, the in-medium longitudinal momentum, Eq. (3.10), is given by
\[p_{\Delta}^{+}(\tau^{+})=P^{+}+\sum_{m=1}^{n}\frac{q_{m}^{+}}{2}\epsilon(\tau^{ +}-\tau_{m}^{+}), \tag{3.44}\]
i.e., it is constant within the region \(\tau^{+}\in(\tau^{+}_{m},\tau^{+}_{m+1})\) and at each longitudinal point \(\tau^{+}_{m}\) it receives a longitudinal "kick" \(q^{+}_{m}/2\) due to the interaction with the medium. Moreover, using the constraint \(\sum_{m=1}^{n}q^{+}_{m}=Q^{+}\), we see that the longitudinal momentum in the boundary of the medium is given by: \(p^{+}_{\Delta}(\tau^{+}_{i})=P^{+}-Q^{+}/2=p^{+}_{i}\) and \(p^{+}_{\Delta}(\tau^{+}_{f})=P^{+}+Q^{+}/2=p^{+}_{f}\).
Analogous to the discussion performed in Section 2.2.2, when the longitudinal momentum is constant in a region \((\tau^{+}_{m},\tau^{+}_{m+1})\), \(z^{+}(\tau^{+})\) is going to be linear in this region, as can be read from Eq. (2.24). Thus, as illustrated in Fig. 3, we can relate \(z^{+}_{m}\equiv z^{+}(\tau^{+}_{m})\) and \(\tau^{+}_{m}\) as follows:
\[\frac{z^{+}_{m+1}-z^{+}_{m}}{\tau^{+}_{m+1}-\tau^{+}_{m}}=\frac{p^{+}_{m}}{ \langle p^{+}\rangle},\qquad\tau^{+}_{m}=\tau^{+}_{i}+\sum_{l=0}^{m}\frac{ \langle p^{+}\rangle}{p^{+}_{l}}(z^{+}_{l+1}-z^{+}_{l}), \tag{45}\]
where we have introduced \(p^{+}_{m}\equiv p^{+}_{\Delta}(\tau^{+}_{m})\).
Finally, since \(z^{+}(\tau^{+})\) is linear in between two insertions, the transformation \(\tau^{+}\to z^{+}(\tau^{+})\) is a diffeomorphism and we can write the path-ordered exponential appearing in Eq. (3.29) in terms of the eikonal Wilson lines:
\[\mathcal{P}^{+}\exp\left\{\int_{\tau^{+}_{m}}^{\tau^{+}_{m+1}}d \tau^{+}\tilde{A}^{-}_{z^{+}}\dot{z}^{+}\right\}\] \[=\Theta(z^{+}_{m+1}-z^{+}_{m})\Theta(p^{+}_{m})U_{[z^{+}_{m+1},z^ {+}_{m}]}(\vec{B})+\Theta(z^{+}_{m}-z^{+}_{m+1})\Theta(-p^{+}_{m})U^{\dagger}_ {[z^{+}_{m},z^{+}_{m+1}]}(\vec{B}). \tag{46}\]
Figure 3: Dependence of the LC time \(z^{+}(\tau^{+})\) on the LC proper time \(\tau^{+}\) after \(n\) insertions in \(v^{-}(\tau^{+}_{m})\). Each insertion fixes the momentum to be constant within a region \([\tau^{+}_{m},\tau^{+}_{m+1}]\) so that, because of Eq. (2.24), the dependence of \(z^{+}\) on \(\tau^{+}\) is linear in this region and the slope is given by \(p^{+}_{m}/\langle p^{+}\rangle\).

Inserting Eq. (3.46) into Eq. (3.29), we obtain the expansion of the Wilson line in terms
of its eikonal counterparts:
\[\mathcal{U}_{[\tau^{+}_{f},\tau^{+}_{i}]}\big{[}z^{+},v^{-}+B^{-}, \mathbf{z}_{\text{cl}}+\mathbf{v}\big{]}\\ =\sum_{m=0}^{\infty}\Bigg{(}\prod_{n=m}^{1}\int_{\tau^{+}_{i}}^{ \tau^{+}_{n+1}}d\tau^{+}_{n}\Bigg{[}\Theta(z^{+}_{n+1}-z^{+}_{n})\Theta(p^{+}_{ n})U_{[z^{+}_{n+1},z^{+}_{n}]}(\vec{B})\\ +\Theta(z^{+}_{n}-z^{+}_{n+1})\Theta(-p^{+}_{n})U^{\dagger}_{[z^{ +}_{n+1},z^{+}_{n}]}(\vec{B})\Bigg{]}\mathcal{I}_{\tau^{+}_{n}}\left[v^{-}, \mathbf{v},\boldsymbol{\Delta}\right]\Bigg{)}U_{[z^{+}_{1},z^{+}_{i}]}(\vec{B}), \tag{3.47}\]
where \(\tau^{+}_{m+1}=\tau^{+}_{f}\) and \(z^{+}_{m+1}=z^{+}_{f}\).
All in all, inserting the results obtained in Eqs. (3.30), (3.33) and (3.43) into Eq. (3.24), and the expansion of the Wilson lines in terms of the eikonal ones given in Eq. (3.47), we obtain the final expression for the eikonal expansion:
\[\Delta_{\text{m}}(\vec{P},\vec{Q})=e^{i\hat{Q}^{-}B^{+}+i\hat{P}^ {-}\Delta^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\int^{q^{+}(\tau^{+}_{f}) =0}\mathcal{D}q^{+}\frac{\Theta(\langle p^{+}\rangle)}{2\langle p^{+}\rangle} \sum_{m=0}^{\infty}\Bigg{(}\prod_{n=m}^{1}\int_{\tau^{+}_{i}}^{\tau^{+}_{n+1} }d\tau^{+}_{n}\\ \times\Bigg{[}U_{[z^{+}_{n+1},z^{+}_{n}]}(\vec{B})\Theta(z^{+}_{n +1}-z^{+}_{n})\Theta(p^{+}_{n})+U^{\dagger}_{[z^{+}_{n+1},z^{+}_{n}]}(\vec{B}) \Theta(z^{+}_{n}-z^{+}_{n+1})\Theta(-p^{+}_{n})\Bigg{]}\\ \times\mathcal{I}_{\tau^{+}_{n}}\left[\frac{\delta}{i\delta q^{+} },\frac{\delta}{\delta\mathbf{J}},i\partial_{\mathbf{P}}\right]\Bigg{)}U_{[z^ {+}_{1},z^{+}_{i}]}(\vec{B})\ e^{-i\frac{\mathbf{P}^{2}+m^{2}_{n}}{2\langle p^{ +}\rangle}\Delta\tau^{+}}\frac{Z[\mathbf{J}]}{Z[0]}\Bigg{|}_{\mathbf{J}=0} \delta[q^{+}(\tau^{+})]. \tag{3.48}\]
Equation (3.48) is the most important result of this section and can be utilized to compute the eikonal corrections to the scalar propagator at a specific order in the boost parameter \(e^{-\omega}\). We note that although we explicitly write the path integral in \(q^{+}\), its solution is straightforward due to the functional derivatives of the Dirac delta. On the other hand, \(p^{+}_{m}\) and \(\langle p^{+}\rangle\) are given in terms of \(q^{+}\) by Eqs. (3.10) and (3.13), respectively. We do not explicitly write this dependence to keep the expression concise. Moreover, the relation of \(\tau^{+}_{n}\) and \(z^{+}_{n}\) between two longitudinal insertions is given by Eq. (3.45).
To summarize, the eikonal expansion given by Eq. (3.48) together with Eq. (3.27) has two different sources. On one hand, we have corrections due to the longitudinal momentum transfer from the medium to the projectile. This correction arises from the \(z^{-}\) dependence of the field, which is taken into account by an expansion in terms of \(\delta^{n}/\delta^{n}q^{+}\). As illustrated in Appendix D, where we computed the first-order correction to Eq. (3.48), the source of this correction comes from the fluctuations of \(\langle p^{+}\rangle\) around its classical value \(P^{+}\). One can observe from Eq. (3.13) that these fluctuations, which appear through the functional derivative of \(q^{+}\), are \(\mathcal{O}(1/[P^{+}(x^{+}-y^{+})])=\mathcal{O}(e^{-2\omega})\) and are therefore double-suppressed in the eikonal limit. Thus, in order to obtain a correction at order \((e^{-\omega})^{n}\), it is sufficient to expand Eq. (3.48) up to order \((\delta/\delta q^{+})^{n/2}\).
On the other hand, there are corrections due to a finite width of the medium, \(z^{+}_{f}-z^{+}_{i}\), as well as interactions with the transverse component of the field, which result in a transverse motion of the particle inside the medium. This makes the propagator non-diagonal in the transverse coordinates. In the functional approach, these corrections arise from fluctuations around the classical transverse trajectory of the particle and are accounted for by an
expansion in terms of \(\delta/\delta\mathbf{J}\) and \(\partial_{\mathbf{P}}\). In this case, since the only dependence of \(\mathbf{P}\) and \(\mathbf{J}\) comes from Gaussian functions whose coefficient is \(\mathcal{O}(1/P^{+})\), in order to obtain a correction of order \((e^{-\omega})^{n}\), it is sufficient to perform the transverse expansion of Eq. (3.48) up to order \(2n\).
Finally, we should also perform the expansion in \(B^{-}\). It is easy to read from the Fourier exponent of Eq. (3.48) that an expansion of the eikonal Wilson lines around \(B^{-}=0\) will result in an expansion in derivatives of \(\delta(Q^{+})\). As we explain in Eq. (D.25), each derivative of \(\delta(Q^{+})\) will lead to a suppression of \(\mathcal{O}(e^{-\omega})\) once we write the propagator in coordinate space. Therefore, in order to obtain a correction of \(\mathcal{O}((e^{-\omega})^{n})\) we should expand the eikonal Wilson lines up to order \((B^{-})^{n}\).
Although Eq. (3.48) provides a systematic expansion that can be expanded at any order in \(e^{-\omega}\), evaluating it at high orders can become cumbersome. In such cases, it can be advantageous to introduce "non-eikonal" Feynman rules to simplify the calculation in terms of a diagrammatic approach.
In order to illustrate this approach, in Appendix D, we have computed the scalar propagator at first order in the eikonal expansion. This result has also been independently computed recently in [16], providing a valuable cross-check for the approach presented in this manuscript. At first order, the result we obtain for the retarded scalar propagator in momentum space is as follows:
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =\delta(Q^{+})\frac{\Theta(P^{+})}{2P^{+}}\int_{\mathbf{B}}e^{-i \mathbf{B}\cdot\mathbf{Q}}\Bigg{\{}\left(1+i\frac{\mathbf{P}\cdot\mathbf{Q}}{ P^{+}}B^{+}\right)U_{[z^{+}_{f},z^{+}_{i}]}(0,\mathbf{B})\] \[-i\delta^{\prime}(Q^{+})\frac{\Theta(P^{+})}{2P^{+}}\int_{\mathbf{ B}}e^{-i\mathbf{B}\cdot\mathbf{Q}}\partial^{+}U_{[z^{+}_{f},z^{+}_{i}]}(0, \mathbf{B}), \tag{3.49}\]
where \(D_{\mathbf{B}^{i}}\) is the transverse covariant derivative and is defined in Eq. (D.21). We note that in the eikonal limit, \(P^{+}\rightarrow\infty\), we recover the result given in Eq. (3.19).
## 4 Conclusions
The investigation of sub eikonal corrections to high-energy scattering amplitudes has gained significant importance, particularly in light of the kinematics that will be probed at the upcoming Electron-Ion Collider. These corrections provide a unique opportunity to explore novel effects that remain concealed within the eikonal limit. In this manuscript, we have comprehensively reviewed the scalar propagator in the presence of a non-Abelian classical gauge field, generated by a nuclear medium, utilizing the worldline formalism. The path integral was expressed in light-cone coordinates by integrating the Schwinger proper time, as shown in Eqs. (2.27) and (2.29). This representation becomes more useful at high energy since the dependence on the longitudinal momentum of the projectile is explicit, facilitating the identification of leading terms in the propagator.
The interpretation of the propagator in light-cone coordinates reveals that the projectile exchanges both longitudinal and transverse momentum while interacting with the
medium. The gauge field of the medium encompasses components \(A^{+}\) and \({\bf A}\), in contrast to the eikonal case. The longitudinal momentum transfer with the medium arises from the \(z^{-}\) dependence of the gauge field. Interestingly, by neglecting this dependence and considering only the \(A^{-}\) component of the field, we recover the effective gluon propagator used in the derivation of the BDMPS-Z spectrum.
While the results presented in this work are formal, they open the door for further investigations and practical applications. Once a model for the gauge field is established, the path integral can be solved for \(z^{-}\) and \(p^{+}\), enabling the computation of (i) corrections to the BDMPS-Z spectrum by incorporating the longitudinal momentum transfer to the effective gluon propagator, and (ii) the computation of collisional energy loss due to the longitudinal interaction of the projectile parton with the medium.
In Section 3, we have developed an expansion around the saddle point solution of the path integral, which corresponds to a power expansion in \(e^{-\omega}\), where \(\omega\) represents the rapidity of the projectile. While the first order correction has been recently computed in [16; 25], higher-order non-eikonal corrections are crucial for processes where the collision energy is moderate.
As the results presented in this manuscript are derived for a scalar particle and a generic gauge field, there are several avenues for improvement and extension. First, when the eikonal approximation is relaxed, the spin of partons becomes relevant. To account for this, a complete analysis of scattering amplitudes with quarks should be conducted, replacing the scalar propagator with the quark propagator. In the worldline formalism, this can be achieved by incorporating the spin factor [60; 61]. Moreover, the study of the non-eikonal gluon propagator, which is relevant for numerous observables, is another important extension. Hence, we plan to extend our analysis in order to account for both cases.
Next, this work focuses on the interaction of a quantum scalar particle with a classical field. To fully explore non-eikonal effects, we need to include a quark interacting with a classical gluon field, as well as a gluon interacting with a classical quark field. While some progress has been made in this direction at the next-to-eikonal accuracy in [46], it is worth investigating the possibility of obtaining an exponentiated path integral result when the background field consists of gluons and quarks.
Furthermore, the result presented here is only valid when the particle propagates in the \(z^{+}\) direction, i.e., \(x^{+}\neq y^{+}\). For certain applications, it may be necessary to include the term proportional to \(\delta(x^{+}-y^{+})\). This can be explored by studying Eq. (2.21) before performing the integration over the Schwinger proper time.
In conclusion, this manuscript lays the groundwork for studying non-eikonal corrections to scattering amplitudes in the presence of a background non-Abelian field. The developments presented here open up opportunities for future research and can be extended to a variety of phenomenological applications.
## Acknowledgments
We are very grateful to Xabier Feal, Andrey Tarasov, and Raju Venugopalan for reading this manuscript and their valuable comments. We thank Tolga Altinoluk and Guillaume Beuf for insightful discussions that have contributed to the development of this manuscript. Special thanks go to Nestor Armesto and Fabio Dominguez for their invaluable revision and constructive feedback on the manuscript, as well as for their inspiring and fruitful discussions. This work has been supported by Conselleria de Cultura, Educacion e Universidade of Xunta de Galicia (Spain) under the grant ED481B-2022-050. The author has received financial support from Xunta de Galicia (Centro singular de investigacion de Galicia accreditation 2019-2022, ref. ED421G-2019/05), by European Union ERDF, by the "Maria de Maeztu" Units of Excellence program MDM2016-0692, and by the Spanish Research State Agency under project PID2020-119632GB-I00. This work has been performed in the framework of the European Research Council project ERC-2018-ADG-835105 YoctoLHC and the MSCA RISE 823947 "Heavy ion collisions: collectivity and precision in saturation physics" (HIEIC), and has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 824093.
## Appendix A The scalar propagator in LC coordinates in the discrete limit
In this section we solve explicitly the \(z^{+}\) and \(p^{-}\) path integrals appearing in Eq. (2.17). In the continuous limit, Eq. (2.17) can be written as follows:
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dT\lim_{N\to\infty}\int\prod_{l=1 }^{N-1}d^{4}z_{l}\prod_{k=1}^{N}\frac{d^{4}p_{k}}{(2\pi)^{4}}\\ \times\mathcal{P}\exp\Bigg{\{}i\sum_{n=1}^{N}\Bigg{[}p_{n}^{-}(2 \frac{T}{N}p_{n}^{+}-(z_{n}^{+}-z_{n-1}^{+}))-\frac{T}{N}(\mathbf{p}_{n}^{2} +m_{\epsilon}^{2})\\ -\vec{p}_{n}\cdot(\vec{z}_{n}-\vec{z}_{n-1})-gA^{-}(z_{n}^{+}, \vec{z}_{n})(z_{n}^{+}-z_{n-1}^{+})-g\vec{A}(z_{n}^{+},\vec{z}_{n})\cdot(\vec{ z}_{n}-\vec{z}_{n-1})\Bigg{]}\Bigg{\}}. \tag{18}\]
The \(p^{-}\) and \(z^{+}\) dependent part of this integral can therefore be solved as follows:
\[\int\prod_{l=1}^{N-1}dz_{l}^{+}\prod_{n=1}^{N}\frac{dp_{n}^{-}}{2\pi }\mathcal{P}\exp\Bigg{\{}i\sum_{n=1}^{N}\left[p_{n}^{-}\left(\frac{2Tp_{n}^{+}}{ N}-z_{n}^{+}+z_{n-1}^{+}\right)-gA^{-}(z_{n}^{+},\vec{z}_{n})(z_{n}^{+}-z_{n-1}^{ +})\right.\] \[\left.\hskip 113.811024pt-g\vec{A}(z_{n}^{+},\vec{z}_{n})\cdot( \vec{z}_{n}-\vec{z}_{n-1})\right]\Bigg{\}}\] \[=\int\prod_{l=1}^{N-1}dz_{l}^{+}\prod_{k=1}^{N}\delta\left(\frac{ 2Tp_{k}^{+}}{N}-z_{k}^{+}+z_{k-1}^{+}\right)\mathcal{P}\exp\Bigg{\{}-ig\sum_{n =1}^{N}\left[A^{-}(z_{n}^{+},\vec{z}_{n})(z_{n}^{+}-z_{n-1}^{+})\right.\] \[\left.\hskip 113.811024pt+\vec{A}(z_{n}^{+},\vec{z}_{n})\cdot( \vec{z}_{n}-\vec{z}_{n-1})\right]\Bigg{\}}\] \[=\delta\left(2\langle p^{+}\rangle T-(x^{+}-y^{+})\right)\] \[\times\mathcal{P}\exp\left\{-ig\sum_{n=1}^{N}\left[A^{-}(z_{n}^{+ },\vec{z}_{n})\frac{2Tp_{n}^{+}}{N}+\vec{A}(z_{n}^{+},\vec{z}_{n})\cdot(\vec{z} _{n}-\vec{z}_{n-1})\right]\right\}, \tag{110}\]
where we have defined
\[z_{n}^{+}\equiv y^{+}+2T\sum_{l=1}^{n}\frac{p_{l}^{+}}{N},\qquad\langle p^{+} \rangle=\sum_{n=1}^{N}\frac{p_{n}^{+}}{N}. \tag{111}\]
Plugging this result back into the discrete representation above, we obtain the discrete representation of Eq. (2.21):
\[\Delta_{F}(x,y)=\int_{0}^{\infty}dT\lim_{N\to\infty}\int\prod_{l=1 }^{N-1}d^{3}\vec{z}_{l}\prod_{k=1}^{N}\frac{d^{3}\vec{p}_{k}}{(2\pi)^{3}} \delta\left(2\langle p^{+}\rangle T-(x^{+}-y^{+})\right)\] \[\times\mathcal{P}\exp\Bigg{\{}-i\sum_{n=1}^{N}\left[\frac{\mathbf{ p}_{n}^{2}+m_{e}^{2}}{2p_{n}^{+}}\frac{2p_{n}^{+}T}{N}+\vec{p}_{n}\cdot(\vec{z}_{n}- \vec{z}_{n-1})\right.\] \[\left.\hskip 113.811024pt+gA^{-}(z_{n}^{+},\vec{z}_{n})\frac{2p_{n} ^{+}T}{N}+\vec{A}(z_{n}^{+},\vec{z}_{n})\cdot(\vec{z}_{n}-\vec{z}_{n-1}) \right]\Bigg{\}}. \tag{112}\]
## Appendix B Useful path integrals
In this section, we compute the path integrals that are used throughout this manuscript. The approach for solving path integrals adopted in this work is based on the so-called time-slicing regularization [66]. Path integrals are defined through their discrete representation and regulated by the number of slices \(N\). We first solve them for fixed \(N\), and once we obtain the final result, we take the \(N\to\infty\) limit.
We consider the D-dimensional Gaussian path integral with a "current term":
\[\int_{z(y^{+})=y}^{z(x^{+})=x}\mathcal{D}^{D}z(\tau^{+})F[z]\int\mathcal{D}^{D}p(\tau^{+})e^{-i\int_{y^{+}}^{x^{+}}d\tau^{+}\left(ap^{2}+p\cdot\dot{z}\right)}, \tag{B.1}\]
where \(F[z]\) is a generic functional of \(z(\tau^{+})\) and \(a\) is a real constant. In the discrete representation we can write Eq. (B.1) as follows:
\[\lim_{N\to\infty}\int\prod_{n=1}^{N-1}d^{D}z_{n}\prod_{k=1}^{N}\frac {d^{D}p}{(2\pi)^{D}}F[z_{i}]\exp\Biggl{\{}-i\sum_{n=1}^{N}\frac{x^{+}-y^{+}}{N} \left[ap_{n}^{2}+p_{n}\cdot(z_{n}-z_{n-1})\right]\Biggr{\}}\] \[=\lim_{N\to\infty}\int\prod_{n=1}^{N-1}d^{D}z_{n}F[\{z_{i}\}] \left(\frac{N}{4\pi ia(x^{+}-y^{+})}\right)^{ND/2}\exp\Biggl{\{}i\sum_{n=1}^{N }\frac{N}{4a(x^{+}-y^{+})}(z_{n}-z_{n-1})^{2}\Biggr{\}}, \tag{108}\]
where in the last step we have solved the analytical continuation of the Gaussian integral in \(p^{\mu}\) and defined \(z_{0}=x\) and \(z_{N}=y\). Hence, in the continuous limit, we obtain the solution of Eq. (B.1):
\[\int_{z(y^{+})=y}^{z(x^{+})=x}\mathcal{D}^{D}z(\tau^{+})\int\mathcal{D}^{D}p(\tau^{+})e^{-i\int_{y^{+}}^{x^{+}}d\tau^{+}\left(ap^{2}+p\cdot\dot{z}\right)}F[z]\] \[=\int_{z(y^{+})=y}^{z(x^{+})=x}\mathcal{D}^{D}z(\tau^{+})\exp\Bigg{\{}i\int_{y^{+}}^{x^{+}}d\tau^{+}\frac{\dot{z}^{2}}{4a}\Bigg{\}}F[z]. \tag{B.3}\]
In Eq. (B.3), we have defined the path integral measure as
\[\mathcal{D}^{D}z(\tau^{+})\equiv\prod_{n=1}^{N-1}d^{D}z_{n}\left(\frac{N}{4\pi ia(x^{+}-y^{+})}\right)^{ND/2}, \tag{B.4}\]
where this factor is introduced in the definition of the measure, after integration in the momentum coordinates, to regulate the Gaussian path integral.
In the case in which \(F[z]=1\), \(D=2\) and \(a=1/(2p^{+})\), we obtain the propagator of a free particle in a two-dimensional space. The result is well known [19], and reads
\[\int_{\mathbf{z}(y^{+})=y}^{\mathbf{z}(x^{+})=x}\mathcal{D}^{2}\mathbf{z}(\tau^{+})\exp\Bigg{\{}i\int_{y^{+}}^{x^{+}}d\tau^{+}p^{+}\frac{\dot{\mathbf{z}}^{2}}{2}\Bigg{\}}=\frac{p^{+}}{2\pi i(x^{+}-y^{+})}e^{i\frac{p^{+}}{2(x^{+}-y^{+})}(\mathbf{x}-\mathbf{y})^{2}}. \tag{B.5}\]
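As a cross-check of Eq. (B.5) (a small symbolic computation added here for illustration), one can verify that its right-hand side satisfies the free two-dimensional Schrodinger equation \(i\,\partial_{t}K=-\frac{1}{2p^{+}}\boldsymbol{\nabla}_{\mathbf{x}}^{2}K\), with \(t=x^{+}-y^{+}\):

```python
import sympy as sp

# Symbolic check that the free transverse propagator of Eq. (B.5) solves
#   i dK/dt = -(1/(2 p^+)) (d^2/dx1^2 + d^2/dx2^2) K,   with t = x^+ - y^+.
p, t = sp.symbols('p_plus t', positive=True)
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
K = p / (2 * sp.pi * sp.I * t) * sp.exp(sp.I * p * ((x1 - y1)**2 + (x2 - y2)**2) / (2 * t))
lhs = sp.I * sp.diff(K, t)
rhs = -(sp.diff(K, x1, 2) + sp.diff(K, x2, 2)) / (2 * p)
print(sp.simplify((lhs - rhs) / K))   # prints 0
```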
Now let us solve the following exponential path-integral:
\[\int_{z^{-}(y^{+})=y^{-}}^{z^{-}(x^{+})=x^{-}}\mathcal{D}z^{-}(\tau^{+})e^{i\int_{y^{+}}^{x^{+}}d\tau^{+}z^{-}\dot{p}^{+}}. \tag{B.6}\]
In the discrete representation this integral is given by
\[\lim_{N\to\infty}\int\prod_{k=1}^{N-1}dz_{k}^{-}\exp\Biggl{\{}i\sum_{n=1}^{N-1}z_{n}^{-}(p_{n+1}^{+}-p_{n}^{+})\Biggr{\}} =\lim_{N\to\infty}\prod_{n=1}^{N-1}\delta(p_{n+1}^{+}-p_{n}^{+})\] \[=\lim_{N\to\infty}\prod_{n=1}^{N-1}\delta(p_{n}^{+}-p_{N}^{+}). \tag{B.7}\]
Hence, the solution of Eq. (B.6) is given by
\[\int_{z^{-}(y^{+})=y^{-}}^{z^{-}(x^{+})=x^{-}}\mathcal{D}z^{-}(\tau^{+})e^{i\int_{y^{+}}^{x^{+}}d\tau^{+}z^{-}\dot{p}^{+}}=\delta\big{[}p^{+}(\tau^{+})-p^{+}(x^{+})\big{]}, \tag{B.8}\]
where we have defined the functional Dirac delta as
\[\delta\big{[}p(\tau^{+})-k^{+}\big{]}\equiv\lim_{N\to\infty}\prod_{n=1}^{N-1} \delta(p_{n}^{+}-k^{+}). \tag{103}\]
It is easy to realize that the momentum path-integral of any functional \(F[p^{+}(\tau^{+})]\) multiplied by the functional Dirac delta is given by:
\[\int\mathcal{D}p^{+}(\tau^{+})F[p^{+}(\tau^{+})]\delta\big{[}p(\tau^{+})-p_{f} ^{+}\big{]}=\int\frac{dp_{f}^{+}}{2\pi}F[p_{f}^{+}], \tag{104}\]
where we have introduced \(p_{f}^{+}\equiv p^{+}(x^{+})\).
## Appendix C Derivation of the propagator written in terms of the local momentum transfer
In this section, we derive Eq. (11) starting from Eq. (100). The discrete representation of Eq. (100) is given by
\[\Delta_{\rm m}(\vec{P},\vec{Q})=\int_{\vec{B},\vec{\Delta}}e^{i \Delta\cdot\hat{P}+iB\cdot\hat{Q}}\lim_{N_{\Delta}\to\infty}\int\prod_{k=1}^{N _{\Delta}}\frac{dp_{k}^{+}}{2\pi}\frac{\Theta(\langle p^{+}\rangle)}{2\langle p ^{+}\rangle}\int\prod_{l=1}^{N_{\Delta}-1}d^{3}\vec{z}_{l}\\ \times\mathcal{P}_{+}\Bigg{\{}i\sum_{n=1}^{N_{\Delta}}\Bigg{[}- \frac{m_{\epsilon}^{2}}{2\langle p^{+}\rangle}\frac{\tau_{f}^{+}-\tau_{i}^{+ }}{N_{\Delta}}+\frac{N_{\Delta}}{\tau_{f}^{+}-\tau_{i}^{+}}\frac{\langle p^{+ }\rangle}{2}(\mathbf{z}_{n}-\mathbf{z}_{n-1})^{2}-p_{n}^{+}(z_{n}^{-}-z_{n-1} ^{-})\\ -gA^{-}(z_{n}^{+},\vec{z}_{n})\frac{p_{n}^{+}}{\langle p^{+} \rangle}\frac{\tau_{f}^{+}-\tau_{i}^{+}}{N_{\Delta}}+g\mathbf{A}(z_{n}^{+}, \vec{z}_{n})\cdot(\mathbf{z}_{n}-\mathbf{z}_{n-1})\Bigg{]}\Bigg{\}}, \tag{105}\]
where \(\vec{z}_{0}=\vec{B}-\vec{\Delta}/2\) and \(\vec{z}_{N_{\Delta}}=\vec{B}+\vec{\Delta}/2\). Here, \(N_{\Delta}\) is the number of discretization steps inside the medium and is written in terms of the total number of steps as follows:
\[N_{\Delta}=\frac{\tau_{f}^{+}-\tau_{i}^{+}}{x^{+}-y^{+}}N. \tag{106}\]
On the other hand, the discrete representation of the average longitudinal momentum is given by
\[\langle p^{+}\rangle=\frac{\tau_{i}^{+}-y^{+}}{x^{+}-y^{+}}p_{i}^{+}+\frac{\tau_{f}^{+}-\tau_{i}^{+}}{x^{+}-y^{+}}\sum_{n=1}^{N_{\Delta}}\frac{p_{n}^{+}}{N_{\Delta}}+\frac{x^{+}-\tau_{f}^{+}}{x^{+}-y^{+}}p_{f}^{+}\\ =\frac{(x^{+}-y^{+})-(\tau_{f}^{+}-\tau_{i}^{+})}{x^{+}-y^{+}}P^{+}+\frac{(x^{+}+y^{+})-(\tau_{f}^{+}+\tau_{i}^{+})}{x^{+}-y^{+}}\frac{Q^{+}}{2}+\frac{\tau_{f}^{+}-\tau_{i}^{+}}{x^{+}-y^{+}}\sum_{n=1}^{N_{\Delta}}\frac{p_{n}^{+}}{N_{\Delta}}. \tag{107}\]
We can solve the \(B^{-}\) and \(\Delta^{-}\) integrals by writing the \(z^{-}\) term of the kinetic part of the Lagrangian as
\[-i\sum_{n=1}^{N_{\Delta}}p_{n}^{+}(z_{n}^{-}-z_{n-1}^{-})=-i\frac{p_ {N_{\Delta}}^{+}+p_{1}^{+}}{2}\Delta^{-}-i(p_{N_{\Delta}}^{+}-p_{1}^{+})B^{-}+i \sum_{n=1}^{N_{\Delta}-1}z_{n}^{-}(p_{n+1}^{+}-p_{n}^{+})\] \[=-i\Delta^{-}\left(p_{1}^{+}+\frac{1}{2}\sum_{n=1}^{N_{\Delta}-1} \frac{q_{n}^{+}}{N_{\Delta}}\right)-iB^{-}\sum_{n=1}^{N_{\Delta}-1}\frac{q_{n} ^{+}}{N_{\Delta}}+i\sum_{n=1}^{N_{\Delta}-1}z_{n}^{-}\frac{q_{n}^{+}}{N_{\Delta }}, \tag{100}\]
where in the last step we have performed the change of variables
\[\frac{q_{n}^{+}}{N_{\Delta}}=(p_{n+1}^{+}-p_{n}^{+}),\qquad p_{n}^{+}=p_{1}^{+}+\sum_{l=1}^{n-1}\frac{q_{l}^{+}}{N_{\Delta}}. \tag{101}\]
Hence, assuming that \(A^{\mu}(z_{N_{\Delta}}^{+},\vec{z}_{N_{\Delta}})=0\), performing the change of variables \(p_{n}^{+}\to q_{n-1}^{+}\), and solving the \(B^{-}\) and \(\Delta^{-}\) integrals, we can write Eq. (100) as follows:
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =e^{i\hat{Q}^{-}B^{+}+i\hat{P}^{-}\Delta^{+}}\int_{{\bf B},{\bf \Delta}}e^{-i{\bf B}\cdot{\bf Q}-i{\bf\Delta}\cdot{\bf P}}\lim_{N_{\Delta} \rightarrow\infty}\int\prod_{k=1}^{N_{\Delta}-1}\frac{dq_{k}^{+}}{2\pi N_{ \Delta}}\int\frac{dp_{1}^{+}}{2\pi}\] \[\times\delta\left(P^{+}-p_{1}^{+}-\frac{\langle q^{+}\rangle}{2} \right)\delta\left(Q^{+}-\langle q^{+}\rangle\right)\frac{\Theta(\langle p^{+} \rangle)}{2\langle p^{+}\rangle}\int\prod_{l=1}^{N_{\Delta}-1}d^{3}\vec{z}_{l}\] \[\times{\cal P}_{+}\Bigg{\{}i\sum_{n=1}^{N_{\Delta}}\Bigg{[}- \frac{m_{e}^{2}}{2\langle p^{+}\rangle}\frac{\tau_{f}^{+}-\tau_{i}^{+}}{N_{ \Delta}}+\frac{N_{\Delta}}{\tau_{f}^{+}-\tau_{i}^{+}}\frac{\langle p^{+} \rangle}{2}({\bf z}_{n}-{\bf z}_{n-1})^{2}+z_{n}^{-}\frac{q_{n}^{+}}{N_{\Delta}}\] \[-gA^{-}(z_{n}^{+},\vec{z}_{n})\frac{p_{1}^{+}+\sum_{l=1}^{n-1} \frac{q_{l}^{+}}{N_{\Delta}}}{\langle p^{+}\rangle}\frac{\tau_{f}^{+}-\tau_{i }^{+}}{N_{\Delta}}+g{\bf A}(z_{n}^{+},\vec{z}_{n})\cdot({\bf z}_{n}-{\bf z}_{n -1})\Bigg{]}\Bigg{\}}, \tag{102}\]
where \(q_{N_{\Delta}}^{+}=0\).
The Dirac delta in the \(p_{1}^{+}\) integral fixes the in-medium longitudinal momentum to:
\[p_{n}^{+}=P^{+}+\frac{1}{2}\sum_{l=1}^{n-1}\frac{q_{l}^{+}}{N_{\Delta}}-\frac{1 }{2}\sum_{l=n}^{N_{\Delta}-1}\frac{q_{l}^{+}}{N_{\Delta}}. \tag{103}\]
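As a quick aside, this discrete identity can be verified numerically. The short sketch below (with arbitrary illustrative values of \(P^{+}\), \(N_{\Delta}\) and the \(q_{n}^{+}\)) builds \(p_{n}^{+}\) once from \(p_{1}^{+}\) and the forward sums of \(q_{l}^{+}/N_{\Delta}\), and once from the symmetric expression above, and confirms that the two coincide.

```python
import numpy as np

# Check that p_n^+ built from p_1^+ (fixed by the Dirac delta) and forward sums of q_l^+/N_Delta
# coincides with the symmetric expression above. All values are illustrative.
rng = np.random.default_rng(0)
N_D, P_plus = 12, 5.0
q = np.append(rng.normal(size=N_D - 1), 0.0)       # q_1^+, ..., q_{N_D-1}^+, with q_{N_D}^+ = 0

p1 = P_plus - 0.5 * np.sum(q[:N_D - 1]) / N_D      # from delta(P^+ - p_1^+ - <q^+>/2)
p_forward = np.array([p1 + np.sum(q[:n - 1]) / N_D for n in range(1, N_D + 1)])

p_symmetric = np.array([P_plus + 0.5 * np.sum(q[:n - 1]) / N_D
                        - 0.5 * np.sum(q[n - 1:N_D - 1]) / N_D
                        for n in range(1, N_D + 1)])

print(np.allclose(p_forward, p_symmetric))         # True
```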
Thus, by solving the \(p_{1}^{+}\) integral in Eq. (102), we obtain the discrete representation of the in-medium propagator in terms of \(q_{n}^{+}\):
\[\Delta_{\rm m}(\vec{P},\vec{Q})=e^{i\hat{Q}^{-}B^{+}+i\hat{P}^{-} \Delta^{+}}\int_{{\bf B},{\bf\Delta}}e^{-i{\bf B}\cdot{\bf Q}-i{\bf\Delta} \cdot{\bf P}}\lim_{N_{\Delta}\rightarrow\infty}\int\prod_{k=1}^{N_{\Delta}-1} \frac{dq_{k}^{+}}{2\pi N_{\Delta}}\delta\left(Q^{+}-\langle q^{+}\rangle\right)\] \[\frac{\Theta(\langle p^{+}\rangle)}{2\langle p^{+}\rangle}\int \prod_{l=1}^{N_{\Delta}-1}d^{3}\vec{z}_{l}\ {\cal P}_{+}\Bigg{\{}i\sum_{n=1}^{N_{\Delta}}\Bigg{[}- \frac{m_{\epsilon}^{2}}{2\langle p^{+}\rangle}\frac{\tau_{f}^{+}-\tau_{i}^{+}} {N_{\Delta}}+\frac{N_{\Delta}}{\tau_{f}^{+}-\tau_{i}^{+}}\frac{\langle p^{+} \rangle}{2}({\bf z}_{n}-{\bf z}_{n-1})^{2}+z_{n}^{-}\frac{q_{n}^{+}}{N_{\Delta}}\] \[-gA^{-}(z_{n}^{+},\vec{z}_{n})\frac{1}{\langle p^{+}\rangle} \left(P^{+}+\frac{1}{2}\sum_{l=1}^{n-1}\frac{q_{l}^{+}}{N_{\Delta}}-\frac{1}{2} \sum_{l=n}^{N_{\Delta}-1}\frac{q_{l}^{+}}{N_{\Delta}}\right)\frac{\tau_{f}^{+}- \tau_{i}^{+}}{N_{\Delta}}\] \[+g{\bf A}(z_{n}^{+},\vec{z}_{n})\cdot({\bf z}_{n}-{\bf z}_{n-1}) \Bigg{]}\Bigg{\}}, \tag{104}\]
where the average momentum is given by
\[\langle p^{+}\rangle=P^{+}+\frac{(x^{+}+y^{+})-(\tau_{f}^{+}+\tau_{i }^{+})}{x^{+}-y^{+}}\frac{Q^{+}}{2}+\frac{1}{2}\frac{\tau_{f}^{+}-\tau_{i}^{+}} {x^{+}-y^{+}}\sum_{n=1}^{N_{\Delta}}\left(\sum_{l=1}^{n-1}\frac{q_{l}^{+}}{N_{ \Delta}}-\sum_{l=n}^{N_{\Delta}-1}\frac{q_{l}^{+}}{N_{\Delta}}\right). \tag{111}\]
Finally, taking the continuous limit of Eqs. (110) to (111), we obtain the results of Eqs. (10), (11) and (13), respectively.
To finalize this section, we discuss the regularization of the \(q^{+}\) path integral. We can read from Eq. (110) that the \(q^{+}\) path integral measure is regulated as
\[\mathcal{D}q^{+}=\lim_{N_{\Delta}\rightarrow\infty}\prod_{n=1}^{ N_{\Delta}-1}\frac{dq_{n}^{+}}{2\pi N_{\Delta}}. \tag{112}\]
This implies that the exponential path integral over the \(v^{-}\) variable leads to the following result:
\[\int\mathcal{D}v^{-}e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau^{+}v^{-}\frac{q^{+}}{\Delta\tau^{+}}} =\lim_{N_{\Delta}\rightarrow\infty}\int\prod_{n=1}^{N_{\Delta}-1}dv_{n}^{-}\exp\left\{i\sum_{n=1}^{N_{\Delta}-1}v_{n}^{-}\frac{q_{n}^{+}}{N_{\Delta}}\right\}\] \[=\lim_{N_{\Delta}\rightarrow\infty}\prod_{n=1}^{N_{\Delta}-1}\delta\left(\frac{q_{n}^{+}}{N_{\Delta}}\right)\equiv\delta[q^{+}(\tau^{+})], \tag{113}\]
that is, the continuous limit of the functional Dirac delta is regulated by a factor \(N_{\Delta}^{N_{\Delta}-1}\) which cancels with the regulator of the \(q^{+}\) path integral measure:
\[\int^{q^{+}(\tau_{f}^{+})=0}\mathcal{D}q^{+}(\tau^{+})F[q^{+}] \delta[q^{+}(\tau^{+})] =\lim_{N_{\Delta}\rightarrow\infty}\int\prod_{n=1}^{N_{\Delta}-1} \frac{dq_{n}^{+}}{2\pi N_{\Delta}}\delta\left(\frac{q_{n}^{+}}{N_{\Delta}} \right)F(\{q_{i}^{+}\})\] \[=\lim_{N_{\Delta}\rightarrow\infty}\int\prod_{n=1}^{N_{\Delta}-1} \frac{dq_{n}^{+}}{2\pi}\delta(q_{n}^{+})F(\{q_{i}^{+}\})=F[0]. \tag{114}\]
Moreover, the functional derivative at a point \(\tau_{1}^{+}\in[\tau_{i}^{+},\tau_{f}^{+}]\) of the Dirac delta is defined in the discrete representation as follows:
\[\frac{\delta}{\delta q^{+}(\tau_{1}^{+})}\delta[q^{+}(\tau^{+})] =\frac{\partial}{\partial q_{n_{1}}^{+}}\lim_{N_{\Delta}\to \infty}\prod_{n=1}^{N_{\Delta}-1}\delta\left(\frac{q_{n}^{+}}{N_{\Delta}} \right), \tag{115}\]
where
\[n_{1}=\frac{\tau_{1}^{+}-\tau_{i}^{+}}{\tau_{f}^{+}-\tau_{i}^{+} }N_{\Delta}. \tag{116}\]
## Appendix D The eikonal expansion at first order
In this section, we compute the first-order correction to the eikonal approximation using the eikonal expansion given in Eq. (3.48). The object analyzed in this section has recently
been computed in [16] using a different formalism and will serve as a double-check for our approach.
Before starting with our analysis, let us write schematically:
\[\Delta_{\rm m}(\vec{P},\vec{Q})=\Delta_{\rm m}^{(0)}(\vec{P},\vec{Q})+\Delta_{ \rm m}^{(1),-}(\vec{P},\vec{Q})+\Delta_{\rm m}^{(1),\perp}(\vec{P},\vec{Q})+ \mathcal{O}\left(\frac{1}{(P^{+})^{2}}\right), \tag{114}\]
where \(\Delta_{\rm m}^{(0)}\) is the zeroth order, i.e., \(m=0\), term in Eq. (3.48). \(\Delta_{\rm m}^{(1),-}\) is the leading-order correction due to the \(v^{-}\) expansion and gives a contribution of order \(1/(P^{+})^{2}\), which is subleading in our analysis. However, we study this term for illustrative purposes. On the other hand, the term \(\Delta_{\rm m}^{(1),\perp}\) gives the leading order in \(1/P^{+}\) in the eikonal expansion by computing corrections to the \({\bf z}\) classical trajectories and is the result of expanding Eq. (3.48) up to order \(m=2\).
One can read from Eq. (3.48) that the zeroth order of the expansion is simply given by
\[\Delta_{\rm m}^{(0)}(\vec{P},\vec{Q}) =e^{i\vec{Q}^{-}B^{+}+i\dot{P}^{-}\Delta^{+}-i\frac{{\bf P}^{2}+m_ {2}^{2}}{2P^{+}}\Delta^{+}}\frac{\Theta(P^{+})}{2P^{+}}\int_{\vec{B}}e^{i \vec{B}\cdot\vec{Q}}\ U_{[z_{f}^{+},z_{i}^{+}]}(\vec{B})\] \[=\frac{\Theta(P^{+})}{2P^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q }}\ \left(1+i\frac{{\bf P}\cdot{\bf Q}}{P^{+}}B^{+}+i\frac{{\bf Q}^{2}}{8P^{+}} \Delta^{+}\right)U_{[z_{f}^{+},z_{i}^{+}]}(\vec{B}), \tag{115}\]
which is nothing but the result obtained in Eq. (3.19), including the phases and with \(B^{-}\neq 0\).
### Correction to the \(z^{-}\) trajectory
We now analyze the corrections that arise due to the expansion around the fluctuations \(v^{-}\). As we have pointed out in Section 3, this correction will result in a single derivative of the Dirac delta in the longitudinal momentum transfer \(q_{1}^{+}\equiv q^{+}(\tau_{1}^{+})\). Thus, as we will see below, it is analogous to an expansion at leading order in \(q_{1}^{+}/P^{+}\), since higher order corrections will be suppressed by the delta function. At this order, the insertion given in Eq. (3.27) can be written as
\[\mathcal{I}_{r^{+}}\left[v^{-},0,0\right]=v^{-}\partial^{+}\tilde{A}^{-}(z^{+ },0,{\bf B})\dot{z}^{+}. \tag{116}\]
On the other hand, we only need the first term of the summation in Eq. (3.48), so that the term in the in-medium propagator that contributes to the leading order correction is12
Footnote 12: From now on until the end of the calculation, we do not write the dependence of the eikonal Wilson line on \(\vec{B}\) explicitly, in order to make the notation tidier.
\[\Delta_{\rm m}^{(1),-}(\vec{P},\vec{Q}) =e^{i\vec{Q}^{-}B^{+}+i\dot{P}^{-}\Delta^{+}}\int_{\vec{B}}e^{i \vec{B}\cdot\vec{Q}}\int^{q^{+}(\tau_{f}^{+})=0}\mathcal{D}q^{+}\frac{\Theta( \langle p^{+}\rangle)}{2\langle p^{+}\rangle}e^{-i\frac{{\bf P}^{2}+m_{2}^{2} }{2\langle p^{+}\rangle}\Delta\tau^{+}} \tag{117}\] \[\times\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau_{1}^{+}U_{[z_{f}^{+},z_{1}^{+}]}\dot{z}_{1}^{+}\partial^{+}\tilde{A}_{z_{1}^{+}}^{-}U_{[z_{1}^{+}, z_{1}^{+}]}\frac{\delta}{i\delta q^{+}(\tau_{1}^{+})}\delta[q^{+}(\tau^{+})]\] \[=\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\int dq_{1}^{+}(-i\delta^{ \prime}(q_{1}^{+}))\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau_{1}^{+}\frac{ \Theta(\langle p^{+}\rangle)}{2\langle p^{+}\rangle}e^{-i\frac{{\bf P}^{2}+m_ {2}^{2}}{2\langle p^{+}\rangle}\Delta\tau^{+}}\dot{z}_{1}^{+}U_{[z_{f}^{+},z_ {1}^{+}]}\partial^{+}\tilde{A}_{z_{1}^{+}}^{-}U_{[z_{1}^{+},z_{1}^{+}]},\]
where in the last step, we solved the path integral over \(q^{+}\) by using the Dirac delta and dropped the non-eikonal phase, anticipating that the result after solving the \(q_{1}^{+}\) integral would be of order \(1/P^{+}\). This equation represents the leading-order correction to the longitudinal momentum transfer resulting from the interaction of the projectile with the medium. It describes the propagation of a particle that interacts with the medium by absorbing soft gluons, as illustrated in Figure 4 with black lines. The particle absorbs a single semi-hard gluon at position \(z_{1}^{+}\), leading to a longitudinal "kick" of order \(\partial^{+}A^{-}\), depicted by the red line in Figure 4. Consequently, the particle undergoes a change in its longitudinal momentum.
Since the result in Eq. (45) is proportional to the derivative of the delta in \(q_{1}^{+}\), it is enough to expand all the variables that depend on this quantity at leading order. Moreover, we also expand all the variables at order \(Q^{+}/P^{+}\) since we are neglecting the \(\mathcal{O}(1/(P^{+})^{2})\) correction. The quantities that depend on the longitudinal momentum transfer are the average longitudinal momentum, given by Eq. (3.13), and the LC proper time intervals, given by Eq. (3.45). Using Eq. (3.13), we can express the average momentum as
\[\langle p^{+}\rangle=P^{+}+\frac{Q^{+}}{2}\frac{(x^{+}-\tau_{f}^{+})+(y^{+}- \tau_{i}^{+})}{x^{+}-y^{+}}+\frac{q_{1}^{+}}{2}\frac{(\tau_{f}^{+}-\tau_{1}^{ +})+(\tau_{i}^{+}-\tau_{1}^{+})}{x^{+}-y^{+}}. \tag{46}\]
This result can be written entirely in terms of \(z_{1}^{+}\equiv z^{+}(\tau_{1}^{+})\) by using the relationships given in Eqs. (2.38) and (3.45). Doing that we obtain
\[\langle p^{+}\rangle=P^{+}\Bigg{[}1-\frac{Q^{+}}{2(x^{+}-y^{+})} \left(\frac{x^{+}-z_{f}^{+}}{P^{+}+Q^{+}/2}+\frac{y^{+}-z_{i}^{+}}{P^{+}-Q^{+ }/2}\right)\] \[-\frac{q_{1}^{+}}{2(x^{+}-y^{+})}\left(\frac{z_{f}^{+}-z_{1}^{+}} {P^{+}+q_{1}^{+}/2}+\frac{z_{i}^{+}-z_{1}^{+}}{P^{+}-q_{1}^{+}/2}\right)\Bigg{]} ^{-1}\] \[=P^{+}\left[1+\frac{Q^{+}}{P^{+}}\frac{B^{+}-\frac{x^{+}+y^{+}}{ 2}}{x^{+}-y^{+}}+\frac{q_{1}^{+}}{P^{+}}\frac{z_{1}^{+}-B^{+}}{x^{+}-y^{+}}+ \mathcal{O}\left(\frac{1}{(P^{+})^{2}}\right)\right]^{-1}. \tag{47}\]
Figure 4: Diagrammatic interpretation of the leading order correction to the \(v^{-}\) trajectory. The particle travels through the medium, defined in between \(z_{i}^{+}\) and \(z_{f}^{+}\), and scatters with multiple soft gluons, represented by black lines. At a longitudinal coordinate \(z_{1}^{+}\), the particle interacts with a semi-hard gluon that gives a longitudinal ”kick” of order \(\partial^{+}A^{-}\) and changes its longitudinal momentum. The red line represents the exchanged gluon responsible for the momentum transfer.
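The expansion of the average momentum can also be checked numerically. The sketch below (with illustrative kinematics, and taking \(B^{+}=(z_{i}^{+}+z_{f}^{+})/2\), as implied by \(\vec{z}_{0}=\vec{B}-\vec{\Delta}/2\) and \(\vec{z}_{N_{\Delta}}=\vec{B}+\vec{\Delta}/2\)) compares the exact expression for \(\langle p^{+}\rangle\) with its expanded form and confirms that their relative difference scales as \(1/(P^{+})^{2}\).

```python
import numpy as np

# Exact vs. expanded average longitudinal momentum; all kinematic values are illustrative.
xp, yp = 10.0, -10.0                 # x^+ and y^+
zi, zf, z1 = -3.0, 4.0, 1.5          # medium boundaries and interaction point
Bp = 0.5 * (zi + zf)                 # B^+ = (z_i^+ + z_f^+)/2
Qp, q1 = 0.8, 0.6                    # longitudinal momentum transfers

def p_avg_exact(P):
    bracket = (1.0
               - Qp / (2 * (xp - yp)) * ((xp - zf) / (P + Qp / 2) + (yp - zi) / (P - Qp / 2))
               - q1 / (2 * (xp - yp)) * ((zf - z1) / (P + q1 / 2) + (zi - z1) / (P - q1 / 2)))
    return P / bracket

def p_avg_expanded(P):
    bracket = (1.0 + Qp / P * (Bp - (xp + yp) / 2) / (xp - yp)
                   + q1 / P * (z1 - Bp) / (xp - yp))
    return P / bracket

for P in (10.0, 100.0, 1000.0):
    rel = abs(p_avg_exact(P) - p_avg_expanded(P)) / p_avg_exact(P)
    print(P, rel, rel * P**2)        # rel ~ 1/P^2, so rel * P^2 approaches a constant
```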
On the other hand, the term in the exponential of Eq. (101) can also be written in terms of \(z_{1}^{+}\), using Eq. (102), as
\[\frac{\Delta\tau^{+}}{\langle p^{+}\rangle}=\frac{z_{f}^{+}-z_{1}^{+}}{P^{+}+q_{ 1}^{+}/2}+\frac{z_{1}^{+}-z_{i}^{+}}{P^{+}-q_{1}^{+}/2}=\frac{\Delta^{+}}{P^{+} }+\mathcal{O}\left(\frac{1}{(P^{+})^{2}}\right). \tag{103}\]
Using Eq. (100), the step function can be expanded as13
Footnote 13: At this point we only need to expand the \(\Theta\) function in powers of \(q_{1}^{+}\), since Eq. (101) is proportional to \(\delta^{\prime}(q_{1}^{+})\). However, we are anticipating that the final result Eq. (100), after expanding around \(B^{-}=0\), is going to be proportional to derivatives of \(\delta(Q^{+})\) and we are also expanding around \(Q^{+}=0\).
\[\Theta(\langle p^{+}\rangle)=\Theta(P^{+})-\left[Q^{+}\frac{B^{+}-\frac{x^{+} +y^{+}}{2}}{x^{+}-y^{+}}+q_{1}^{+}\frac{z_{1}^{+}-B^{+}}{x^{+}-y^{+}}+\cdots \right]\delta(P^{+})+\mathcal{O}(\delta^{\prime}(P^{+})). \tag{104}\]
However, as we stated in Section 2.2, the scalar propagator studied in the present analysis neglects the contribution where the particle does not propagate in the longitudinal direction, i.e., \(x^{+}=y^{+}\). This implies that the modes with \(P^{+}=0\) do not contribute to the present analysis and are discarded. Thus, we can write \(\Theta(\langle p^{+}\rangle)=\Theta(P^{+})\).
Plugging Eqs. (100) to (104) into Eq. (101) and solving the \(q_{1}^{+}\) integral, we obtain
\[\Delta_{\rm m}^{(1),-}(\vec{P},\vec{Q})=\frac{\Theta(P^{+})}{2P^{ +}}e^{-i\frac{\mathbf{P}^{2}+m_{2}^{2}}{2P^{+}}\Delta^{+}}\frac{i}{P^{+}(x^{+ }-y^{+})}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\] \[\qquad\times\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d\tau_{1}^{+}\dot{z }_{1}^{+}(z_{1}^{+}-B^{+})U_{[z_{f}^{+},z_{1}^{+}]}\partial^{+}\tilde{A}_{z_{1 }^{+}}^{-}U_{[z_{1}^{+},z_{i}^{+}]}+\mathcal{O}\left(\frac{1}{(P^{+})^{2}}\right)\] \[=\frac{\Theta(P^{+})}{2P^{+}}\frac{i}{P^{+}(x^{+}-y^{+})}\int_{ \vec{B}}e^{i\vec{B}\cdot\vec{Q}}\int_{z_{i}^{+}}^{z_{f}^{+}}dz_{1}^{+}(z_{1}^{ +}-B^{+})U_{[z_{f}^{+},z_{1}^{+}]}\partial^{+}\tilde{A}_{z_{1}^{+}}^{-}U_{[z_{ 1}^{+},z_{i}^{+}]}\] \[\qquad\quad+\mathcal{O}\left(\frac{1}{(P^{+})^{2}}\right), \tag{105}\]
where in the last step we have performed the change of variables \(\tau_{1}^{+}\to z^{+}(\tau_{1}^{+})\) which is trivial since the integrand is already multiplied by the Jacobian and dropped the phase which is subleading. This equation can be further simplified, and written in terms of derivatives of Wilson line, by using the following identities:
\[\partial^{\mu}U_{[z_{f}^{+},z_{i}^{+}]} =\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+}]}\partial ^{\mu}\tilde{A}_{z^{+}}^{-}U_{[z^{+},z_{i}^{+}]}, \tag{106}\] \[\int_{z_{f}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+}]} \stackrel{{\longleftrightarrow}}{{\partial}}^{\mu}U_{[z^{+},z_{ i}^{+}]} =-2\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}(z^{+}-B^{+})U_{[z_{f}^{+},z^{+}]} \partial^{\mu}\tilde{A}_{z^{+}}^{-}U_{[z^{+},z_{i}^{+}]}, \tag{107}\]
where
\[\stackrel{{\longleftrightarrow}}{{\partial}}^{\mu}=\overrightarrow{ \partial}^{\mu}-\overleftarrow{\partial}^{\mu}. \tag{108}\]
Thus, using Eq. (107), we can write the following correction to the in-medium propagator due to longitudinal scatterings of the projectile in the medium:
\[\Delta_{\rm m}^{(1),-}(\vec{P},\vec{Q})=\frac{\Theta(P^{+})}{2P^{+}}\frac{-i}{ 2P^{+}(x^{+}-y^{+})}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\int_{z_{i}^{+}}^{z_ {f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+}]}\stackrel{{\longleftrightarrow} }{{\partial}}^{+}U_{[z^{+},z_{i}^{+}]}. \tag{109}\]
This term, although apparently of \(\mathcal{O}(1/P^{+})\), is in fact a next-to-next-to-eikonal correction, i.e. \(\sim\mathcal{O}(1/(P^{+})^{2})\). The reason for this dependence not being explicit is that we are working in a mixed representation, where the propagator is expressed in terms of momentum, \(\vec{p}\), and LC times, \(x^{+}\). However, under a large boost in the right direction, the LC time is dilated as \(z^{+}\to e^{\omega}z^{+}\), so that the interval \((x^{+}-y^{+})\) is enhanced. We can see that, in fact, \(x^{+}-y^{+}\) scales as \(P^{+}\) heuristically, by noting that \((x^{+}-y^{+})\) is the conjugate of \(P^{-}\), so that \((x^{+}-y^{+})\sim 1/P^{-}\sim 2P^{+}/P_{\perp}^{2}\).
Therefore, there is no next-to-eikonal correction coming from the expansion around \(v^{-}=B^{-}\), so that
\[\Delta_{\rm m}^{(1),-}(\vec{P},\vec{Q})=\mathcal{O}\left(\frac{1}{(P^{+})^{2}} \right). \tag{102}\]
### Correction to the \(\mathbf{z}_{\perp}\) classical trajectory
We will now examine the corrections arising from the \(\mathbf{z}\) trajectory. In this scenario, we only need to consider the corrections associated with the transverse coordinates. Hence, there is no longitudinal momentum transfer during the interaction, and we can set \(\tau^{+}=z^{+}\). Utilizing Eq. (100), we need to compute
\[\Delta_{\rm m}^{(1),\perp}(\vec{P},\vec{Q}) =\delta(Q^{+})\frac{\Theta(P^{+})}{2P^{+}}e^{i\hat{Q}^{-}B^{+}+i \hat{P}^{-}\Delta^{+}}\int_{\mathbf{B}}e^{-i\mathbf{B}\cdot\mathbf{Q}}\sum_{m= 1}^{2}\Bigg{(}\prod_{n=m}^{1}\int_{z_{i}^{+}}^{z_{n+1}^{+}}dz_{n}^{+}\] \[\times U_{[z_{n+1}^{+},z_{n}^{+}]}\mathcal{I}_{\tau_{n}^{+}} \left[0,\frac{\delta}{\delta\mathbf{J}},i\partial_{\mathbf{P}}\right]\Bigg{)} U_{[z_{1}^{+},z_{i}^{+}]}e^{-i\frac{\mathbf{P}^{2}+m_{2}^{2}}{2P^{+}}\Delta^{+}} \frac{Z[\mathbf{J}]}{Z[0]}\Bigg{|}_{\mathbf{J}=0}, \tag{103}\]
where in the term \(m=1\) we have to expand the insertion to second order and in the term \(m=2\) to first order.
Thus, inserting the transverse expansion of the insertion, given in Eq. (101), in Eq. (103) we can write
\[\Delta_{\rm m}^{(1),\perp}(\vec{P},\vec{Q})=\frac{\Theta(P^{+})}{ 2P^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\Bigg{\{}\int_{z_{i}^{+}}^{z_{f} ^{+}}dz_{1}^{+}U_{[z_{f}^{+},z_{1}^{+}]}\Bigg{[}\frac{i\partial_{\mathbf{P}i} }{\Delta^{+}}\left((z_{1}^{+}-B^{+})\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^ {+}}^{-}-\tilde{\mathbf{A}}_{z_{1}^{+}}^{i}\right)\] \[+\frac{1}{2}\left(\frac{i\partial_{\mathbf{P}i}i\partial_{ \mathbf{P}j}}{(\Delta^{+})^{2}}(z_{1}^{+}-B^{+})^{2}+G^{ij}(z_{1}^{+},z_{1}^{+ })\right)\partial_{\mathbf{B}^{i}}\partial_{\mathbf{B}^{j}}\tilde{A}_{z_{1}^{ +}}^{-}\] \[-\left(\frac{i\partial_{\mathbf{P}i}i\partial_{\mathbf{P}j}}{( \Delta^{+})^{2}}(z_{1}^{+}-B^{+})+G^{ij}(z_{1}^{+},z_{1}^{+})^{\bullet}\right) \partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{j}\Bigg{]}U_{[z_{1}^{+},z_{i}^ {+}]}+\int_{z_{i}^{+}}^{z_{f}^{+}}dz_{2}^{+}\int_{z_{i}^{+}}^{z_{2}^{+}}dz_{1}^ {+}U_{[z_{f}^{+},z_{2}^{+}]}\] \[\times\Bigg{[}\left(\frac{i\partial_{\mathbf{P}i}i\partial_{ \mathbf{P}j}}{(\Delta^{+})^{2}}(z_{2}^{+}-B^{+})(z_{1}^{+}-B^{+})+G^{ij}(z_{2}^ {+},z_{1}^{+})\right)\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{2}^{+}}^{-}U_{[z_ {2}^{+},z_{1}^{+}]}\partial_{\mathbf{B}^{j}}\tilde{A}_{z_{1}^{+}}^{-} \tag{104}\] \[+\left(\frac{i\partial_{\mathbf{P}i}i\partial_{\mathbf{P}j}}{( \Delta^{+})^{2}}+{}^{\bullet}G(z_{2}^{+},z_{1}^{+})^{\bullet}\right)\tilde{ \mathbf{A}}_{z_{2}^{+}}^{i}U_{[z_{2}^{+},z_{1}^{+}]}\tilde{\mathbf{A}}_{z_{1}^ {+}}^{j}\] \[-\left(\frac{i\partial_{\mathbf{P}i}i\partial_{\mathbf{P}j}}{( \Delta^{+})^{2}}(z_{2}^{+}-B^{+})+G^{ij}(z_{2}^{+},z_{1}^{+})^{\bullet}\right) \partial_{\mathbf{B}^{i}}\tilde{A}_{z_{2}^{+}}^{-}U_{[z_{2}^{+},z_{1}^{+}]} \tilde{\mathbf{A}}_{z_{1}^{+}}^{j}\] \[-\left(\frac{i\partial_{\mathbf{P}i}i\partial_{\mathbf{P}j}}{( \Delta^{+})^{2}}(z_{1}^{+}-B^{+})+{}^{\bullet}G(z_{2}^{+},z_{1}^{+})\right) \tilde{\mathbf{A}}_{z_{2}^{+}}^{i}U_{[z_{2}^{+},z_{1}^{+}]}\partial_{ \mathbf{B}^{j}}\tilde{A}_{z_{1}^{+}}^{-}\Bigg{]}U_{[z_{1}^{+},z_{i}^{+}]} \Bigg{)}e^{-i\frac{\mathbf{P}^{2}+m_{2}^{2}}{2P^{+}}\Delta^{+}}.\]
This equation, although lengthy, is just composed of the 2-point function and its derivatives, defined in Eqs. (3.37) to (3.40), as well as single and double derivatives with respect to \(\mathbf{P}^{i}\) acting on the Gaussian. It can be simplified by using the expression of the 2-point function and performing the derivatives:
\[\Delta_{\text{m}}^{(1),\perp}(\vec{P},\vec{Q})=\frac{\Theta(P^{+}) }{2P^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\Bigg{\{}\int_{z_{i}^{+}}^{z_{f} ^{+}}dz_{1}^{+}U_{[z_{f}^{+},z_{1}^{+}]}\Bigg{[}\frac{\mathbf{P}^{i}}{P^{+}} \left((z_{1}^{+}-B^{+})\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{-}- \tilde{\mathbf{A}}_{z_{1}^{+}}^{i}\right)\] \[+\frac{i\Delta^{+}}{8P^{+}}\partial_{\mathbf{B}^{i}}\partial_{ \mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{-}\Bigg{]}U_{[z_{1}^{+},z_{1}^{+}]}\] \[+\int_{z_{i}^{+}}^{z_{f}^{+}}dz_{2}^{+}\int_{z_{i}^{+}}^{z_{2}^{+ }}dz_{1}^{+}U_{[z_{f}^{+},z_{2}^{+}]}\Bigg{[}\frac{i}{4P^{+}}\left(\Delta^{+} -2(z_{2}^{+}-z_{1}^{+})\right)\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{2}^{+}}^ {-}U_{[z_{2}^{+},z_{1}^{+}]}\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{-}\] \[-\frac{i}{2P^{+}}\left(\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{2} ^{+}}^{-}U_{[z_{2}^{+},z_{1}^{+}]}\tilde{\mathbf{A}}_{z_{1}^{+}}^{i}-\tilde{ \mathbf{A}}_{z_{2}^{+}}^{i}U_{[z_{2}^{+},z_{1}^{+}]}\partial_{\mathbf{B}^{i}} \tilde{A}_{z_{1}^{+}}^{-}\right)\] \[+\frac{i}{P^{+}}\delta(z_{2}^{+}-z_{1}^{+})\tilde{\mathbf{A}}_{z _{2}^{+}}^{i}U_{[z_{2}^{+},z_{1}^{+}]}\tilde{\mathbf{A}}_{z_{1}^{+}}^{j} \Bigg{]}U_{[z_{1}^{+},z_{1}^{+}]}\Bigg{\}},\] (D.17)
where we have dropped the subleading phases at next-to-eikonal order. This equation can be further simplified by integrating by parts the second term in the sum that is proportional to \(\partial_{\mathbf{B}}^{2}A^{-}\). By doing so, we obtain:
\[\Delta_{\text{m}}^{(1),\perp}(\vec{P},\vec{Q})=\frac{\Theta(P^{+}) }{2P^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\Bigg{\{}\int_{z_{i}^{+}}^{z_{f }^{+}}dz_{1}^{+}U_{[z_{f}^{+},z_{1}^{+}]}\Bigg{[}\frac{\mathbf{P}^{i}}{P^{+}} \left((z_{1}^{+}-B^{+})\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{-}- \tilde{\mathbf{A}}_{z_{1}^{+}}^{i}\right)\] \[-\frac{\Delta^{+}}{8P^{+}}\mathbf{Q}^{i}\partial_{\mathbf{B}^{i}} \tilde{A}_{z_{1}^{+}}^{-}\Bigg{]}U_{[z_{1}^{+},z_{1}^{+}]}\] \[+\int_{z_{i}^{+}}^{z_{f}^{+}}dz_{2}^{+}\int_{z_{i}^{+}}^{z_{2}^{ +}}dz_{1}^{+}U_{[z_{f}^{+},z_{2}^{+}]}\Bigg{[}\frac{i}{2P^{+}}\left(z_{1}^{+} -z_{2}^{+}\right)\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{2}^{+}}^{-}U_{[z_{2}^ {+},z_{1}^{+}]}\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{-}\] \[-\frac{i}{2P^{+}}\left(\partial_{\mathbf{B}^{i}}\tilde{A}_{z_{2} ^{+}}^{-}U_{[z_{2}^{+},z_{1}^{+}]}\tilde{\mathbf{A}}_{z_{1}^{+}}^{i}-\tilde{ \mathbf{A}}_{z_{2}^{+}}^{i}U_{[z_{2}^{+},z_{1}^{+}]}\partial_{\mathbf{B}^{i}} \tilde{A}_{z_{1}^{+}}^{-}\right)\] \[+\frac{i}{P^{+}}\delta(z_{2}^{+}-z_{1}^{+})\tilde{\mathbf{A}}_{z _{2}^{+}}^{i}U_{[z_{2}^{+},z_{1}^{+}]}\tilde{\mathbf{A}}_{z_{2}^{+}}^{j} \Bigg{]}U_{[z_{1}^{+},z_{1}^{+}]}\Bigg{\}}.\] (D.18)
Now, analogous to what we did for the \(v^{-}\) correction, the next step is to express the derivatives of the gauge field in terms of the Wilson line. To do that, we use the identities given in Eqs. (D.10) and (D.11) as well as the following relation:
\[\int_{z_{i}^{+}}^{z_{f}^{+}}dz_{2}^{+}\int_{z_{i}^{+}}^{z_{2}^{+} }dz_{1}^{+}U_{[z_{f}^{+},z_{2}^{+}]}\left(z_{2}^{+}-z_{1}^{+}\right)\partial_{ \mathbf{B}^{i}}\tilde{A}_{z_{2}^{+}}^{-}U_{[z_{2}^{+},z_{1}^{+}]}(\vec{B}) \partial_{\mathbf{B}^{i}}\tilde{A}_{z_{1}^{+}}^{-}U_{[z_{1}^{+},z_{i}^{+}]}\] \[=\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+}]} \overleftarrow{\partial}_{\mathbf{B}^{i}}\overrightarrow{\partial}_{ \mathbf{B}^{i}}U_{[z^{+},z_{i}^{+}]}.\] (D.19)
So that Eq. (108) can be written as
\[\Delta^{(1),\perp}_{\rm m}(\vec{P},\vec{Q})=\frac{\Theta(P^{+})}{2P^ {+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\Bigg{\{}-\frac{\Delta^{+}}{8P^{+}} \mathbf{Q}^{i}\overrightarrow{\partial}_{\mathbf{B}^{i}}U_{[z^{+}_{f},z^{+}_{ i}]}\] \[+\int_{z^{+}_{i}}^{z^{+}_{f}}dz^{+}U_{[z^{+}_{f},z^{+}]}\Bigg{[} \frac{\mathbf{P}^{i}}{P^{+}}\left(-\frac{\overleftrightarrow{\partial}_{ \mathbf{B}^{i}}}{2}-\tilde{\mathbf{A}}^{i}_{z^{+}_{i}}\right)\] \[-\frac{i}{2P^{+}}\left(\overleftrightarrow{\partial}_{\mathbf{B }^{i}}\overrightarrow{\partial}_{\mathbf{B}^{i}}-\tilde{\mathbf{A}}^{i}_{z^{ +}}\overrightarrow{\partial}_{\mathbf{B}^{i}}+\overleftrightarrow{\partial}_ {\mathbf{B}^{i}}\tilde{\mathbf{A}}^{i}_{z^{+}}-\tilde{\mathbf{A}}^{i}_{z^{+ }}\tilde{\mathbf{A}}^{i}_{z^{+}}\right)\Bigg{]}U_{[z^{+},z^{+}_{i}]}\Bigg{\}}, \tag{109}\]
where the transverse derivatives only act on the terms inside the square brackets. Finally, analogous to [17], we can write the final result in an explicitly gauge-covariant way by expressing the transverse components of the field in terms of the covariant derivatives. This can be done by noting that the transverse component of the gauge field can be written as:
\[\tilde{\mathbf{A}}^{i}=\overrightarrow{D}_{\mathbf{B}^{i}}-\overrightarrow{ \partial}_{\mathbf{B}^{i}}=\overleftrightarrow{\partial}_{\mathbf{B}^{i}}- \overleftrightarrow{D}_{\mathbf{B}^{i}}=\frac{\overleftrightarrow{D}_{ \mathbf{B}^{i}}-\overleftrightarrow{\partial}_{\mathbf{B}^{i}}}{2}. \tag{110}\]
Inserting Eq. (110) into Eq. (109) and integrating by parts the first term, which is proportional to \(\mathbf{Q}^{i}\), we obtain the result for the corrections of the scalar propagator due to fluctuations around the classical transverse trajectory expressed in a gauge covariant way:
\[\Delta^{(1),\perp}_{\rm m}(\vec{P},\vec{Q})=\frac{\Theta(P^{+})} {2P^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\Bigg{\{}-i\frac{\mathbf{Q}^{2} \Delta^{+}}{8P^{+}}U_{[z^{+}_{f},z^{+}_{i}]}\] \[-\int_{z^{+}_{i}}^{z^{+}_{f}}dz^{+}U_{[z^{+}_{f},z^{+}]}\Bigg{[} \frac{\mathbf{P}^{i}}{P^{+}}\overleftrightarrow{D}_{\mathbf{B}^{i}}+\frac{i} {2P^{+}}\overleftrightarrow{D}_{\mathbf{B}^{i}}\overrightarrow{D}_{\mathbf{B} ^{i}}\Bigg{]}U_{[z^{+},z^{+}_{i}]}\Bigg{\}}. \tag{111}\]
### Summing all the corrections
Finally, we sum up all the \(\mathcal{O}(1/P^{+})\) corrections of the in-medium propagator given in Eqs. (109), (110) and (111). The result gives
\[\Delta_{\rm m}(\vec{P},\vec{Q})=\frac{\Theta(P^{+})}{2P^{+}}\int_ {\vec{B}}e^{i\vec{B}\cdot\vec{Q}}\Bigg{\{}\left(1+i\frac{\mathbf{P}\cdot \mathbf{Q}}{P^{+}}B^{+}\right)U_{[z^{+}_{f},z^{+}_{i}]}(\vec{B})\] \[-\int_{z^{+}_{i}}^{z^{+}_{f}}dz^{+}U_{[z^{+}_{f},z^{+}]}(\vec{B}) \Bigg{[}\frac{\mathbf{P}^{i}}{P^{+}}\overleftrightarrow{D}_{\mathbf{B}^{i}}+ \frac{i}{2P^{+}}\overleftrightarrow{D}_{\mathbf{B}^{i}}\overrightarrow{D}_{ \mathbf{B}^{i}}\Bigg{]}U_{[z^{+},z^{+}_{i}]}(\vec{B})\Bigg{\}}+\mathcal{O} \left(\frac{1}{(P^{+})^{2}}\right). \tag{112}\]
The last step is to expand the eikonal Wilson lines around \(B^{-}=0\):
\[U_{[z^{+}_{f},z^{+}_{i}]}(\vec{B})=\exp\left\{B^{-}\partial^{+}\right\}U_{[z^ {+}_{f},z^{+}_{i}]}(0,\mathbf{B}). \tag{113}\]
It is clear from Eq. (112) that the zeroth order term in \(B^{-}\) will give a Dirac delta \(\delta(Q^{+})\) that fixes the longitudinal momentum transfer to zero and higher order corrections will give higher order derivatives of the Dirac delta. The part of the retarded propagator, given
in Eq. (3.6), that depends on the longitudinal momentum transfer, \(Q^{+}\), is the LC energy phases:
\[\hat{p}_{f}^{-}x^{+}=\frac{{\bf p}_{f}^{2}}{2P^{+}}\left(1-\frac{Q^{+}}{2P^{+}}+ \cdots\right)x^{+},\qquad\hat{p}_{i}^{-}y^{+}=\frac{{\bf p}_{i}^{2}}{2P^{+}} \left(1+\frac{Q^{+}}{2P^{+}}+\cdots\right)y^{+}. \tag{111}\]
Thus, the effect of the \(n^{\rm th}\) derivative of the Dirac delta of \(Q^{+}\), \(\delta^{{}^{\prime}n}(Q^{+})\), is to drop a factor \({\cal O}(1/(P^{+})^{n+1})x^{+}\sim{\cal O}(1/(P^{+})^{n})\) coming from the phase, after integration over \(Q^{+}\). On the other hand, once we write the propagator in coordinate space, the integration over \(Q^{+}\) comes through a Fourier transform that introduces the phase \(Q^{+}(x^{-}+y^{-})/2\) so that, after integration over \(Q^{+}\), the \(n^{\rm th}\) derivative of the delta will also introduce a factor \({\cal O}((x^{-}+y^{-})^{n}/2^{n})\). This factor is not written in terms of powers of \(1/P^{+}\) because, in this case, we are expressing the propagator in coordinate space. It is considered a non-eikonal correction because under a boost in the right direction, the \(z^{-}\) component of the particle trajectory transforms as \(z^{-}\to e^{-\omega}z^{-}\), making it subleading in the eikonal approximation.
So far, because of the aforementioned arguments, in order to get the next-to-eikonal corrections to the scalar propagator, it is enough to expand Eq. (110) up to order \(B^{-}\), while neglecting the terms of order \(B^{-}/P^{+}\) which will lead to a next-to-next-to-eikonal correction. By doing this, we obtain:
\[\Delta_{\rm m}(\vec{P},\vec{Q}) =\frac{\Theta(P^{+})}{2P^{+}}\int_{\vec{B}}e^{i\vec{B}\cdot\vec{Q }}\Bigg{\{}\left(1+i\frac{{\bf P}\cdot{\bf Q}}{P^{+}}B^{+}\right)U_{[z_{f}^{+ },z_{i}^{+}]}(0,{\bf B})+B^{-}\partial^{+}U_{[z_{f}^{+},z_{i}^{+}]}(0,{\bf B})\] \[\qquad\qquad-\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+ }]}(0,{\bf B})\Bigg{[}\frac{{\bf p}^{i}}{P^{+}}\overleftrightarrow{D}_{{\bf B }^{i}}+\frac{i}{2P^{+}}\overleftrightarrow{D}_{{\bf B}^{i}}\overrightarrow{D}_ {{\bf B}^{i}}\Bigg{]}U_{[z^{+},z_{i}^{+}]}(0,{\bf B})\Bigg{\}}\] \[=\delta(Q^{+})\frac{\Theta(P^{+})}{2P^{+}}\int_{\bf B}e^{-i{\bf B }\cdot{\bf Q}}\Bigg{\{}\left(1+i\frac{{\bf P}\cdot{\bf Q}}{P^{+}}B^{+}\right) U_{[z_{f}^{+},z_{i}^{+}]}(0,{\bf B})\] \[\qquad\qquad\qquad-\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+ },z^{+}]}(0,{\bf B})\Bigg{[}\frac{{\bf p}^{i}}{P^{+}}\overleftrightarrow{D} _{{\bf B}^{i}}+\frac{i}{2P^{+}}\overleftrightarrow{D}_{{\bf B}^{i}} \overrightarrow{D}_{{\bf B}^{i}}\Bigg{]}U_{[z^{+},z_{i}^{+}]}(0,{\bf B}) \Bigg{\}}\] \[\qquad\qquad-i\delta^{\prime}(Q^{+})\frac{\Theta(P^{+})}{2P^{+}} \int_{\bf B}e^{-i{\bf B}\cdot{\bf Q}}\partial^{+}U_{[z_{f}^{+},z_{i}^{+}]}(0,{ \bf B}). \tag{112}\]
Introducing this result into Eq. (11) we obtain the retarded scalar propagator in momentum space at next-to-eikonal order:
\[\tilde{\Delta}_{R}(x^{+},\vec{p}_{f};y^{+},\vec{p}_{i})=\Theta(x^ {+}-y^{+})e^{-i\hat{p}_{f}^{-}x^{+}+i\hat{p}_{i}^{-}y^{+}}\delta(p_{f}^{+}-p_{i }^{+})\frac{\Theta(p_{i}^{+})}{2p_{i}^{+}}\] \[\times\int_{\bf B}e^{-i{\bf B}\cdot({\bf p}_{\bf f}-{\bf p}_{i})} \Bigg{\{}\left(1+i\frac{{\bf p}_{f}^{2}-{\bf p}_{i}^{2}}{2p_{i}^{+}}B^{+} \right)U_{[z_{f}^{+},z_{i}^{+}]}(0,{\bf B}) \tag{113}\] \[-\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+}]}(0,{\bf B} )\Bigg{[}\frac{{\bf p}_{f}^{i}+{\bf p}_{i}^{i}}{2p_{i}^{+}} \overleftrightarrow{D}_{{\bf B}^{i}}+\frac{i}{2p_{i}^{+}}\overleftrightarrow{D }_{{\bf B}^{i}}\overrightarrow{D}_{{\bf B}^{i}}\Bigg{]}U_{[z^{+},z_{i}^{+}]}(0,{ \bf B})\Bigg{\}}\] \[-i\delta^{\prime}(p_{f}^{+}-p_{i}^{+})\Theta(x^{+}-y^{+})e^{-i \hat{p}_{f}^{-}x^{+}+i\hat{p}_{i}^{-}y^{+}}\frac{\Theta(p_{i}^{+})\Theta(p_{f}^ {+})}{p_{i}^{+}+p_{f}^{+}}\int_{\bf B}e^{-i{\bf B}\cdot({\bf p}_{\bf f}-{\bf p }_{\bf l})}\partial^{+}U_{[z_{f}^{+},z_{i}^{+}]}(0,{\bf B}),\]
Performing the Fourier transform of Eq. (114), we obtain the next-to-eikonal correction to the scalar propagator in coordinate space:
\[\Delta_{R}(x,y) =\Theta(x^{+}-y^{+})\int_{P^{+}}\frac{\Theta(P^{+})}{2P^{+}}e^{-iP^ {+}(x^{-}-y^{-})}\int_{\mathbf{p}_{i},\mathbf{p}_{f}}e^{i\mathbf{p}_{f}\cdot \mathbf{x}-i\mathbf{p}_{i}\cdot\mathbf{y}-i\frac{\mathbf{p}_{f}^{2}+m^{2}}{2P^ {+}}x^{+}+i\frac{\mathbf{p}_{f}^{2}+m^{2}}{2P^{+}}y^{+}}\] \[\times\int_{\mathbf{B}}e^{-i\mathbf{B}\cdot(\mathbf{p}_{f}- \mathbf{p}_{i})}\Bigg{\{}\left(1+i\frac{\mathbf{p}_{f}^{2}-\mathbf{p}_{i}^{2}}{ 2P^{+}}B^{+}\right)U_{[z_{f}^{+},z_{i}^{+}]}(0,\mathbf{B})\] \[-\int_{z_{i}^{+}}^{z_{f}^{+}}dz^{+}U_{[z_{f}^{+},z^{+}]}(0, \mathbf{B})\Bigg{[}\frac{\mathbf{p}_{f}^{i}+\mathbf{p}_{i}^{i}}{2P^{+}} \overleftrightarrow{\mathbf{D}_{\mathbf{B}^{i}}}+\frac{i}{2P^{+}} \overleftrightarrow{\mathbf{D}_{\mathbf{B}^{i}}}\overrightarrow{\mathbf{D}_{ \mathbf{B}^{i}}}\Bigg{]}U_{[z^{+},z_{i}^{+}]}(0,\mathbf{B})\] \[+\Bigg{[}\frac{x^{-}+y^{-}}{2}-\frac{\mathbf{p}_{f}^{2}+m^{2}}{ (2P^{+})^{2}}x^{+}-\frac{\mathbf{p}_{i}^{2}+m^{2}}{(2P^{+})^{2}}y^{+}\Bigg{]} \,\partial^{+}U_{[z_{f}^{+},z_{i}^{+}]}(0,\mathbf{B})\Bigg{\}}. \tag{115}\]
We note that although Eq. (115) provides the correction to the scalar propagator due to fluctuations around the classical trajectory, it is not a new result, as it has already been computed recently in [16]. However, the method used in [16] was completely different, based on iterating the differential equations of the propagator and performing power counting in the boost parameter \(e^{\omega}\) by boosting the background field instead of the particle. Therefore, Eq. (115) serves as a cross-check for the eikonal expansion approach presented in this manuscript.
## Appendix E The amputated retarded scalar propagator
In this section, we derive the amputated retarded propagator using Eq. (39). Equation (39) is particularly useful because it allows us to amputate the propagator legs straightforwardly using an LSZ-like reduction formula. In order to see that, we write Eq. (39) in momentum space:
\[\tilde{\Delta}_{R}(p_{f},p_{i})=\Theta(p_{i}^{+})\Theta(p_{f}^{+ })\int_{x^{+},y^{+}}e^{i(p_{f}^{-}-\hat{p}_{f}^{-})x^{+}-i(p_{i}^{-}-\hat{p}_{ i}^{-})y^{+}}\Theta(x^{+}-y^{+})\int_{\vec{z}_{i},\vec{z}_{f}}e^{i\hat{p}_{f} \cdot z_{f}-i\hat{p}_{i}\cdot z_{i}}\] \[\times\int\mathcal{D}p_{\Delta}^{+}\frac{\Theta(\langle p^{+} \rangle)}{2\langle p^{+}\rangle}\int_{\vec{z}(\tau_{i}^{+})=\vec{z}_{i}}^{\vec {z}(\tau_{f}^{+})=\vec{z}_{f}}\mathcal{D}^{3}\vec{z}\ e^{i\int_{\tau_{i}^{+}}^{ \vec{z}^{\prime}}d\tau^{+}\left[-\frac{m_{f}^{2}}{2\langle p^{+}\rangle}+\frac {\langle p^{+}\rangle}{2}\hat{\mathbf{z}}^{2}-p^{+}\hat{z}^{-}\right]}\mathcal{ U}_{[\tau_{f}^{+},\tau_{i}^{+}]}[z^{+},\vec{z}]. \tag{116}\]
Although this propagator is connected, in the sense that the vacuum diagrams have already been divided out, it is not amputated. In order to put its legs on-shell, we use an LSZ-like approach: we multiply the leg that we want to amputate by the inverse of the free propagator \((p^{2}-m^{2})\) and we perform the on-shell limit \(p^{2}\to m^{2}\). Note that, in general, we should also divide by the factor \(\sqrt{Z}\) arising from the renormalization of the scalar field. However, since the background field is classical and we are not taking into account self-interactions of the particle, this factor is just 1.
Let us work out the amputation of the final leg. In this case, the LSZ reduction formula is given, in momentum space, by
\[\tilde{\Delta}_{R}^{(c)}(p_{f},p_{i})=\lim_{p_{f}^{2}\to m^{2}}(p_{f}^{2}-m^{2} )\tilde{\Delta}_{R}(p_{f},p_{i})=2p_{f}^{+}\lim_{p_{f}^{-}\to\hat{p}_{f}^{-}}( p_{f}^{-}-\hat{p}_{f}^{-})\tilde{\Delta}_{R}(p_{f},p_{i}), \tag{117}\]
where we denote the amputated propagator as \(\tilde{\Delta}^{(c)}\). Since Eq. (101) is of the form
\[\tilde{\Delta}_{R}(p_{f},p_{i})=\int_{x^{+}}e^{i(p_{f}^{-}-\tilde{p}_{f}^{-})x^{ +}}f(x^{+}), \tag{103}\]
we have that
\[\tilde{\Delta}^{(c)}_{R}(p_{f},p_{i}) =2p_{f}^{+}\lim_{p_{f}^{-}\to\tilde{p}_{f}^{-}}(p_{f}^{-}-\hat{p} _{f}^{-})\int_{x^{+}}e^{i(p_{f}^{-}-\tilde{p}_{f}^{-})x^{+}}f(x^{+})\] \[=2p_{f}^{+}\lim_{p_{f}^{-}\to\tilde{p}_{f}^{-}}\int_{x^{+}}\frac{ \partial e^{i(p_{f}^{-}-\hat{p}_{f}^{-})x^{+}}}{i\partial x^{+}}f(x^{+})\] \[=2ip_{f}^{+}\lim_{p_{f}^{-}\to\tilde{p}_{f}^{-}}\int_{x^{+}}e^{i( p_{f}^{-}-\tilde{p}_{f}^{-})x^{+}}\frac{\partial f(x^{+})}{\partial x^{+}}=2ip_{f}^ {+}\lim_{x^{+}\to\infty}f(x^{+}), \tag{104}\]
where when we integrated by parts, in the third step, we have used the fact that the integral vanishes at \(x^{+}\to\infty\) due to the \(i\epsilon\) prescription, hidden in the definition of \(\hat{p}_{f}^{-}\). In the fourth step, we have performed the on-shell \(p_{f}^{-}\to\hat{p}_{f}^{-}\) limit and used the fact that \(f(x^{+}=-\infty)=0\) due to the step function.
We can do an analogous analysis for the initial leg, \(y^{+}\), of the propagator. Thus, the amputated propagator, i.e., the scattering amplitude for a scalar particle interacting with the medium, is given by
\[\tilde{\Delta}^{(c)}_{R}(p_{f},p_{i}) =4p_{i}^{+}p_{f}^{+}\lim_{x^{+}\to\infty}\lim_{y^{+}\to-\infty} \int_{\vec{x}_{i},\vec{x}_{f}}e^{i\hat{p}_{f}\cdot z_{f}-i\hat{p}_{i}\cdot z_{i }}\int{\cal D}p_{\Delta}^{+}\frac{\Theta(\langle p^{+}\rangle)}{2\langle p^{+ }\rangle}\] \[\times\int_{\vec{z}(\tau_{i}^{+})=\vec{z}_{i}}^{\vec{z}(\tau_{f}^ {+})=\vec{z}_{i}}{\cal D}^{3}\vec{z}\ e^{i\int_{\tau_{i}^{+}}^{\tau_{f}^{+}}d \tau^{+}\left[-\frac{m_{\ell}^{2}}{2\langle p^{+}\rangle}+\frac{\langle p^{+} \rangle}{2}\hat{\bf z}^{2}-p^{+}\hat{z}^{-}\right]}{\cal U}_{[\tau_{f}^{+},\tau _{i}^{+}]}[z^{+},\vec{z}]. \tag{105}\]
We note that in Eq. (105), although there is no explicit dependence on the variables \(x^{+}\) and \(y^{+}\), the path integral depends on them through the definition of the average longitudinal momentum given in Eq. (36), as well as through the coordinates \(\tau_{i}^{+}\) and \(\tau_{f}^{+}\).
|
2305.05568 | Resource Dimensioning for Single-Cell Edge Video Analytics | Edge intelligence is an emerging technology where the base stations located
at the edge of the network are equipped with computing units that provide
machine learning services to the end users. To provide high-quality services in
a cost-efficient way, the wireless and computing resources need to be
dimensioned carefully. In this paper, we address the problem of resource
dimensioning in a single-cell system that supports edge video analytics under
latency and accuracy constraints. We show that the resource-dimensioning
problem can be transformed into a convex optimization problem, and we provide
numerical results that give insights into the trade-offs between the wireless
and computing resources for varying cell sizes and for varying intensity of
incoming tasks. Overall, we observe that the wireless and computing resources
exhibit opposite trends; the wireless resources benefit from smaller cells, where
high attenuation losses are avoided, and the computing resources benefit from
larger cells, where statistical multiplexing allows for computing more tasks.
We also show that small cells with low loads have high per-request costs, even
when the wireless resources are increased to compensate for the low
multiplexing gain at the servers. | Jaume Anguera Peris, Viktoria Fodor | 2023-05-09T15:57:17Z | http://arxiv.org/abs/2305.05568v1 | # Resource Dimensioning for Single-Cell
###### Abstract
Edge intelligence is an emerging technology where the base stations located at the edge of the network are equipped with computing units that provide machine learning services to the end users. To provide high-quality services in a cost-efficient way, the wireless and computing resources need to be dimensioned carefully. In this paper, we address the problem of resource dimensioning in a single-cell system that supports edge video analytics under latency and accuracy constraints. We show that the resource-dimensioning problem can be transformed into a convex optimization problem, and we provide numerical results that give insights into the trade-offs between the wireless and computing resources for varying cell sizes and for varying intensity of incoming tasks. Overall, we observe that the wireless and computing resources exhibit opposite trends; the wireless resources favor from smaller cells, where high attenuation losses are avoided, and the computing resources favor from larger cells, where statistical multiplexing allows for computing more tasks. We also show that small cells with low loads have high per-request costs, even when the wireless resources are increased to compensate for the low multiplexing gain at the servers.
Edge intelligence, resource dimensioning, queuing theory, edge computing
## I Introduction
The popularity of mobile devices and the advancements in deep learning algorithms have created ample opportunities for developing applications that extract features and information from images and assist the mobile users in making decisions or obtaining relevant information from their environment. Yet, as these applications become more computationally intensive and delay constrained, it becomes increasingly apparent that mobile devices suffer from insufficient battery capacity, memory, or computational power.
One prominent solution to meet the demands of such delay-sensitive and computationally-complex tasks is to offload them to a server located at the edge of the network. This solution has gathered increasing interest for edge intelligence, which deploys AI-based models at the edge server to process the incoming tasks. The mobile users may then benefit from edge intelligence if the gains from computing at the edge server are greater than the costs of transmitting the information over the wireless link. In recent years, there has been remarkable progress in this field, including techniques for selecting edge servers for fast response times [1], finding and transmitting only the most informative parts of the image [2, 3], adapting the image format to the network conditions [3], or partitioning the deep learning models to distribute the workload of the image-processing tasks between the mobile users and the edge server [4]. Despite these advancements, there are still challenges that remain unsolved, such as how to dimension the wireless and the computing resources in the system to ensure that the edge server provides good service quality under acceptable costs. This is especially relevant because managing the bandwidth resources at the wireless network and properly allocating the computational resources at the edge server have been shown to be crucial in edge computing and video edge analytics, particularly when the wireless and computing resources may be limited [5, 6].
To address the resource-management problem, the literature offers a high variety of solutions. In [7], the authors formulated the trade-off between network latency, computational latency, and accuracy in an edge video-analytic system and proposed a multi-objective optimization problem to select the optimal edge server and video frame resolution. This work was later extended in [8] to address the dynamic decisions in the system using queuing theory and to account for the energy consumption of the entire offloading process. More recently, the work in [9] has further analyzed the optimal resource provisioning at the edge server when it accommodates a variety of applications, models, and configurations. With this last work, the focus has shifted from proposing the optimal image resolution, offloading frame rate, image encoding, or deep learning algorithm under given wireless and computing resources to understanding the trade-off between the required wireless and computing resources for a given set of user requirements. In this regard, we aim to further extend this line of research and examine the interrelationship between the wireless and computing resources as the system scales in terms of cell size and traffic intensity.
This paper addresses the resource-dimensioning problem for a single-cell system supporting video analytics. Specifically, we are interested in the minimum amount of wireless and computing resources that need to be deployed in the system to ensure acceptable service quality. For that, we model the entire offloading process under accuracy and latency constraints, and propose an optimization problem to find the relationship between the optimal parameters of the uplink transmission, the edge server, and the deep learning algorithm. We evaluate the optimal solution via numerical results and discuss the trade-off between the different parameters of the system for a cost-efficient resource dimensioning. Overall, our discussion and
results provide a clear understanding of how to provision the wireless network and the edge server to meet the demands of video-analytic users, especially when the system scales in size and accommodates a larger amount of incoming tasks.
The remainder of this paper is organized as follows. We describe the system and introduce the mathematical model for the entire offloading process in Sections II and III, respectively. Section IV presents the resource-dimensioning problem. Then, we provide the numerical results in Section V, followed by the concluding remarks in Section VI.
## II System description
Consider a single-cell cellular system with several mobile users uniformly distributed over the coverage area of the base station, as depicted in Figure 1. The base station is equipped with a mobile edge computing (MEC) server supporting AI applications for video analytics. These video-analytic applications run an object detection algorithm to assist the edge server and the mobile users in making informed decisions or obtaining relevant information from their environment. With that purpose, mobile users generate image processing tasks and offload them to the MEC server.
Since the amount of data transmitted from the users to the base station is considered to be much larger than the amount of data transmitted from the base station to the users, we focus on the uplink transmission and disregard the little time of eventual feedback to the user. With that in mind, the entire offloading process to perform the video analytics at the MEC server can be divided into two parts. First, users transmit their frame to the base station, and second, the server schedules and processes the frames following a queuing system.
For the uplink transmission, we consider a simple resource allocation where all mobile users receive the same bandwidth \(B\) for transmitting a frame to the base station. Moreover, we assume that all video-analytic users can always access the wireless network immediately when needed. This is possible because video-analytic users represent one part of the user population with specific needs and demands, so it is feasible to have a dedicated dynamic network slice that provides them access to the network with high priority [10].
For the processing policy at the MEC server, we consider that the edge server is directly connected to the base station via a high-speed link, such that all frames that are completely received at the base station are delivered to the server without delay. The frames are then queued to an infinite buffer and processed on a first-come-first-served basis by a GPU-enabled server with computing capacity \(H\).
Altogether, we denote the time to transmit a frame as \(T_{ul}\), the time spent in the queue as \(T_{w}\), and the time to process a frame as \(T_{s}\). We refer to the sum of these three times as the delay of the entire offloading process, and let each user specify a maximum value \(D\) for that delay, and a minimum probability \(\omega_{\min}\) of satisfying that delay requirement, i.e.,
\[\mathcal{P}\Big{(}T_{ul}+T_{w}+T_{s}\leq D\Big{)}\,\geq\,\omega_{\min}. \tag{1}\]
For simplicity, we consider all users in the network to have the same delay requirement \(D\) and \(\omega_{\min}\) for all the frames and the same payload for all the image processing tasks. Moreover, we consider that users want the accuracy of the object detection algorithm to be above a minimum threshold \(a_{min}\).
Overall, the objective of the resource-dimensioning problem is to find the minimum frequency resource \(B\) that should be allocated per frame and the minimum computational resources \(H\) that should be placed at the edge server to satisfy the latency and the accuracy requirements for all users and frames in the network. In this regard, the next section presents a comprehensive system model and derives the expressions of the parameters that characterize both the uplink transmission and the MEC server.
## III System model
### _Uplink communication and transmission time_
For the uplink transmission, we consider a free-space propagation model with frequency-division multiple access and i.i.d. Rayleigh fading. All mobile users utilize a maximum-ratio transmission precoding vector, such that the effect of the channel can be modelled at the receiver by the multiplicative Exponential random variable \(|g|^{2}\sim\exp{(\gamma)}\), where \(\gamma=\frac{\lambda_{c}^{2}}{16\pi^{2}}\), and \(\lambda_{c}\) is the wavelength of the carrier frequency. Moreover, we consider that all users utilize distance-proportional fractional power control of the form \(Pr^{\alpha\epsilon}\), where \(P\) is the reference power at \(1\) kilometer from the base station, \(r\) is the distance to the base station, \(\alpha\) is the path-loss exponent, and \(\epsilon\in[0,1]\) is the power control coefficient. To model the maximum transmission power of the mobile users, we consider the transmit powers \(Pr^{\alpha\epsilon}\) to be limited by the peak power \(\bar{P}\). Finally, we consider the noise power at the receiver to be \(\sigma^{2}\). Under this transmission model, the signal-to-noise ratio (SNR) at the base station for any randomly selected user is
\[\text{SNR}=\frac{|g|^{2}\,\ell(r,\alpha,\epsilon)\,r^{-\alpha}}{\sigma^{2}}, \tag{2}\]
where
\[\ell(r,\alpha,\epsilon)=\min\big{(}Pr^{\alpha\epsilon},\bar{P}\big{)}\,. \tag{3}\]
From there, we consider that any active user with given bandwidth \(B\) estimates the state of the channel from pilot
Fig. 1: End-to-end system model for a single-cell edge video analytics.
signals and adjusts its modulation and coding according to its instantaneous SNR. As a result, any active user achieves a transmission rate over a single coherence block close to the Shannon capacity, \(B\log_{2}(1+\text{SNR})\). At the same time, video analytics are typically optimized to operate with frame sizes on the order of hundreds of thousands of pixels [11]. Hence, the frames are sufficiently large for the transmission to take place over several coherence blocks, such that the transmission signals experience a high variety of fading realizations and the users experience the ergodic capacity of the channel. With that in mind, let us now calculate the ergodic capacity for any user transmitting a frame at distance \(r\) from the base station as [12, Theorem 3]
\[\bar{R} =\mathbb{E}[B\log_{2}(1+\text{SNR})]\] \[\overset{(a)}{=}\frac{B}{\log(2)}\int_{0}^{\infty}\mathcal{P} \left(\log(1+\text{SNR})\geq t\right)dt\] \[\overset{(b)}{=}\frac{B}{\log(2)}\int_{0}^{\infty}\mathcal{P} \left(|g|^{2}\geq\frac{(e^{t}-1)\,\sigma^{2}}{\ell(r,\alpha,\epsilon)\,r^{- \alpha}}\right)dt\] \[\overset{(c)}{=}\frac{B}{\log(2)}\int_{0}^{\infty}\exp\left(- \frac{(e^{t}-1)\,r^{\alpha}}{\gamma\,\ell(r,\alpha,\epsilon)}\sigma^{2} \right)dt\] \[\overset{(d)}{=}\frac{B}{\log(2)}\,\exp\left(\frac{BN_{0}\,r^{ \alpha}}{\gamma\,\ell(r,\alpha,\epsilon)}\right)\,E_{1}\left(\frac{BN_{0}\,r^ {\alpha}}{\gamma\,\ell(r,\alpha,\epsilon)}\right), \tag{4}\]
where \((a)\) follows from the fact that the expectation of any positive random variable \(X\) can be calculated in the sense of Lebesgue-Stieltjes as \(\mathbb{E}[X]=\int_{0}^{\infty}\mathcal{P}\left(X\geq t\right)dt\), \((b)\) follows from the definition of the SNR in (2), \((c)\) follows from the exponential distribution of \(|g|^{2}\), and \((d)\) follows from the definition of the exponential integral function \(E_{1}(x)=\int_{x}^{\infty}\frac{e^{-t}}{t}\,dt\). Notice that we expressed the noise power \(\sigma^{2}\) in (4) as the product between the noise power spectral density, \(N_{0}\), and the allocated bandwidth \(B\).
Now that we have characterized the uplink communication, let us proceed with the uplink transmission time. To generate the image processing task, users offload frames with resolution \(s\times s\) pixels, encoded at \(\theta\) bits per pixel, and compressed at rate \(\xi\):1, i.e., frames represented by \(\theta s^{2}/\xi\) bits. From there, and the assumption that users experience the ergodic nature of the channel, it follows from the analysis in [13] that the probability density function of the uplink transmission time over the entire uplink transmission for any user at distance \(r\) from the base station can be approximated as
\[f_{T_{ul}}(t)\longrightarrow\Delta\left(t-\frac{\theta s^{2}}{\xi\,\bar{R}} \right), \tag{5}\]
where \(\Delta(\cdot)\) represents the Dirac delta function.
### _Edge analytics and service time_
To perform the video analytics, the edge server utilizes a convolutional neural network pre-trained with YOLOv5 [14]. Since YOLOv5 is capable of handling frames of different resolutions without changing its associated learnable parameters, it is possible to parametrize the inference time of the detection algorithm [7], such that we can express the service time as
\[T_{s}=\frac{c_{1}s^{3}+c_{2}}{H} \tag{6}\]
for some positive constants \(c_{1}\), \(c_{2}\), where the units of the numerator are number of floating-point operations (in trillions).
Similarly, it is possible to parameterize the accuracy of the object detection algorithm as
\[a(s)=c_{3}-c_{4}e^{-c_{5}s}, \tag{7}\]
for some other positive constants \(c_{3}\), \(c_{4}\), and \(c_{5}\), where the accuracy of the detection is measured as the mean average precision of the object detection algorithm for a predefined threshold of the intersection over union [7]. For more information about the effect of the learning and inference processes on the constants \(c_{1},\ldots,c_{5}\), please refer to [15, 16].
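As an illustration of how (6) and (7) can be combined, the sketch below evaluates both parametrizations over a grid of frame resolutions and selects the smallest resolution that meets a given accuracy target. The constants \(c_{1},\ldots,c_{5}\), the computing capacity \(H\), and the threshold are placeholder values, not fitted ones.

```python
import numpy as np

# Inference time (6) and accuracy (7) over candidate resolutions s (placeholder constants).
c1, c2 = 2.5e-10, 1.0e-2          # TFLOP count per frame: c1*s^3 + c2
c3, c4, c5 = 0.62, 0.60, 7e-3     # accuracy a(s) = c3 - c4*exp(-c5*s)
H = 10.0                          # computing capacity at the edge server [TFLOPS]
a_min = 0.55                      # required detection accuracy

s_grid = np.arange(128, 1281, 32)
accuracy = c3 - c4 * np.exp(-c5 * s_grid)
service_time = (c1 * s_grid**3 + c2) / H

mask = accuracy >= a_min
s_star = int(s_grid[mask][0])                      # smallest resolution meeting the target
print(s_star, float(service_time[mask][0]))        # and its service time T_s [s]
```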
### _Service policy and waiting time_
We consider that users operate independently of each other and are uniformly distributed over a circular cell with area \(A\). The traffic generated per unit area is Poisson distributed with intensity \(\lambda\left[\text{frames}\cdot s^{-1}\cdot km^{-2}\right]\), and frames arrive at the MEC servers after a random \(T_{ul}\) time. Since the Poisson process is homogeneous in time, the process remains translation invariant in time [17], and the arrival process at the edge server is Poisson distributed with intensity \(\lambda A\).
Upon arrival, the frames are delivered on an infinite buffer and then wait to be processed on a first-come-first-served basis. The processing time \(T_{s}\) is deterministic and is determined by the resolution of the transmitted frames, the neural network considered for the video analytics, and the available computational resources at the edge server, as shown in (6). As a result, the server can be modelled as an M/D/1 queuing system with load \(\rho=\lambda A\,T_{s}\).
Considering the M/D/1 queuing system, the cumulative distribution function of \(T_{w}\) can be derived from the state probabilities using Erlang's principle of statistical equilibrium [18, Sections 10.4.2 and 10.4.4]. Specifically, for a given load \(\rho\) and service time \(T_{s}\), the complementary cumulative distribution function (CCDF) of the waiting time can be expressed as
\[\mathcal{P}(T_{w}>T)=1-(1-\rho)\sum_{\nu=0}^{\tilde{T}}\frac{\left[\rho(\nu-t) \right]^{\nu}}{\nu!}\,e^{-\rho(\nu-t)}, \tag{8}\]
for any \(T\geq 0\), where \(t=\frac{T}{T_{s}}\), \(\tilde{T}=\lfloor t\rfloor\), and \(\lfloor\cdot\rfloor\) is the greatest integer function. In terms of the function's behaviour, (8) is continuous and monotonically decreasing for any \(\rho\) and \(T_{s}\), and is concave for \(T\in[0,T_{s})\) and convex for \(T\in[T_{s},\infty)\).
Note, however, that the concavity of \(\mathcal{P}(T_{w}>T)\) poses a hindrance to formulating the optimization problem as a convex problem. Hence, we consider instead a convex approximation for (8). Among all possible approximations, we select Henk's approximation [19], as it offers a tighter approximation to the true CCDF for higher \(\rho\) and for any \(T\) close to or higher than \(T_{s}\). Considering this, and following the analysis in [19, Section 9.6.1], we get the following expression for the M/D/1 queue
\[\mathcal{P}^{\textit{Henk}}(T_{w}>T)=\frac{(1-\rho)}{\left(\rho\tau-1\right)}e^{-\lambda A(\tau-1)T},\quad\forall\,T\geq 0, \tag{9}\]
where \(\tau=-W_{-1}(-\rho e^{-\rho})/\rho\), and \(W_{-1}(\cdot)\) is the negative branch of the Lambert function.
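For reference, equations (8) and (9) are easy to evaluate numerically; a minimal sketch (hypothetical load and service time, not tied to Table I) using SciPy's Lambert W could look as follows.

```python
import numpy as np
from math import factorial, floor, exp
from scipy.special import lambertw

def ccdf_md1_exact(T, rho, Ts):
    """Exact M/D/1 waiting-time CCDF, equation (8)."""
    t = T / Ts
    total = sum((rho * (nu - t)) ** nu / factorial(nu) * exp(-rho * (nu - t))
                for nu in range(floor(t) + 1))
    return 1.0 - (1.0 - rho) * total

def ccdf_md1_henk(T, rho, Ts):
    """Approximation of the CCDF, equation (9); note that lambda * A = rho / Ts."""
    tau = -lambertw(-rho * np.exp(-rho), k=-1).real / rho
    return (1.0 - rho) / (rho * tau - 1.0) * exp(-(rho / Ts) * (tau - 1.0) * T)

# Hypothetical load and service time
rho, Ts = 0.7, 0.02
for T in (0.0, 0.5 * Ts, Ts, 2 * Ts, 5 * Ts):
    print(f"T={T:.3f}  exact={ccdf_md1_exact(T, rho, Ts):.4f}  "
          f"approx={ccdf_md1_henk(T, rho, Ts):.4f}")
```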
## IV Resource-dimensioning problem
Now that we have characterized the entire offloading process, we are ready to construct the optimization problem that minimizes the cost of the frequency and the computational resources. In the considered edge-analytics system, each frame receives the same bandwidth resources and all users utilize the same power control coefficient for the uplink transmission, so the users that experience the largest \(T_{ul}\) are the users located at the edge of the cell. As the other delays are independent of the users' location, the edge users always experience the worst overall performance. Therefore, we formulate the optimization problem with the constraints that all the frames arriving from these cell-edge users satisfy the latency requirement \(D\) with probability \(\omega_{\min}\), and satisfy the minimum object detection accuracy \(a_{\min}\). That is to say, if the latency and accuracy requirements of the frames of the cell-edge user are satisfied for some \(B\) and \(H\), then the latency and accuracy requirements of all the other users and frames will be satisfied as well for the same \(B\) and \(H\).
With that, let us now formulate the resource-dimensioning problem. First, to ease the notation and facilitate the analysis of the convexity of the optimization problem, we decouple (1) into two parts by introducing a free variable \(T\),
\[\mathcal{P}(T_{w}>T)\leq 1-\omega_{\min}\quad\textit{and}\quad T_{ul}+T+T_{s}=D. \tag{10}\]
Then, we adopt the weighted sum method [20] to accurately represent the complex preferences between the wireless and computing resources and construct a multi-objective function that aims at minimizing the cost of the wireless and computing resources per frame,
\[J(B,H|\lambda,r,\beta_{1},\beta_{2})=\beta_{1}B+(1-\beta_{1})\,\beta_{2}\, \frac{H}{\lambda\pi r^{2}}. \tag{11}\]
In particular, the wireless resource per frame is \(B\), and the computing resource \(H\) is shared by the requests from all active users, giving an average computing resource per request \(H_{f}=H/(\lambda\pi r^{2})\). The positive weight parameter \(\beta_{1}\in(0,1)\) characterizes the trade-off between \(B\) and \(H_{f}\), and the positive parameter \(\beta_{2}\) makes sure that the model has an adequate representation of the objective function by bringing \(B\) and \(H_{f}\) to the same order of magnitude.
Putting everything together and combining the results from the previous sections, the multi-objective optimization problem for the single-cell system can be expressed for a fixed \(r\), \(\lambda\), \(D\), \(\omega_{\min}\), \(a_{\min}\), and \(\rho_{\max}\) as
\[\min_{\{B,H,T,s\}} \beta_{1}B+(1-\beta_{1})\,\beta_{2}\,\frac{H}{\lambda\pi r^{2}}\] (12a) s.t. \[\mathcal{P}(T_{w}>T)\leq 1-\omega_{\min}, \tag{12b}\] \[\frac{s^{2}\kappa_{1}}{\phi(B,\kappa_{2})}+T+\frac{c_{1}s^{3}+c_{ 2}}{H}=D,\] (12c) \[\kappa_{3}(c_{1}s^{3}+c_{2})\leq H,\] (12d) \[B>0,\;H>0,\;T\geq 0,\;s\geq\kappa_{4}, \tag{12e}\]
where
\[\phi(B,\kappa_{2})=B\exp\left(B\kappa_{2}\right)E_{1}\left(B\kappa_{2}\right),\]
and
\[\kappa_{1}=\frac{\theta}{\xi}\log(2), \kappa_{2}=\frac{N_{0}\,r^{\alpha}}{\gamma\,\ell(r,\alpha,\epsilon )}, \tag{13}\] \[\kappa_{3}=\frac{\lambda\pi r^{2}}{\rho_{\max}}, \kappa_{4}=\frac{1}{c_{5}}\ln\left(\frac{c_{4}}{c_{3}-a_{\min}}\right)\]
are just constants.
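For reference, a small helper that assembles \(\kappa_{1},\ldots,\kappa_{4}\) from the system parameters might look as follows; all numeric values in the example call are placeholders, and `ell` stands for the term \(\ell(r,\alpha,\epsilon)\) appearing in (13).

```python
from math import log, pi

def problem_constants(theta, xi, N0, r, alpha, gamma, ell,
                      lam, rho_max, c3, c4, c5, a_min):
    """Constants kappa_1..kappa_4 of (13) from the system parameters."""
    kappa1 = theta / xi * log(2)
    kappa2 = N0 * r**alpha / (gamma * ell)
    kappa3 = lam * pi * r**2 / rho_max
    kappa4 = (1.0 / c5) * log(c4 / (c3 - a_min))
    return kappa1, kappa2, kappa3, kappa4

# Placeholder values for illustration only
print(problem_constants(theta=24, xi=100, N0=4e-21, r=500, alpha=3.7,
                        gamma=1.0, ell=1e-7, lam=1e-5, rho_max=0.9,
                        c3=1.0, c4=1.578, c5=6.5e-3, a_min=0.9))
```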
Specifically, the optimization problem in (12) aims at dimensioning the wireless and computing resources of a single-cell edge-analytic system such that the edge users, and consequently all the other users within the cell, have a guaranteed probability \(\omega_{\min}\) of successfully offloading and processing their frames within the delay requirement \(D\), and a guaranteed accuracy \(a_{\min}\) for the AI-based object detection algorithm. The optimization variables are \(B\), \(H\), \(T\), and \(s\). Constraints (12b) and (12c) guarantee that the optimal \(B\) and \(H\) satisfy the latency requirements for each user and frame, and constraint (12d) guarantees that the server satisfies the demands of all users and does not overload. Finally, constraint (12e) limits the domain of the variables and guarantees that the optimal \(s\) satisfies the accuracy requirements for each user and frame.
As discussed in Section III-C, the cumulative distribution function of \(T_{w}\) is non-convex over the domain of \(T\), so we instead consider Henk's approximation (9). Similarly, the equality (12c) introduces a non-affine condition to our optimization problem, which may result in a non-convex feasible region. Hence, we substitute the equality with an inequality. This change does not alter the optimal result in any way, as we are trying to minimize \(B\) and \(H\), so there is no incentive to converge to a solution where the total delay of the entire offloading process is less than \(D\). Finally, to preserve the convexity of \(\phi(B,\kappa_{2})\), we utilize the epigraph representation of the \(\min\) operator in the peak power constraint (3).
Considering these, we can re-formulate the optimization problem for a fixed \(r\), \(\lambda\), \(D\), \(\omega_{\min}\), \(a_{\min}\), and \(\rho_{\max}\) as
\[\min_{\{B,H,T,s\}} \beta_{1}B+(1-\beta_{1})\,\beta_{2}\,\frac{H}{\lambda\pi r^{2}}\] (14a) s.t. \[\mathcal{P}^{\textit{Henk}}(T_{w}>T)\leq 1-\omega_{\min}, \tag{14b}\] \[\frac{s^{2}\kappa_{1}}{\phi_{\textit{low}}(B,\kappa_{2})}+T+ \frac{c_{1}s^{3}+c_{2}}{H}\leq D\] (14c) \[\frac{s^{2}\kappa_{1}}{\phi_{\textit{peak}}(B,\kappa_{2})}+T+ \frac{c_{1}s^{3}+c_{2}}{H}\leq D\] (14d) \[\text{(12d)}-\text{(12e)}.\]
where
\[\phi_{\textit{low}}(B,\kappa_{2}) =\Big{\{}\phi(B,\kappa_{2})\,:\,\ell(r,\alpha,\epsilon)=Pr^{ \alpha\epsilon}\Big{\}},\] \[\phi_{\textit{peak}}(B,\kappa_{2}) =\Big{\{}\phi(B,\kappa_{2})\,:\,\ell(r,\alpha,\epsilon)=\bar{P} \Big{\}}.\]
**Lemma 1**.: _The problem formulated in (14) is convex._
Proof: The objective function is an affine function, and the feasible region is jointly convex in all variables because all constraints are convex. Therefore, (14) is a convex optimization problem.
**Remark 1**.: _The problem formulation could be further extended to consider three more limitations. First, a maximum bandwidth allocation per frame to account for the limitations of the user equipment. Second, a maximum processing capacity at the server to account for energy or cost limitations. Third, an irregular coverage area to account for fading, shadowing, or asymmetric channel conditions. In either case, since this paper aims to find the pure relationship between \(B\) and \(H\) and serve as a basis for more complex problems, we leave these possible extensions for future work._
Since we are formulating the optimization problem with Henk's approximation of the CCDF of the waiting time, it is possible that the optimal solution \((B^{*},H^{*},T^{*},s^{*})\) for the convex problem (14) is neither a feasible point nor an optimal solution for the optimization problem (12). However, we can calculate and compensate for this approximation error in advance as stated below.
**Theorem 1**.: _The error incurred from using Henk's approximation can be calculated in advance. Specifically, we can compensate for this approximation error by tightening the reliability requirement to \(\omega_{\min}+0.017\) to make sure that the optimal solution for the convex problem (14) is always a feasible solution for the original problem (12)._
Proof: Define the maximum error between the true CCDF of the waiting time and Henk's approximation as
\[e^{*}(\rho)=\max_{T}\,\Big{(}\mathcal{P}(T_{w}>T)-\mathcal{P}^{\textit{Henk}}( T_{w}>T)\Big{)}. \tag{15}\]
Note that we are not interested in the absolute value of the difference because the only cases for which the optimal solution of the convex problem does not satisfy constraint (14b) are those where Henk's approximation takes lower values than the true CCDF for the same \(T\). Also, note that the maximum error does not depend on the service time \(T_{s}\).
Considering this, we can take the first derivative and find that the argument that maximizes the error is
\[T_{\textit{maxError}}=-\frac{T_{s}}{\rho\tau}\log\left(\frac{\rho\tau-1}{ \tau-1}\right).\]
Then, if we numerically evaluate \(e^{*}(\rho)\) at \(T_{\textit{maxError}}\), we can find that \(e^{*}(\rho)\leq 0.017,\;\forall\rho\). Hence, the maximum difference between the true CCDF of the waiting time and Henk's approximation is at most \(0.017\) for any server load and service time. Consequently, if we set the requirement in the new constraint (14b) to \(\omega_{\min}+0.017\), the optimal solution for the convex problem (14) will always be a feasible solution for the original problem (12). This concludes the proof.
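This bound is easy to check numerically; the self-contained sketch below evaluates \(e^{*}(\rho)\) at \(T_{\textit{maxError}}\) using (8), (9), and the expression above, over an arbitrary grid of loads.

```python
import numpy as np
from math import factorial, floor, exp, log
from scipy.special import lambertw

def max_henk_error(rho, Ts=1.0):
    """Error (15) between the exact CCDF (8) and approximation (9), evaluated at T_maxError."""
    tau = -lambertw(-rho * np.exp(-rho), k=-1).real / rho
    T = -(Ts / (rho * tau)) * log((rho * tau - 1.0) / (tau - 1.0))
    t = T / Ts
    exact = 1.0 - (1.0 - rho) * sum((rho * (nu - t)) ** nu / factorial(nu)
                                    * exp(-rho * (nu - t)) for nu in range(floor(t) + 1))
    approx = (1.0 - rho) / (rho * tau - 1.0) * exp(-rho * (tau - 1.0) * t)
    return exact - approx

loads = np.linspace(0.05, 0.95, 19)
print(max(max_henk_error(rho) for rho in loads))  # stays below the 0.017 bound of Theorem 1
```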
Finally, it is important to mention that the problem formulation (14) does not lead to closed-form solutions of the optimal \(B\), \(H\), or \(H_{f}\) values. However, to understand the effect of the joint dimensioning of these resources, it is meaningful to evaluate how they need to be scaled when one of them can be considered costless.
**Theorem 2**.: _The minimum wireless resources per frame and the minimum computational resource at the MEC server that are required to satisfy the latency and accuracy requirements are given by_
\[\begin{split} B|_{\min}&=\frac{-\Omega_{1}/\kappa_ {2}}{\Omega_{1}+W_{-1}\left(-\Omega_{1}\,\exp\left(-\Omega_{1}\right)\right)}, \\ H|_{\min}&=\Omega_{2}\times\max\left\{\kappa_{3},\frac{1}{D\left(1-\Omega_{1}\right)}\right\},\end{split}\]
_where_
\[\Omega_{1}=\kappa_{2}\frac{\kappa_{1}\kappa_{4}^{2}}{D},\;\;\;\;\;\Omega_{2}= c_{1}\kappa_{4}^{3}+c_{2},\]
_and \(\kappa_{1}\), \(\kappa_{2}\), \(\kappa_{3}\), and \(\kappa_{4}\) are defined in (13)._
Proof: The minimum number of frequency resources can be derived by assuming an infinite number of computational resources and by finding an upper bound of the ergodic capacity using Jensen's inequality. Similarly, the minimum number of computational resources can be derived by assuming an infinite number of frequency resources. The expressions then follow from these assumptions and the constraints of the optimization problem given in (14).
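For completeness, the closed forms of Theorem 2 can be evaluated directly; the sketch below uses SciPy's \(W_{-1}\) branch, and every numeric value in the example call is a placeholder.

```python
import numpy as np
from scipy.special import lambertw

def min_resources(kappa1, kappa2, kappa3, kappa4, c1, c2, D):
    """Minimum per-frame bandwidth and minimum server capacity from Theorem 2."""
    omega1 = kappa2 * kappa1 * kappa4**2 / D
    omega2 = c1 * kappa4**3 + c2
    B_min = (-omega1 / kappa2) / (omega1 + lambertw(-omega1 * np.exp(-omega1), k=-1).real)
    H_min = omega2 * max(kappa3, 1.0 / (D * (1.0 - omega1)))
    return B_min, H_min

# Placeholder constants for illustration only
print(min_resources(kappa1=0.166, kappa2=2e-7, kappa3=7.9, kappa4=424.0,
                    c1=7e-10, c2=0.083, D=0.1))
```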
## V Numerical results
This section evaluates the optimal resource dimensioning of a single-cell system supporting edge video analytics. Specifically, we solve the optimization problem in (14) using the CVXOPT convex optimization toolbox from MATLAB. To account for the two mathematical functions that are not supported in CVXOPT, namely, the negative branch of the Lambert function, \(W_{-1}(\cdot)\), and the exponential integral function, \(E_{1}(\cdot)\), we consider their equivalent expressions from [21, Eq. 5] and [22, Eq. 5.1.11], respectively. With that, we analyze the optimal solution of the optimization problem and discuss the effect of different parameters on the design of cost-efficient strategies. In all cases, we consider the system parameters from Table I, corresponding to a typical cellular network [23] and a typical video-analytic use-case [24], unless stated otherwise. For the detection algorithm, we set the parameters that define (6) and (7) to \(c_{1}=7\cdot 10^{-10}\), \(c_{2}=0.083\), \(c_{3}=1\), \(c_{4}=1.578\)
and \(c_{5}=6.5\cdot 10^{-3}\), which have been shown to fit the ground truth with a root mean square error less than \(0.03\)[7]. Also, recall that we assumed for simplicity that all frames have the same maximum delay requirements \(D\) and \(\omega_{\min}\), and the same minimum object detection accuracy \(a_{\min}\).
Figure 2(a) shows the optimal bandwidth and computing resources per frame, namely, \(B\) and \(H_{f}=H/(\lambda\pi r^{2})\), as a function of the cell radius \(r\) for different traffic intensities \(\lambda\). If we analyze the resources jointly, we observe a multiplexing effect for both the frequency and the computational resources; the higher the traffic in the system, the lower the number of resources required per frame. If we analyze the resources individually, we observe that the bandwidth resources benefit from having small cell radii due to lower signal attenuation, whereas the computing resources benefit from larger cell radii due to increased multiplexing gain. We see the effect of joint dimensioning when the cell sizes are small and the traffic intensity is low. In this scenario, the server needs a significant amount of resources to make sure that a frame satisfies the delay requirements whenever it arrives at the queue, but that, in turn, results in an underutilized server because the arrivals are less frequent. As the use of shared computational resources is very inefficient in this case, the per-frame bandwidth allocation needs to be increased to avoid very high computation costs. Consequently, the bandwidth needs are not monotonically increasing for increasing \(r\).
Figure 2(b) shows the optimal value of the objective function (11) and the optimal server load as a function of the cell radius \(r\) for different traffic intensities \(\lambda\). The figure demonstrates that the server needs to be over-dimensioned significantly in small cells under low load. We also observe that a single-cell system with moderate or higher traffic behaves quite similarly for increasing \(r\), but these are distinct from the low traffic case, where small cells become very expensive, despite the good transmission conditions.
Figure 2(c) shows the optimal resource dimensioning for a single-cell system with cell radius \(r=0.5\) km as a function of the trade-off parameter \(\beta_{1}\) for different traffic intensities \(\lambda\). Overall, smaller values of \(\beta_{1}\) prioritize having more frequency resources than computational resources, and vice versa. In the extremes, the parameter that has decreasing weight in the objective function grows without bound, and the other parameter becomes the minimum resources per frame that should be allocated to satisfy the delay requirements of all the users. Notice also that the figure shows symmetry around \(\beta_{1}=0.5\), thus corroborating the adequate choice of the scaling parameter \(\beta_{2}\).
Finally, in Figure 2(d) we evaluate the optimal bandwidth resource and computing resource per frame in the extreme cases \(\beta_{1}\to 0\) and \(\beta_{1}\to 1\). The optimal values from (14) are shown as a function of the cell radius \(r\) for different traffic intensities \(\lambda\), and are compared to the theoretical lower bounds from Theorem 2, with \(H_{f}|_{\min}=H|_{\min}/(\lambda\pi r^{2})\). We observe that the bandwidth resources per frame become larger for increasing cell radii and do not depend on the traffic in the system. These results are contrary to the observations from Figure 2(a), where the computational costs are also taken into account and the bandwidth resources compensate for the inefficient computation. In addition, we observe that the computing resources per frame exhibit a moderate increase for small cell sizes, and remain nearly constant for increasing traffic intensity \(\lambda A\). This is again contrary to the observations from Figure 2(a), where the wireless costs are also taken into account and the computing resources are tuned to obtain waiting times that satisfy the minimum success probability. In all cases, the theoretical equivalents are close to the optimal values obtained from the optimization problem, with \(H_{f}|_{\min}\) being a tighter approximation to \(H_{f}\) than \(B|_{\min}\) to \(B\). This is because the minimum computing resources are derived directly from the constraints of the problem, whereas the minimum bandwidth resources require Jensen's inequality to find a closed-form expression.
## VI Conclusions
This paper extends a line of research in edge video analytics in which the focus is no longer on adjusting the demands of the users according to the available resources in the system, but rather on dimensioning the wireless and computing resources in the system to meet the demands of the mobile users. For that, we presented an entire end-to-end system model and proposed a multi-objective optimization problem to characterize the trade-off between the wireless and computing resources under latency and accuracy constraints. We showed that the resource-dimensioning problem is non-convex, but can be reformulated into a convex problem. The results show that the wireless and computing resources exhibit opposite trends; the wireless resources benefit from smaller cells because of lower signal attenuation, and the computing resources benefit from larger cells because the server avoids being under-utilized. Besides, the wireless and computing resources benefit from higher traffic, as the system can favourably multiplex the incoming tasks. We also derived expressions for the minimum resources needed in the system to meet the demands of the users. We showed that the minimum wireless resources only depend on the distance between the user and the base station, and that the minimum computing resources exhibit a moderate increase for small cell sizes and remain nearly invariant for high traffic arrival intensities. Overall, we conclude that the incoming traffic at the edge server, the cell radius, and the cost of the bandwidth and the computational capacity are decisive when dimensioning the resources of single-cell systems supporting edge video analytics.
|
2306.04326 | Deciding whether an Attributed Translation can be realized by a Top-Down
Transducer | We prove that for a given partial functional attributed tree transducer with
monadic output, it is decidable whether or not an equivalent top-down
transducer (with or without look-ahead) exists. We present a procedure that
constructs an equivalent top-down transducer (with or without look-ahead) if it
exists. | Sebastian Maneth, Martin Vu | 2023-06-07T10:45:33Z | http://arxiv.org/abs/2306.04326v3 | # Deciding whether an Attributed Translation can be realized by a Top-Down Transducer
###### Abstract
We prove that for a given partial functional attributed tree transducer with monadic output, it is decidable whether or not an equivalent top-down transducer (with or without look-ahead) exists. We present a procedure that constructs an equivalent top-down transducer (with or without look-ahead) if it exists.
## 1 Introduction
It is well known that _two-way (string) transducers_ are strictly more expressive than _one-way (string) transducers_. For instance, a two-way transducer can compute the reverse of an input string, which cannot be achieved by any one-way transducer. For a given (functional) two-way transducer, it is a natural question to ask: Can its translation be realized by a one-way transducer? This question was recently shown to be decidable [8], see also [1]. Decision procedures of this kind have several advantages; for instance, the smaller class of transducers may be more efficient to evaluate (i.e., may use less resources), or the smaller class may enjoy better closure properties than the larger class.
In the context of trees, possible counterparts of two-way (string) transducers and one-way (string) transducers are _attributed tree transducers_ and _top-down tree transducers_, respectively. As the name suggests, states of the latter process an input tree strictly in a top-down fashion, while the former can, analogously to two-way transducers, change direction as well. As for their string counterparts, attributed tree transducers are strictly more expressive than top-down tree transducers [10]. Hence, for a (functional) attributed tree transducer, it is a natural question to ask: Can its translation be realized by a deterministic top-down transducer?
In this paper, we address this problem for a subclass of attributed tree transducers. In particular, we consider attributed tree transducer with _monadic output_ meaning that all output trees that the transducer produces are _monadic_, i.e., "strings". We show that the question whether or not for a given attributed tree transducer \(A\) with monadic output an equivalent top-down transducer (with or without look-ahead) \(T\) exists can be reduced to the question whether or not a given two-way transducer can be defined by a one-way transducer.
First, we test whether \(A\) can be equipped with look-ahead so that \(A\) has the _single-path property_, which means that attributes of \(A\) only process a single
input path. Intuitively, this means that given an input tree, attributes of \(A\) only process nodes occurring in a node sequence \(v_{1},\ldots,v_{n}\) where \(v_{i}\) is the parent of \(v_{i+1}\) while equipping \(A\) with look-ahead means that input trees of \(A\) are preprocessed by a deterministic bottom-up relabeling. The single-path property is not surprising given that if \(A\) is equivalent to some top-down tree transducer \(T\) (with look-ahead) then states of \(T\) process the input tree in exactly that fashion. Assume that \(A\) extended with look-ahead satisfies this condition. We then show that \(A\) can essentially be converted into a two-way (string) transducer. We now apply the procedure of [1]. It can be shown that the procedure of [1] yields a one-way (string) transducer if and only if a (nondeterministic) top-down tree transducer \(T\) exists which uses the same look-ahead as \(A\). We show that once we have obtained a one-way transducer \(T_{O}\) equivalent to the two-way transducer converted from \(A\), we can compute \(T\) from \(T_{O}\), thus obtaining our result. It is well-known that for a functional top-down tree transducer (with look-ahead) an equivalent deterministic top-down tree transducer with look-ahead can be constructed [5]. Note that for the latter kind of transducer, it can be decided whether or not an equivalent transducer _without_ look-ahead exists [12] (this is because transducers with monadic output are linear by default).
We show that the above results are also obtainable for attributed tree transducers with look-around. This model was introduced by Bloem and Engelfriet [2] due to its better closure properties. For this we generalize the result from [5], and show that every functional partial attributed tree transducer (with look-around) is equivalent to a deterministic attributed tree transducer with look-around.
Note that in the presence of origin, it is well known that even for (non-deterministic) macro tree transducers (which are strictly more expressive than attributed tree transducers) it is decidable whether or not an origin-equivalent deterministic top-down tree transducer with look-ahead exists [9]. In the absence of origin, the only definability result for attributed transducers that we are aware of is that it is decidable for such transducers (and even for macro tree transducers) whether or not they are of linear size increase [7]; and if so, an equivalent single-use restricted attributed tree transducer can be constructed (see [6]).
## 2 Attributed Tree Transducers
For \(k\in\mathbb{N}\), we denote by \([k]\) the set \(\{1,\ldots,k\}\). Let \(\Sigma=\{e_{1}^{k_{1}},\ldots,e_{n}^{k_{n}}\}\) be a _ranked alphabet_, where \(e_{j}^{k_{j}}\) means that the symbol \(e_{j}\) has _rank_\(k_{j}\). By \(\Sigma_{k}\) we denote the set of all symbols of \(\Sigma\) which have rank \(k\). The set \(T_{\Sigma}\) of _trees over \(\Sigma\)_ consists of all strings of the form \(a(t_{1},\ldots,t_{k})\), where \(a\in\Sigma_{k}\), \(k\geq 0\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}\). Instead of \(a()\) we simply write \(a\). For a tree \(t\in T_{\Sigma}\), its nodes are referred to as follows: We denote by \(\epsilon\) the root of \(t\) while \(u.i\) denotes the \(i\)-th child of the node \(u\). Denote by \(V(t)\) the set of nodes of \(t\), e.g. for the tree \(t=f(a,f(a,b))\) we have \(V(t)=\{\epsilon,1,2,2.1,2.2\}\). For \(v\in V(t)\), \(t[v]\) is the label of \(v\), \(t/v\) is the subtree of \(t\) rooted at \(v\), and \(t[v\gets t^{\prime}]\) is obtained from \(t\) by replacing \(t/v\) by \(t^{\prime}\). For a set \(\Lambda\) disjoint with \(\Sigma\), we define \(T_{\Sigma}[\Lambda]\) as \(T_{\Sigma^{\prime}}\) where \(\Sigma^{\prime}_{0}=\Sigma_{0}\cup\Lambda\) and \(\Sigma^{\prime}_{k}=\Sigma_{k}\) for \(k>0\).
A _(partial deterministic) attributed tree transducer_ (or \(att\) for short) is a tuple \(A=(S,I,\Sigma,\Delta,a_{0},R)\) where \(S\) and \(I\) are disjoint sets of _synthesized attributes_ and _inherited attributes_, respectively. The sets \(\Sigma\) and \(\Delta\) are ranked alphabets of _input_ and _output symbols_, respectively. We denote by \(a_{0}\in S\) the _initial attribute_ and define that \(R=(R_{\sigma}\mid\sigma\in\Sigma\cup\{\#\})\) is a collection of finite sets of rules. We implicitly assume \(att\)'s to include a unique symbol \(\#\notin\Sigma\) of rank 1, the so-called _root marker_, that only occurs at the root of a tree. Let \(\sigma\in\Sigma\cup\{\#\}\) be of rank \(k\geq 0\). Let \(\pi\) be a variable for nodes for which we define \(\pi 0=\pi\). For every \(a\in S\) the set \(R_{\sigma}\) contains at most one rule of the form \(a(\pi)\to t\) and for every \(b\in I\) and \(i\in[k]\), \(R_{\sigma}\) contains at most one rule of the form \(b(\pi i)\to t^{\prime}\) where \(t,t^{\prime}\in T_{\Delta}[\{a^{\prime}(\pi i)\mid a^{\prime}\in S,i\in[k]\} \cup\{b(\pi)\mid b\in I\}]\). The right-hand sides \(t\), \(t^{\prime}\) are denoted by \(\operatorname{rhs}_{A}(\sigma,a(\pi))\) and \(\operatorname{rhs}_{A}(\sigma,b(\pi i))\), respectively, if they exist. If \(I=\emptyset\) then we call \(A\) a deterministic top-down tree transducer (or simply a \(dt\)). In this case, we call \(S\) a set of _states_ instead of attributes.
To define the semantics of the \(att\)\(A\), we first define the _dependency graph of \(A\) for the tree \(s\in T_{\Sigma}\)_ as \(D_{A}(s)=(V,E)\) where \(V=\{(a_{0},\epsilon)\}\cup((S\cup I)\times(V(\#(s))\setminus\{\epsilon\}))\) and \(E=\{((\gamma^{\prime},uj),(\gamma,ui))\mid u\in V(s),\gamma^{\prime}(\pi j)\) occurs in \(\operatorname{rhs}_{A}(s[u],\gamma(\pi i))\), with \(0\leq i,j\) and \(\gamma,\gamma^{\prime}\in S\cup I\}\). If \(D_{A}(s)\) contains a cycle for some \(s\in T_{\Sigma}\) then \(A\) is called _circular_. We define \(v0=v\) for a node \(v\). For a given tree \(s\in T_{\Sigma}\cup\{\#(s^{\prime})\mid s^{\prime}\in T_{\Sigma}\}\), let \(N=\{a_{0}(\epsilon)\}\cup\{\alpha(v)\mid\alpha\in S\cup I,v\in V(s)\setminus\{ \epsilon\}\}\). For trees \(t,t^{\prime}\in T_{\Delta}[N]\), \(t\Rightarrow_{A,s}t^{\prime}\) holds if \(t^{\prime}\) is obtained from \(t\) by replacing a node labeled by \(\gamma(vi)\), where \(i=0\) if \(\gamma\in S\) and \(i>0\) if \(\gamma\in I\), by \(\operatorname{rhs}_{A}(s[v],\gamma(\pi i))[\gamma^{\prime}(\pi j)\leftarrow \gamma^{\prime}(vj)\mid\gamma^{\prime}\in S\cup I,0\leq j]\). If \(A\) is non-circular, then every \(t\in T_{\Delta}[N]\) has a unique normal form with respect to \(\Rightarrow_{A,s}\) denoted by \(\operatorname{nf}(\Rightarrow_{A,s},t)\). The _translation realized by \(A\)_, denoted by \(\tau_{A}\), is the set \(\{(s,\operatorname{nf}(\Rightarrow_{A,\#(s)},a_{0}(\epsilon)))\in T_{\Sigma} \times T_{\Delta}\}\). As \(A\) is deterministic, \(\tau_{A}\) is a partial function. Thus we also write \(\tau_{A}(s)=t\) if \((s,t)\in\tau_{A}\) and say that on input \(s\), \(A\) produces the tree \(t\). Denote by \(\operatorname{dom}(A)\) the _domain of \(A\)_, i.e., the set of all \(s\in T_{\Sigma}\) such that \((s,t)\in\tau_{A}\) for some \(t\in T_{\Delta}\). Similarly, \(\operatorname{range}(A)\) denotes the _range of \(A\)_, i.e., the set of all \(t\in T_{\Delta}\) such that for some \(s\in T_{\Sigma}\), \((s,t)\in\tau_{A}\).
Example 1: Consider the \(att\)\(A_{1}=(S,I,\Sigma,\Delta,a,R)\) where \(\Sigma=\{f^{2},e^{0}\}\) and \(\Delta=\{g^{1},e^{0}\}\). Let the set of attributes of \(A\) be given by \(S=\{a\}\) and \(I=\{b\}\). We define \(R_{f}=\{a(\pi)\to a(\pi 1),b(\pi 1)\to a(\pi 2),b(\pi 2)\to b(\pi)\}\). Furthermore we define \(R_{e}=\{a(\pi)\to g(b(\pi))\}\) and \(R_{\#}=\{a(\pi)\to a(\pi 1),b(\pi 1)\to e\}\). The tree transformation realized by \(A_{1}\) contains all pairs \((s,t)\) such that if \(s\) has \(n\) leaves, then \(t\) is the tree over \(\Delta\) that contains \(n\) occurrences of the symbol \(g\). The domain of \(A\) is \(T_{\Sigma}\) and its range is \(T_{\Delta}\setminus\{e\}\). The dependency graph of \(A_{1}\) for the tree \(f(f(e,e),f(e,e))\) is depicted in Figure 1. As usual, occurrences of inherited and synthesized attributes are placed to the left and right of nodes, respectively. If clear from context, names of attribute occurrences are omitted.
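To make the attribute flow of Example 1 concrete, the following sketch evaluates \(A_{1}\) on trees over \(\{f,e\}\), represented as nested tuples; it is a hand-specialized evaluator for the rules of \(A_{1}\), not a general \(att\) engine, and the function names are ours.

```python
# Trees over {f/2, e/0} as nested tuples: ("f", left, right) or ("e",)
def eval_a(node, path):
    """Synthesized attribute a at the subtree `node`; `path` is the list of
    (parent, child-index) pairs leading to it, so that b can walk back up."""
    if node[0] == "f":                     # rule a(pi) -> a(pi 1) in R_f
        return eval_a(node[1], path + [(node, 1)])
    return "g(" + eval_b(path) + ")"       # rule a(pi) -> g(b(pi)) in R_e

def eval_b(path):
    """Inherited attribute b at the node reached via `path`."""
    if not path:                           # rule b(pi 1) -> e in R_#
        return "e"
    parent, i = path[-1]
    if i == 1:                             # rule b(pi 1) -> a(pi 2) in R_f
        return eval_a(parent[2], path[:-1] + [(parent, 2)])
    return eval_b(path[:-1])               # rule b(pi 2) -> b(pi) in R_f

s = ("f", ("f", ("e",), ("e",)), ("f", ("e",), ("e",)))
print(eval_a(s, []))   # g(g(g(g(e)))): one g per leaf, as described in Example 1
```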
We emphasize that we always consider input trees to be trees over \(\Sigma\). The root marker is a technical necessity. For instance, the translation of \(A_{1}\) in Example 1 is not possible without it. It is well known that whether or not a given \(att\)\(A\) is circular can be tested by computing the _is-dependencies_ of \(A\)[11]. Informally, the is-dependency of a tree \(s\) depicts the dependencies between inherited
and synthesized attributes at the root of \(s\). Formally, the is-dependency of \(s\) is the set \(\mathrm{IS}_{A}(s)=\{(b,a)\in I\times S\mid a(1)\) is reachable from \(b(1)\) in \(D_{A}(s)\}\). As a dependency graph is a directed graph, we say for \(v,v_{1},v_{2}\in V\), that \(v_{1}\) is _reachable_ from \(v_{2}\) if (a) \(v_{1}=v_{2}\) or (b) \(v\) is reachable from \(v_{2}\) and \((v,v_{1})\in E\) as usual. If \(s=\sigma(s_{1},\ldots,s_{k})\) and the is-dependencies of \(s_{1},\ldots,s_{k}\) are known, then the is-dependency of \(s\) can be easily computed in a bottom-up fashion using the rules of \(R_{\sigma}\). In the rest of the paper, we only consider non-circular \(att\)'s.
We define an _attributed tree transducer with look-ahead (or \(att^{R}\))_ as a pair \(\hat{A}=(B,A)\) where \(B\) is a deterministic bottom-up relabeling which preprocesses input trees for the \(att\)\(A\). A(_deterministic_) _bottom-up relabeling_\(B\) is a tuple \((P,\Sigma,\Sigma^{\prime},F,R)\) where \(P\) is the set of states, \(\Sigma\), \(\Sigma^{\prime}\) are ranked alphabets and \(F\subseteq P\) is the set of final states. For \(\sigma\in\Sigma\) and \(p_{1},\ldots,p_{k}\in P\), the set \(R\) contains at most one rule of the form \(\sigma(p_{1}(x_{1}),\ldots,p_{k}(x_{k}))\to p(\sigma^{\prime}(x_{1},\ldots,x_{ k}))\) where \(p\in P\) and \(\sigma^{\prime}\in\Sigma^{\prime}\). These rules induce a derivation relation \(\Rightarrow_{B}\) in the obvious way. The translation realized by \(B\) is given by \(\tau_{B}=\{(s,t)\in T_{\Sigma}\times T_{\Delta}\mid s\Rightarrow_{B}^{*}p(t), p\in F\}\). As \(\tau_{B}\) is a partial function, we also write \(\tau_{B}(s)=t\) if \((s,t)\in\tau_{B}\). The translation realized by \(\hat{A}\) is given by \(\tau_{\hat{A}}=\{(s,t)\in T_{\Sigma}\times T_{\Delta}\mid t=\tau_{A}(\tau_{B} (s))\}\). We write \(\tau_{\hat{A}}(s)=t\) if \((s,t)\in\tau_{\hat{A}}\) as usual. If \(A\) is a \(dt\) then \(\hat{A}\) is called a deterministic top-down transducer with look-ahead (or \(dt^{R}\)).
## 3 The Single Path Property
In the this section, we show that given an \(att\) with monadic output, it is decidable whether or not an equivalent \(dt^{R}\) exists. Monadic output means that output symbols are at most of rank 1. First consider the following definition. For an \(att\)\(A\) with initial attribute \(a_{0}\), an input tree \(s\) and \(v\in V(s)\), we say that on input \(s\), an attribute \(\alpha\) of \(A\)_processes_ the node \(v\) if \((a_{0},\epsilon)\) is reachable from \((\alpha,1.v)\) in \(D_{A}(s)\). Recall that the dependency graph for \(s\) is defined on the tree \(\#(s)\). Now, consider an arbitrary \(dt^{R}\)\(\tilde{T}=(B^{\prime},T^{\prime})\) with monadic output. Then the behavior of \(T^{\prime}\) is limited in a particular way: Let \(s\) be an input tree and \(s^{\prime}\) be obtained from \(s\) via the relabeling \(B^{\prime}\). On input \(s^{\prime}\), the states of \(T^{\prime}\) only process the nodes
Figure 1: Dependency Graph of the \(att\) in Example 1 for \(f(f(e,e),f(e,e))\).
on a single _path_ of \(s^{\prime}\). A path is a sequence of nodes \(v_{1},\ldots,v_{n}\) such that \(v_{i}\) is the parent node of \(v_{i+1}\). This property holds obviously as the output is monadic and hence at most one state occurs on the right-hand side of any rule of \(T^{\prime}\).
Using this property, we prove our claim. In the following, we fix an \(att\)\(A=(S,I,\Sigma,\Delta,a_{0},R)\) with monadic output and show that if a \(dt^{R}\)\(T=(B^{\prime},T^{\prime})\) equivalent to \(A\) exists then we can equip \(A\) with look-ahead such that attributes of \(A\) become limited in the same way as states of \(T^{\prime}\): They only process nodes of a single path of the input tree. Our proof makes use of the result of [1]. This result states that for a _functional two-way transducer_ it is decidable whether or not an equivalent _one way transducer_ exists. Such transducers are essentially attributed transducers and top-down transducers with monadic input and output, respectively. Functional means that the realized translation is a function. We show that \(A\) equipped with look-ahead so that attributes of \(A\) are limited as described earlier can be converted into a two-way transducer \(T_{W}\). It can be shown that the procedure of [1] yields a one-way transducer \(T_{O}\) equivalent to \(T_{W}\) if and only if \(T\) exists. We then show that we can construct \(T\) from \(T_{O}\).
Subsequently, we define the look-ahead with which we equip \(A\). W.l.o.g. we assume that only right-hand sides of rules in \(R_{\#}\) are ground (i.e., in \(T_{\Delta}\)). Clearly any \(att\) can be converted so that it conforms to this requirement. Let \(\alpha(\pi)\rightarrow\tau\in R_{\sigma}\) such that \(\tau\) is ground. First, we remove this rule from \(R_{\sigma}\). Then we introduce a new inherited attribute \(\langle\tau\rangle\) and the rules \(\alpha(\pi)\rightarrow\langle\tau\rangle(\pi)\in R_{\sigma}\), and \(\langle\tau\rangle(\pi 1)\rightarrow\tau\in R_{\#}\). For all \(\sigma^{\prime}\in\Sigma_{k}\) and \(j\in[k]\) we define \(\langle\tau\rangle(\pi j)\rightarrow\langle\tau\rangle(\pi)\in R_{\sigma^{ \prime}}\).
Let \(s\in\text{dom}(A)\) and let \(v\in V(s)\). We call \(\psi\subseteq I\times S\) the _visiting pair set at \(v\) on input \(s\)_ if \((b,a)\in\psi\) if and only if
* on input \(s\), the attribute \(a\) processes \(v\) and
* \((b,a)\in\text{IS}_{A}(s/v)\)
Let \(\psi\) be the visiting pair set at \(v\) on input \(s\). In the following, we denote by \(\Omega_{\psi}\) the set consisting of all trees \(s^{\prime}\in T_{\Sigma}\) such that \(\psi\subseteq\text{IS}_{A}(s^{\prime})\). Thus the set \(\Omega_{\psi}\) contains all trees \(s^{\prime}\) such that the visiting pair set at \(v\) on input \(s[v\gets s^{\prime}]\) is also \(\psi\). If an arbitrary \(a\in S\) exists such that \((b,a)\in\psi\) for some \(b\in I\) and the range of \(a\) when translating trees in \(\Omega_{\psi}\) is unbounded, i.e., if the cardinality of \(\{\text{nf}(\Rightarrow_{A,s^{\prime}},a(\epsilon))\mid s^{\prime}\in\Omega_{ \psi}\}\) is unbounded, then we say that the _variation of \(\Omega_{\psi}\)_ is unbounded. If \(\psi\) is the visiting pair set at \(v\) on input \(s\) and the variation of \(\Omega_{\psi}\) is unbounded then we also say that the _variation at \(v\) on input \(s\)_ is unbounded. The variation plays a key role for proving our claim. In particular, the following property is derived from it: We say that \(A\) has the _single path property_ if for all trees \(s\in dom(A)\) a path \(\rho\) exists such that the variation at \(v\in V(s)\) is bounded whenever \(v\) does not occur in \(\rho\). The following lemma states that the single path property is a necessary condition for the \(att\)\(A\) to have an equivalent \(dt^{R}\).
Lemma 1: _Let \(s\in\text{dom}(A)\) and \(v_{1},v_{2}\in V(s)\) such that \(v_{1}\) and \(v_{2}\) have the same parent node. If a \(dt^{R}\)\(T\) equivalent to \(A\) exists then the variation at either \(v_{1}\) or \(v_{2}\) on input \(s\) is bounded._
Proof: For simplicity assume that \(T\) is a \(dt\). The proof for \(dt^{R}\) is obtained similarly. Assume that the variations at both \(v_{1}\) and \(v_{2}\) on input \(s\) is unbounded. As
\(T\) produces monadic output trees, on input \(s\), only one of the nodes \(v_{1}\) and \(v_{2}\) is processed by a state of \(T\). W.l.o.g. let \(\psi\) be the visiting pair set at \(v_{1}\) on input \(s\) and assume that only \(v_{2}\) is processed by a state of \(T\). Then for all \(s^{\prime}\in\Omega_{\psi}\), \(T\) produces the same output on input \(s[v_{1}\gets s^{\prime}]\). However, \(A\) does not produce the same output as the visiting pair set at \(v_{1}\) on input \(s[v_{1}\gets s^{\prime}]\) is \(\psi\) and the variation of \(\Omega_{\psi}\) is unbounded contradicting the equivalence of \(T\) and \(A\).
Example 2: Consider the \(att\)\(A_{2}=(S,I,\Sigma,\Delta,a,R)\) where \(\Sigma=\{f^{2},e^{0},d^{0}\}\) and \(\Delta=\{f^{1},g^{1},e^{0},d^{0}\}\). The set of attributes are given by \(S=\{a,a_{e},a_{d}\}\) and \(I=\{b_{e},b_{d},\langle e\rangle,\langle d\rangle\}\). In addition to \(\langle e\rangle(\pi 1)\rightarrow\langle e\rangle(\pi)\) and \(\langle d\rangle(\pi 1)\rightarrow\langle d\rangle(\pi)\), the set \(R_{f}\) contains the rules
\[\begin{array}{lclcl}a_{d}(\pi)\to f(a(\pi 1))&&b_{d}(\pi 1)\to a_{d}(\pi 1)&&b_{d}(\pi 2)\to b_{d}(\pi)&&a(\pi) \to a(\pi 2)\\ a_{e}(\pi)\to g(a(\pi 1))&&b_{e}(\pi 1)\to a_{e}(\pi 1)&&b_{e}(\pi 2) \to b_{e}(\pi)&&\end{array}\]
while \(R_{\#}\) contains in addition to \(\langle e\rangle(\pi 1)\to e\) and \(\langle d\rangle(\pi 1)\to d\) the rules
\[a(\pi)\to a(\pi 1)\qquad b_{e}(\pi 1)\to a_{e}(\pi 1)\qquad b_{d}(\pi 1)\to a_{d}(\pi 1).\]
Furthermore, we define \(R_{e}=\{a(\pi)\to b_{e}(\pi),a_{e}(\pi)\rightarrow\langle e\rangle(\pi)\}\) and \(R_{d}=\{a(\pi)\to b_{d}(\pi),a_{d}(\pi)\rightarrow\langle d\rangle(\pi)\}\). Let \(s\in T_{\Sigma}\) and denote by \(n\) the length of the leftmost path of \(s\). On input \(s\), \(A\) outputs the tree \(t\) of height \(n\) such that if \(v\in V(t)\) is not a leaf and the rightmost leaf of the subtree \(s/v\) is labeled by \(e\) then \(t[v]=g\), otherwise \(t[v]=f\). If \(v\) is a leaf then \(t[v]=s[v]\).
Clearly, the \(att\)\(A_{2}\) in Example 2 is equivalent to a \(dt^{R}\) and \(A_{2}\) has the single path property. In particular, it can be verified that the variations of all nodes that do not occur on the left-most path of the input tree are bounded. More precisely, if \(v\) does not occur on the leftmost path of the input tree then its visiting pair set is either \(\psi_{e}=\{(b_{e},a)\}\) or \(\psi_{d}=\{(b_{d},a)\}\). Thus, \(\Omega_{\psi_{e}}\) consists of all trees in \(T_{\Sigma}\) whose rightmost leaf is labeled by \(e\). For all such trees the attribute \(a\) yields the output \(b_{e}(\epsilon)\). The case \(\psi_{d}\) is analogous.
In contrast, consider the \(att\)\(A_{1}\) in Example 1. Recall that it translates an input tree \(s\) into a monadic tree \(t\) of height \(n+1\) if \(s\) has \(n\) leaves. This translation is not realizable by any \(dt^{R}\). This is reflected in the fact that the \(att\) of Example 1 does _not_ have the single path property. In particular, consider \(s=f(f(e,e),f(e,e))\). The visiting pair set at all nodes of \(s\) is \(\psi=\{(a,b)\}\) (cf. Figure 1) and \(\Omega_{\psi}\) is \(T_{\Sigma}\). It can be verified that the variation of \(\Omega_{\psi}\) is unbounded.
Recall that we aim to equip \(A\) with look-ahead for obtaining the \(att^{R}\)\(\hat{A}\) that has the following property: On input \(s\in\mathrm{dom}(A)\), attributes of \(A\) only process nodes of a single path of \(s\). Before we show how to test whether or not \(A\) has the single path property, we describe how to construct \(\hat{A}\). Denote by \(B\) the bottom-up relabeling of \(\hat{A}\) and let \(A^{\prime}\) be \(A\) modified to process output trees produced by \(B\). Let \(s\in\mathrm{dom}(A)\) and let \(s^{\prime}\) be obtained from \(s\) via the relabeling \(B\). The idea is that on input \(s^{\prime}\), if attributes of \(A^{\prime}\) process \(v\in V(s^{\prime})\) then the variation of \(v\) on input \(s\) with respect to \(A\) is unbounded. Note that obviously \(V(s)=V(s^{\prime})\). Clearly, if \(A\) has the single path property then attributes of \(A^{\prime}\) only process nodes of a single path of \(s^{\prime}\).
Now the question is how precisely do we construct \(\hat{A}\)? It can be shown that all \(\psi\subseteq I\times S\) that are visiting pair sets of \(A\) can be computed. Let \(\psi\) be a visiting pair set. If the variation of \(\Omega_{\psi}\) is bounded, then a minimal integer \(\kappa_{\psi}\) can be computed such that for all \(a\in S\) such that \((b,a)\in\psi\) for some \(b\in I\) and for all \(s^{\prime}\in\Omega_{\psi}\), height(nf(\(\Rightarrow_{A,s^{\prime}},a(\epsilon)))\leq\kappa_{\psi}\). Whether or not the variation of \(\Omega_{\psi}\) is bounded can be decided as finiteness of ranges of \(att\)'s is decidable [3]. Thus, \(\kappa=\max\{\kappa_{\psi}\ |\ \psi\) is a visiting pair set of \(A\) and the variation of \(\Omega_{\psi}\) is bounded\(\}\) is computable.
Denote by \(T_{\Delta}^{\kappa}[I(\{\epsilon\})]\) the set of all trees in \(T_{\Delta}[I(\{\epsilon\})]\) of height at most \(\kappa\). Informally, the idea is that the bottom-up relabeling of the \(att^{R}\ \hat{A}\) precomputes output subtrees of height at most \(\kappa\) that contain the inherited attributes of the root of the current input subtree. Hence, the \(att\) of \(\hat{A}\) does not need to compute those output subtrees itself; the translation is continued immediately with those output subtrees. Formally, the bottom-up relabeling \(B\) is constructed as follows. The states of \(B\) are sets \(\varrho\subseteq\{(a,\xi)\ |\ a\in S\ \text{and}\ \xi\in T_{\Delta}^{\kappa}[I(\{ \epsilon\})]\}\). The idea is that if \(s\in\text{dom}_{B}(\varrho)\) and \((a,\xi)\in\varrho\) then \(\xi=\text{nf}(\Rightarrow_{A,s},a(\epsilon))\). Given \(\sigma(s_{1},\ldots,s_{k})\), \(B\) relabels \(\sigma\) by \(\sigma_{\varrho_{1},\ldots,\varrho_{k}}\) if for \(i\in[k]\), \(s_{i}\in\text{dom}_{B}(\varrho_{i})\). Note that knowing \(\varrho_{1},\ldots,\varrho_{k}\) and the rules in the set \(R_{\sigma}\) of \(A\), we can easily compute \(\varrho\) such that \(\sigma(s_{1},\ldots,s_{k})\in\text{dom}_{B}(\varrho)\) and hence the rules of \(B\). In particular, \(\varrho\) contains all pairs \((a,\xi)\) such that \(\xi=\text{nf}(\Rightarrow_{A,\sigma(s_{1},\ldots,s_{k})},a(\epsilon))\in T_{ \Delta}^{\kappa}[I(\{\epsilon\})]\). Therefore, \(B\) contains the rule \(\sigma(\varrho_{1}(x_{1}),\ldots,\varrho_{k}(x_{k}))\to\varrho(\sigma_{ \varrho_{1},\ldots,\varrho_{k}}(x_{1},\ldots,x_{k}))\).
Example 3: Consider the \(att\)\(A_{2}\) in Example 2. Recall that all nodes that do not occur on the leftmost path of the input tree \(s\) of \(A_{2}\) have bounded variation. Let \(v\) be such a node. Then the visiting pair set at \(v\) is either \(\psi_{e}=\{(a,b_{e})\}\) or \(\psi_{d}=\{(a,b_{d})\}\). Assume the former. Then nf(\(\Rightarrow_{A_{2},s/v},a(\epsilon))=b_{e}(\epsilon)\). If we know beforehand that \(a\) produces \(b_{e}(\epsilon)\) when translating \(s/v\), then there is no need to process \(s/v\) with \(a\) anymore. This can be achieved via a bottom-up relabeling \(B_{2}\) that precomputes all output trees of height at most \(\kappa=\kappa_{\psi_{e}}=\kappa_{\psi_{d}}=1\). In particular the idea is that if for instance \(v\in V(s)\) is relabeled by \(f_{\{(a,b_{d}(\epsilon))\},\{a,b_{e}(\epsilon)\}}\) then this means when translating \(s/v.1\) and \(s/v.2\), \(a\) produces \(b_{d}(\epsilon)\) and \(b_{e}(\epsilon)\), respectively. The full definition of \(B_{2}\) is as follows: The states of \(B_{2}\) are \(\varrho_{1}=\{(a_{e},\langle e\rangle(\epsilon)),(a,b_{e}(\epsilon))\}\), \(\varrho_{2}=\{(a_{d},\langle d\rangle(\epsilon)),(a,b_{d}(\epsilon))\}\), \(\varrho_{3}=\{(a,b_{d}(\epsilon))\}\), and \(\varrho_{4}=\{(a,b_{e}(\epsilon))\}\). All states are final states. In addition to \(e\to\varrho_{1}(e)\) and \(d\to\varrho_{2}(d)\), \(B_{2}\) also contains the rules
\[\begin{array}{ll}f(\varrho(x_{1}),\varrho_{1}(x_{2}))\to\varrho_{4}(f_{ \varrho,\varrho_{1}}(x_{1},x_{2}))&\quad f(\varrho(x_{1}),\varrho_{2}(x_{2})) \to\varrho_{3}(f_{\varrho,\varrho_{2}}(x_{1},x_{2}))\\ f(\varrho(x_{1}),\varrho_{3}(x_{2}))\to\varrho_{3}(f_{\varrho,\varrho_{3}}(x_ {1},x_{2}))&\quad f(\varrho(x_{1}),\varrho_{4}(x_{2}))\to\varrho_{4}(f_{ \varrho,\varrho_{4}}(x_{1},x_{2})),\end{array}\]
where \(\varrho\in\{\varrho_{1},\ldots,\varrho_{4}\}\). It is easy to see that using \(B_{2}\), attributes of \(A_{2}^{\prime}\), that is, \(A_{2}\) modified to make use of \(B_{2}\), only process nodes of the leftmost path of the input tree \(s\).
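A compact sketch of the relabeling \(B_{2}\) is given below (labels and state names as plain strings; only the state-relevant part of the rules is encoded, so it is an illustration rather than a faithful transducer implementation).

```python
# Bottom-up relabeling B_2 from Example 3, on trees ("f", l, r), ("e",), ("d",).
def relabel(t):
    """Returns (state, relabeled tree); the states are named rho1..rho4."""
    if t[0] == "e":
        return "rho1", ("e",)
    if t[0] == "d":
        return "rho2", ("d",)
    q1, t1 = relabel(t[1])
    q2, t2 = relabel(t[2])
    # f(rho(x1), rho_i(x2)) goes to state rho4 if the right subtree is in rho1/rho4
    # and to rho3 if it is in rho2/rho3, i.e. the state tracks the rightmost leaf.
    q = "rho4" if q2 in ("rho1", "rho4") else "rho3"
    return q, (f"f_{q1},{q2}", t1, t2)

s = ("f", ("f", ("e",), ("d",)), ("e",))
print(relabel(s))
```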
With \(B\), we obtain \(\hat{A}=(B,A^{\prime})\) equivalent to \(A\). Note that \(A\) is modified into \(A^{\prime}\) such that its input alphabet is the output alphabet of \(B\) and its rules make use of \(B\). In particular, the rules of \(A^{\prime}\) for a symbol \(\sigma_{\varrho_{1},\ldots,\varrho_{k}}\) are defined as follows. First, we introduce for each state \(\varrho\) of \(B\) an auxiliary symbol \(\langle\varrho\rangle\) of rank \(0\) to \(A\). We define that \(a(\pi)\to t\in R_{\langle\varrho\rangle}\) if \((a,t[\pi\leftarrow\epsilon])\in\varrho\). Denote by \([\pi\gets v]\)
the substitution that substitutes all occurrences of \(\pi\) by the node \(v\), e.g. for \(t_{1}=f(b(\pi))\) and \(t_{2}=f(a(\pi 2))\) where \(f\) is a symbol of rank \(1\), \(a\in S\) and \(b\in I\), we have \(t_{1}[\pi\gets v]=f(b(v))\) and \(t_{2}[\pi\gets v]=f(a(v2))\). Let \(t=\mathrm{nf}(\Rightarrow_{A,\sigma(\langle\varrho_{1}\rangle,\ldots,\langle \varrho_{k}\rangle)}\allowbreak,a(\epsilon))\in T_{\Delta}[I(\{\epsilon\}) \cup S([k])]\). Then we define the rule \(a(\pi)\to t^{\prime}\in R_{\sigma_{\varrho_{1}\ldots,\varrho_{k}}}\) for \(A^{\prime}\) where \(t^{\prime}\) is the tree such that \(t^{\prime}[\pi\leftarrow\epsilon]=t\). It should be clear that since all output subtrees of height at most \(\kappa\) are precomputed, attributes of \(A^{\prime}\) only process nodes whose variation with respect to \(A\) are unbounded (cf. Example 3).
In parallel with the above construction of \(\hat{A}\), we can decide whether or not a given \(att\)\(A\) has the single path property as follows. Let \(s\in\mathrm{dom}(A)\) and let \(s^{\prime}\) be the tree obtained from \(s\) via the relabeling \(B\). If nodes \(v_{1},v_{2}\in V(s^{\prime})\) with the same parent node exist such that on input \(s^{\prime}\), attributes of \(A^{\prime}\) process both \(v_{1}\) and \(v_{2}\), then \(A\) does not have the single path property. Thus, to test whether \(A\) has the single path property, we construct from \(\hat{A}=(B,A^{\prime})\) an \(att^{R}\)\(\tilde{A}=(\tilde{B},\tilde{A}^{\prime})\) as follows. Input trees of \(\tilde{A}\) are trees \(s\in\mathrm{dom}(\hat{A})\) in which two nodes \(v_{1},v_{2}\) with the same parent node are annotated by flags \(f_{1}\) and \(f_{2}\), respectively. The relabeling \(\tilde{B}\) essentially behaves like \(B\), ignoring the flags. Likewise, the \(att\)\(\tilde{A}^{\prime}\) behaves like \(A^{\prime}\) with the restriction that output symbols are only produced if an annotated symbol is processed by a synthesized attribute or if a rule in \(R_{\#}\) is applied. For \(i=1,2\) we introduce a special symbol \(g_{i}\) which is only output if the node with the flag \(f_{i}\) is processed. Hence, we simply need to check whether there is a tree with occurrences of both \(g_{1}\) and \(g_{2}\) in the range of \(\tilde{A}\). Obviously, the range of \(\tilde{A}\) is finite. Thus it can be computed.
Lemma 2: _It is decidable whether or not \(A\) has the single path property._
## 4 From Tree to String Transducers and Back
Assume that \(A\) has the single path property. We now convert \(\hat{A}=(B,A^{\prime})\) into a two-way transducer \(T_{W}\). Recall that two-way transducers are essentially attributed tree transducers with monadic input and output. Informally, the idea is as follows: Consider a tree \(s\in\mathrm{dom}(A)\) and let \(s^{\prime}\) be obtained from \(s\) via \(B\). As on input \(s^{\prime}\), attributes of \(A^{\prime}\) only process nodes occurring on a single path \(\rho\) of \(s\), the basic idea is to 'cut off' all nodes from \(s^{\prime}\) not occurring in \(\rho\). This way, we effectively make input trees of \(A^{\prime}\) monadic. More formally, \(T_{W}\) is constructed by converting the input alphabet of \(A^{\prime}\) to symbols of rank \(1\). Denote by \(\Sigma^{\prime}\) the set of all output symbols of \(B\). Accordingly, let \(\Sigma^{\prime}_{k}\) with \(k\geq 0\) be the set of all symbols in \(\Sigma^{\prime}\) of rank \(k\). Let \(\sigma^{\prime}\in\Sigma^{\prime}_{k}\) with \(k>0\). Then the input alphabet \(\Sigma^{W}\) of \(T_{W}\) contains the symbols \(\langle\sigma^{\prime},1\rangle,\ldots,\langle\sigma^{\prime},k\rangle\) of rank \(1\). Furthermore, \(\Sigma^{W}\) contains all
symbols in \(\Sigma^{\prime}_{0}\). Informally, a symbol \(\langle\sigma^{\prime},i\rangle\) indicates that the next node is to be interpreted as the \(i\)-th child. Thus trees over \(\Sigma^{W}\) can be considered prefixes of trees over \(\Sigma^{\prime}\), e.g., let \(f\in\Sigma^{\prime}_{2}\), \(g\in\Sigma^{\prime}_{1}\) and \(e\in\Sigma^{\prime}_{0}\) then \(\langle f,2\rangle\langle f,1\rangle\langle f,1\rangle e\) encodes the prefix \(f(x_{1},f(f(e,x_{1}),x_{1}))\) while \(\langle f,1\rangle\langle g,1\rangle e\) encodes \(f(g(e),x_{1})\). Note that for monadic trees we omit parentheses for better readability. We call \(t_{1}\in T_{\Sigma^{\prime}}[\{x_{1}\}]\) a prefix of \(t_{2}\in T_{\Sigma^{\prime}}\) if \(t_{2}\) can be obtained from \(t_{1}\) by substituting nodes labeled by \(x_{1}\) by ground trees. The idea is that as attributes of \(A^{\prime}\) only process nodes occurring on a single path of the input tree, such prefixes are sufficient to simulate \(A^{\prime}\).
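The encoding of a root-to-leaf path as a word over \(\Sigma^{W}\) can be sketched as follows (trees as nested tuples with string labels, child indices 1-based); the helper name is ours and not part of the construction.

```python
# Encode the nodes on a root-to-leaf path as the monadic word over Sigma^W.
# A tree is ("label", child_1, ..., child_k); `path` is a list of 1-based child indices.
def encode_path(tree, path):
    if not path:                      # reached a leaf: keep its rank-0 symbol
        return [tree[0]]
    i = path[0]
    return [(tree[0], i)] + encode_path(tree[i], path[1:])

s = ("f", ("g", ("e",)), ("f", ("f", ("e",), ("e",)), ("e",)))
print(encode_path(s, [2, 1, 1]))      # [('f', 2), ('f', 1), ('f', 1), 'e']
print(encode_path(s, [1, 1]))         # [('f', 1), ('g', 1), 'e']
```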
The attributes of \(T_{W}\) are the same ones as those of \(A^{\prime}\). The rules of \(T_{W}\) are defined as follows: Firstly the rules for \(\#\) are taken over from \(A^{\prime}\). As before, assume that only rules for \(\#\) have ground right-hand sides. Let \(A^{\prime}\) contains the rule \(a(\pi)\to t\in R_{\sigma^{\prime}}\) where \(\sigma^{\prime}\in\Sigma^{\prime}_{k}\) and \(a,\alpha\in S\) and \(\alpha(\pi i)\) occurs in \(t\). Then \(T_{W}\) contains the rule \(a(\pi)\to t[\pi i\leftarrow\pi 1]\in R_{\langle\sigma^{\prime},i\rangle}\), where \([\pi i\leftarrow\pi 1]\) denotes the substitution that substitutes occurrences of \(\pi i\) by \(\pi 1\). Analogously, if \(A^{\prime}\) contains the rule \(b(\pi i)\to t^{\prime}\in R_{\sigma^{\prime}}\) where \(b\in I\) and for \(\alpha\in S\), \(\alpha(\pi i)\) occurs in \(t^{\prime}\), then \(T_{W}\) contains the rule \(b(\pi 1)\to t^{\prime}[\pi i\leftarrow\pi 1]\in R_{\langle\sigma^{\prime},i\rangle}\). We remark that as \(A\) has the single path property, \(A^{\prime}\) will never apply a rule \(b(\pi i)\to t^{\prime}\) where \(\alpha(\pi j)\) with \(j\neq i\) occurs in \(t^{\prime}\). Thus, we do not need to consider such rules.
For the correctness of subsequent arguments, we require a technical detail: Let \(\tilde{s}\) be an input tree of \(T_{W}\). Then we require that an output tree \(s\) of \(B\) exists such that \(\tilde{s}\) encodes a prefix of \(s\). If such a tree \(s\) exists we say \(\tilde{s}\)_corresponds to_\(s\). To check whether for a given input tree \(\tilde{s}\) of \(T_{W}\) an output tree \(s\) of \(B\) exists such that \(\tilde{s}\) corresponds to \(s\), we proceed as follows. As \(B\) is a relabeling, its range is effectively recognizable. Denote by \(\bar{B}\) the bottom-up automaton recognizing it. Given \(\bar{B}\), we can construct a bottom-up automaton \(\bar{B}^{\prime}\) that accepts exactly those trees \(\tilde{s}\) for which an output tree \(s\) of \(B\) exists such that \(\tilde{s}\) corresponds to \(s\). W.l.o.g. assume that for all states \(l\) of \(\bar{B}\), \(\operatorname{dom}_{B}(l)\neq\emptyset\). If for \(\sigma^{\prime}\in\Sigma^{\prime}_{k}\), \(\sigma^{\prime}(l_{1}(x_{1}),\ldots,l_{k}(x_{k}))\to l(\sigma^{\prime}(x_{1}, \ldots,x_{k}))\) is a rule of \(\bar{B}\) then \(\langle\sigma^{\prime},i\rangle(l_{i}(x_{1}))\to l(\langle\sigma^{\prime},i \rangle(x_{1}))\) is a rule of \(\bar{B}^{\prime}\). We define that \(\bar{B}^{\prime}\) has the same accepting states as \(\bar{B}\). Using \(\bar{B}^{\prime}\), we check whether for a given input tree \(\tilde{s}\) of \(T_{W}\) an output tree \(s\) of \(B\) exists such that \(\tilde{s}\) corresponds to \(s\) as follows. Before producing any output symbols, on input \(\tilde{s}\), \(T_{W}\) goes to the leaf of \(\tilde{s}\) and simulates \(\bar{B}^{\prime}\) in a bottom-up fashion while going back to the root. If \(\tilde{s}\) is accepted by \(\bar{B}^{\prime}\) then \(T_{W}\) begins to simulate \(A^{\prime}\), otherwise no output is produced. Thus the domain of \(T_{W}\) only consists of trees \(\tilde{s}\) for which an output tree \(s\) of \(B\) exists such that \(\tilde{s}\) corresponds to \(s\). We remark that \(\bar{B}^{\prime}\) may be nondeterministic which in turn means that \(T_{W}\) may be nondeterministic as well, however the translation it realizes is a function. In fact the following holds.
Lemma 3: _Consider the \(att^{R}\)\(\hat{A}=(B,A^{\prime})\) and the two-way transducer \(T_{W}\) constructed from \(\hat{A}\). Let \(\tilde{s}\) be a tree over \(\Sigma^{W}\). If on input \(\tilde{s}\), \(T_{W}\) outputs \(t\) then for all \(s\in\text{range}(B)\) such that \(\tilde{s}\) corresponds to \(s\), \(A^{\prime}\) also produces \(t\) on input \(s\)._
In the following, consider the two-way transducer \(T_{W}\). Assume that the procedure of [1] yields a one-way transducer \(T_{O}\) that is equivalent to \(T_{W}\). Given \(T_{O}\), we
construct a top-down transducer \(T^{\prime}\) that produces output trees on the range of \(B\). In particular, \(T^{\prime}\) has the same states as \(T_{O}\). Furthermore, a rule \(q(\langle\sigma^{\prime},i\rangle(x_{1}))\to t\) of \(T_{O}\) where \(\sigma^{\prime}\in\Sigma^{\prime}_{k}\) and \(i\in[k]\) induces the rule \(q(\sigma^{\prime}(x_{1},\ldots,x_{k}))\to\hat{t}\) for \(T^{\prime}\) where \(\hat{t}\) is obtained from \(t\) by substituting occurrences of \(x_{1}\) by \(x_{i}\), e.g., if \(t=f(g(q^{\prime}(x_{1})))\) then \(\hat{t}=f(g(q^{\prime}(x_{i})))\). Recall that the domain of \(T_{W}\) only consists of trees \(\tilde{s}\) for which an output tree \(s\) of \(B\) exists such that \(\tilde{s}\) corresponds to \(s\). As \(T_{W}\) and \(T_{O}\) are equivalent, the domain of \(T_{O}\) also consists of such trees. Hence, by construction, the following holds.
Lemma 4: _Consider the top-down transducer \(T^{\prime}\) constructed from the one-way transducer \(T_{O}\). Let \(\tilde{s}\) be a tree over \(\Sigma^{W}\). If on input \(\tilde{s}\), \(T_{O}\) outputs \(t\) then for all \(s\in\mbox{range}(B)\) such that \(\tilde{s}\) corresponds to \(s\), \(T^{\prime}\) also produces \(t\) on input \(s\)._
With Lemmas 3 and 4, it can be shown that the following holds.
Lemma 5: _The top-down transducer \(T^{\prime}\) and the \(att\)\(A^{\prime}\) are equivalent on the range of \(B\)._
Therefore it follows that \(\hat{A}=(B,A^{\prime})\) and \(N=(B,T^{\prime})\) are equivalent. Consequently, \(A\) and \(N\) are equivalent. We remark that there is still a technical detail left. Recall that our aim is to construct a \(dt^{R}\)\(T\) equivalent to \(A\). However, the procedure of [1] may yield a nondeterministic \(T_{O}\). Thus \(T^{\prime}\) and hence \(N\) may be nondeterministic (but functional). However, as shown in [5], we can easily compute a \(dt^{R}\) equivalent to \(N\).
The arguments above are based on the assumption that the procedure of [1] yields a one-way transducer equivalent to \(T_{W}\). Now the question is, does such a one-way transducer always exist if a \(dt^{R}\) equivalent to \(A\) exists? The answer to this question is indeed affirmative. In particular, the following holds.
Lemma 6: _Consider the \(att^{R}\)\(\hat{A}=(B,A^{\prime})\) equivalent to \(A\). If a \(dt^{R}\)\(T\) equivalent to \(A\) exists, then a (nondeterministic) top-down transducer with look-ahead \(N=(B,N^{\prime})\) exists such that \(\hat{A}\) and \(N\) are equivalent._
Proof: (sketch) Recall that given \(\sigma(s_{1},\ldots,s_{k})\), \(B\) relabels \(\sigma\) by \(\sigma_{\varrho_{1},\ldots,\varrho_{k}}\) if for \(i\in[k]\), \(s_{i}\in\mbox{dom}_{B}(\varrho_{i})\). In the following, denote by \(s_{\varrho}\) a fixed tree in \(\mbox{dom}_{B}(\varrho)\).
Let \(T=(B^{\prime},T^{\prime})\). We sketch the main idea of the proof, i.e., how to simulate \(B^{\prime}\) using \(B\). First consider the following property which we call the _substitute-property_. Let \(s\in T_{\Sigma}\) and \(\hat{s}\) be obtained from \(s\) via the relabeling \(B\). Let \(v_{1}\) and \(v_{2}\) be nodes of \(s\) with the same parent. As \(T\) exists, the single path property holds for \(A\). Thus on input \(\hat{s}\), \(v_{1}\) or \(v_{2}\) is not processed by attributes of \(A^{\prime}\). Assume that \(v_{1}\) is not processed and that \(s/v_{1}\in\mbox{dom}_{B}(\varrho)\). Then \(\tau_{\hat{A}}(s)=\tau_{\hat{A}}(s[v_{1}\gets s_{\varrho}])\) follows. Informally, this means that \(s/v_{1}\) can be substituted by \(s_{\varrho}\) without affecting the output of the translation. By definition, \(\hat{A}\) and \(A\) are equivalent while \(T\) and \(A\) are equivalent by our premise. Thus, \(\tau_{T}(s)=\tau_{T}(s[v_{1}\gets s_{\varrho}])\).
Now we show how \(B^{\prime}\) is simulated using \(B\). Let \(\hat{q}\) be a state of \(N^{\prime}\). Each such state \(\hat{q}\) is associated with a state \(q\) of \(T^{\prime}\) and a state \(l\) of \(B^{\prime}\). Consider the tree \(\hat{s}\) obtained from \(s\) via \(B\). Assume that the node \(v\) labeled by \(\sigma_{\varrho_{1},\ldots,\varrho_{k}}\) is processed by \(\hat{q}\) in the translation of \(N^{\prime}\) on input \(\hat{s}\), i.e., \(v\) has \(k\) child nodes.
It can be shown that \(\hat{q}\) can compute which of \(v\)'s child nodes is processed by attributes in the translation of \(A^{\prime}\) on input \(\hat{s}\). W.l.o.g. let \(v.1\) be that node and let \(s/v.i\in\text{dom}_{B}(\varrho_{i})\) for \(i\in[k]\). Due to the substitute-property, \(N\) can basically assume that \(s/v.i=s_{\varrho_{i}}\) for \(i\neq 1\). For \(i\neq 1\), let \(s_{\varrho_{i}}\in\text{dom}_{B^{\prime}}(l_{i})\). Now \(N^{\prime}\) guesses a state \(l_{1}\) for \(s/v.1\) such that \(\sigma(l_{1}(x_{1}),\ldots,l_{k}(x_{k}))\to l(\hat{\sigma}(x_{1},\ldots,x_{k}))\) is a rule of \(B^{\prime}\). The state \(\hat{q}\) then 'behaves' as \(q\) would when processing a node labeled by \(\hat{\sigma}\). It can be guaranteed that \(N^{\prime}\) verifies its guess.
The \(dt^{R}\)\(N=(B,N^{\prime})\) can be constructed such that on input \(s\in\text{range}(B)\) an attribute of \(A^{\prime}\) processes the node \(v\) if and only if a state of \(N^{\prime}\) processes \(v\). The existence of such a transducer with look-ahead \(N\) implies the existence of a one-way transducer \(T_{O}\) equivalent to \(T_{W}\). In fact, \(T_{O}\) is obtainable from \(N\) similarly to how \(T_{W}\) is obtainable from \(A_{p}\). Therefore, the procedure of [1] yields a top-down transducer \(T_{O}\) equivalent to \(T_{W}\) if and only if \(T\) exists.
## 5 Final Results
The considerations in Sections 3 and 4 yield the following theorem.
Theorem 5.1: _For an \(att\) with monadic output, it is decidable whether or not an equivalent \(dt^{R}\) exists and if so then it can be constructed._
We show that the result of Theorem 5.1 is also obtainable for nondeterministic functional attributed tree transducers with look-around. Look-around is a formalism similar to look-ahead. An attributed tree transducer with look-around (or \(att^{U}\)) consists of a _top-down relabeling with look-ahead_\(R\) and an \(att\)\(A\) where output trees are computed as \(\tau_{A}(\tau_{R}(s))\) for input trees \(s\). Before we prove our claim, we prove the following result. Denote by \(nATT^{U}\) and \(dATT^{U}\) the classes of translations realizable by nondeterministic and deterministic \(att^{U}\)'s, respectively. Denote by _func_ the class of all functions.
Lemma 7: \(nATT^{U}\cap\text{func}=dATT^{U}\)_._
Proof: (sketch) Informally, the basic idea is the same as for functional top-down transducers in [5]. For simplicity, let \(A\) be a functional \(att\) without look-around. Consider a rule set \(R_{\sigma}\) of \(A\) where \(\sigma\in\Sigma\). Let all rules in \(R_{\sigma}\) with the same left-hand side be ordered. Whenever several such rules are applicable, i.e., utilizing this particular rule leads to the production of a ground tree, the first applicable rule in the given order will be executed. Using look-around, we can test whether or not a rule is applicable.
We remark that look-around is required; look-ahead alone, for instance, is not sufficient, as \(ATT\cap\text{func}\not\subseteq dATT^{R}\) can be shown.
Due to Lemma 7, it is sufficient to show that for a deterministic \(att^{U}\) it is decidable whether or not an equivalent \(dt^{R}\) exists. Roughly speaking, this result is obtained as follows. Let a deterministic \(att^{U}\)\(\hat{A}=(R,A^{\prime})\), where \(R\) is a top-down relabeling with look-ahead and \(A^{\prime}\) is an \(att\), be given. To show that a
\(dt^{R}\) equivalent to \(\hat{A}\) exists, it is sufficient to show that a \(dt^{R}\)\(T\) exists such that \(A^{\prime}\) and \(T\) are equivalent on the range of \(R\). This is because \(R\) is also a \(dt^{R}\) and \(dt^{R}\)'s are closed under composition [4]. The \(dt^{R}\)\(T\) can be obtained by slightly modifying the procedure in Section 3.
Theorem 5.2: _For a functional \(att^{U}\) with monadic output, it is decidable whether or not an equivalent \(dt^{R}\) exists and if so then it can be constructed._
Note that \(dt^{R}\)'s with monadic output are by definition _linear_. For a linear \(dt^{R}\) it is decidable whether or not an equivalent linear \(dt\) exists [12]. If such a \(dt\) exists, it can be constructed. Hence, we obtain the following corollary.
Corollary 1: _For a functional \(att^{U}\) with monadic output, it is decidable whether or not an equivalent \(dt\) exists and if so then it can be constructed._
## 6 Conclusion
We have shown how to decide, for a given attributed transducer with look-around restricted to monadic output, whether or not an equivalent deterministic top-down tree transducer (with or without look-ahead) exists. Clearly we would like to extend this result to non-monadic output trees. The latter seems quite challenging, as it is not clear whether or not the result of [1] can be applied in this case. Other questions that remain are the exact complexities of our constructions.
|
2304.05641 | Pseudo-Kleene algebras determined by rough sets | We study the pseudo-Kleene algebras of the Dedekind-MacNeille completion of
the ordered set of rough sets determined by a reflexive relation. We
characterize the cases when PBZ and PBZ*-lattices can be defined on these
pseudo-Kleene algebras. | Jouni Järvinen, Sándor Radeleczki | 2023-04-12T06:45:34Z | http://arxiv.org/abs/2304.05641v2 | # Pseudo-Kleene algebras determined by rough sets
###### Abstract
We study the pseudo-Kleene algebras of the Dedekind-MacNeille completion of the ordered set of rough sets determined by a reflexive relation. We characterize the cases when PBZ and PBZ*-lattices can be defined on these pseudo-Kleene algebras.
keywords: Rough set, negation, pseudo-Kleene algebra, Brouwer-Zadeh lattice, paraorthomodular lattice, Stone algebra +
Footnote †: journal: International Journal of Approximate Reasoning
## 1 Introduction
In rough set theory, introduced by Z. Pawlak [1], knowledge about elements of a set \(U\) is given in terms of an equivalence \(E\) on \(U\), interpreted so that \((x,y)\in E\) if the elements \(x\) and \(y\) cannot be distinguished in terms of the information represented by \(E\). Each set \(X\subseteq U\) is approximated by two sets: the lower approximation \(X^{\blacktriangledown}\) consists of elements which certainly belong to \(X\) in view of the knowledge \(E\), and the upper approximation \(X^{\blacktriangle}\) consists of objects which possibly are in \(X\).
The pair \((X^{\blacktriangledown},X^{\blacktriangle})\) is called a rough set. We denote by RS the set of all rough sets. It is proved in [2] that RS is a Stone algebra. This result was improved in [3] by showing that RS forms a regular double Stone algebra. The three-valued Lukasiewicz algebras defined by RS were considered for the first time in [4]. P. Pagliani showed in [5] how a semisimple Nelson algebra can be defined on RS.
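To make the two approximation operators concrete, the following minimal Python sketch computes the pair \((X^{\blacktriangledown},X^{\blacktriangle})\) for a small universe and equivalence relation; the universe, the partition into equivalence classes, and the set \(X\) below are purely illustrative and serve only to exemplify the definitions above.

```python
def equivalence_class(x, E):
    """All elements related to x by the equivalence relation E (given as a set of pairs)."""
    return {b for (a, b) in E if a == x}

def lower_approximation(X, U, E):
    # elements whose whole equivalence class is contained in X (certainly in X)
    return {x for x in U if equivalence_class(x, E) <= X}

def upper_approximation(X, U, E):
    # elements whose equivalence class meets X (possibly in X)
    return {x for x in U if equivalence_class(x, E) & X}

U = {1, 2, 3, 4, 5, 6}
blocks = [{1, 2}, {3, 4, 5}, {6}]                      # equivalence classes of E
E = {(a, b) for block in blocks for a in block for b in block}

X = {1, 2, 3}
rough_set = (lower_approximation(X, U, E), upper_approximation(X, U, E))
print(rough_set)  # ({1, 2}, {1, 2, 3, 4, 5})
```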
In the literature, one can find studies in which the information about the objects is given in terms of arbitrary binary relations. We have proved in [6] that the rough sets defined by quasiorders (reflexive and transitive binary relations) are exactly the Nelson algebras defined on algebraic lattices. In [7] we characterized the rough sets defined by a tolerance (reflexive and symmetric) relation induced by an irredundant covering of \(U\).
However, it is known that for an arbitrary tolerance, the ordered set RS is not necessarily a lattice; see [8], for instance. In [9], D. Umadevi presented the Dedekind-MacNeille completion of RS for arbitrary binary relations. In this work, we denote this completion by DM(RS). She described the joins and meets in DM(RS). Umadevi also characterized the complemented elements of DM(RS).
In this work, pseudo-Kleene algebras play an essential role. They are bounded lattices equipped with a Kleene complement \(\sim\). Note that Kleene algebras are distributive pseudo-Kleene algebras. Umadevi noted in [9] that DM(RS) forms a pseudo-Kleene algebra. In the literature [10; 11], several studies can be found in which approximation operators are defined by using certain combinations of two or more equivalence relations, meaning that concepts are described by using many granulation spaces. In [12], the authors proved that the Dedekind-MacNeille completion of the so-called optimistic multigranular rough sets forms a paraorthomodular pseudo-Kleene algebra.
2308.14499 | Efficient and Accurate Tree Detection from 3D Point Clouds through Paid
Crowdsourcing | Accurate tree detection is of growing importance in applications such as
urban planning, forest inventory, and environmental monitoring. In this
article, we present an approach to creating tree maps by annotating them in 3D
point clouds. Point cloud representations allow the precise identification of
tree positions, particularly stem locations, and their heights. Our method
leverages human computational power through paid crowdsourcing, employing a web
tool designed to enable even non-experts to effectively tackle the task. The
primary focus of this paper is to discuss the web tool's development and
strategies to ensure high-quality tree annotations despite encountering noise
in the crowdsourced data. Following our methodology, we achieve quality
measures surpassing 90% for various challenging test sets of diverse
complexities. We emphasize that our tree map creation process, including
initial point cloud collection, can be completed within 1-2 days. | Michael Kölle, Volker Walter, Ivan Shiller, Uwe Soergel | 2023-08-28T11:19:44Z | http://arxiv.org/abs/2308.14499v1 | ## Efficient and Accurate Tree Detection from 3D Point Clouds through Paid Crowdsourcing
## Abstract
Accurate tree detection is of growing importance in applications such as urban planning, forest inventory, and environmental monitoring. In this article, we present an approach to creating tree maps by annotating them in 3D point clouds. Point cloud representations allow the precise identification of tree positions, particularly stem locations, and their heights. Our method leverages human computational power through paid crowdsourcing, employing a web tool designed to enable even non-experts to effectively tackle the task. The primary focus of this paper is to discuss the web tool's development and strategies to ensure high-quality tree annotations despite encountering noise in the crowdsourced data. Following our methodology, we achieve quality measures surpassing 90% for various challenging test sets of diverse complexities. We emphasize that our tree map creation process, including initial point cloud collection, can be completed within 1-2 days.
## Acknowledgment
This work profited from funding through the German Research Foundation as part of Germany's Excellence Strategy - EXC 2120/1 - 390831618.
## 1 Introduction
While manual tree detection is time-consuming and expensive, automated methods are limited by the complexity of the environment, the variability of tree shapes, and the quality of the data. To overcome these challenges, we propose a method that relies on human computation power in terms of paid crowdsourcing. To overcome the inherent data quality inhomogeneity of this technique, we exploit the Wisdom of the Crowds principle, which translates to capturing each tree multiple times by multiple crowdworkers, allowing automatic outlier detection and a significant quality improvement of the results compared to individual acquisitions.
The annotation of trees from 3D point clouds has already been the subject of our previous work (Walter et al., 2020). So far, however, we have only applied our method to a few exemplary, manually selected datasets to demonstrate the feasibility in principle. Hence, we see this paper as a natural consequence, with special emphasis on applying our (adapted) procedures to various data sets depicting scenes of different complexity. Now that we are faced with non-handpicked complex scenes, we have improved our web tool in several ways. This web tool is one of the most crucial components when outsourcing tasks to non-expert annotators as it needs to be as easy and intuitive to handle as possible while still allowing sufficient interaction with 3D data. In this regard, enhancements mainly focus on adapting to extended point clouds, often with a high density of closely standing (intertwined) trees. Mainly, this can be solved by generating profile-shaped tiles from point cloud data, which are then sent to the crowd for labelling, allowing us to achieve satisfactory results at a quality level comparable to that of dedicated experts.
The rest of this paper is organized as follows. After reviewing related work with respect to crowdsourced data acquisition and possible implementations of the Wisdom of the Crowds principle (Section 2), Section 3 will outline our methodology with special emphasis on data pre-processing, implementation of the graphical user interface and data integration routines. Applying those techniques to various data sets presented in Section 4, we are able to generate competitive results discussed in Section 5. Finally, Section 6 will summarize our findings and discuss potential future research directions.
## 2 Related Work
Crowdsourcing is an artificial term created by merging the words "crowd" and "outsourcing", introduced by Howe (2006). While outsourcing means transferring specific tasks to specified individuals, in crowdsourcing typically tasks are delegated to anonymous workers, commonly referred to as crowdworkers, through online platforms. This approach provides employers with access to a vast pool of workers who might otherwise remain inaccessible.
Many prominent crowdsourcing projects rely on the participation of unpaid volunteers, such as Wikipedia (www.wikipedia.org) and Zooniverse (www.zooniverse.org). The voluntary collection of geographic data is known as Volunteered Geographic Information (VGI) (Goodchild, 2007). OpenStreetMap (OSM - www.openstreetmap.org) is a well-known VGI project that aims to create a map of the world to which anyone can contribute (Haklay and Weber, 2008). Successful VGI projects require an active and motivated community of participants. Budhathoki and Haythornthwaite (2012) provide an extensive discussion of motivational factors for participation in VGI projects.
When such intrinsic motivations are not present, we need to rely on extrinsic incentives to encourage participation. A common method for this is to offer financial compensation (Haralabopoulos et al., 2019). In paid crowdsourcing, tasks are most conveniently handled through online marketplaces such as microWorkers (www.microworkers.com) (Hirth et al., 2011) or Amazon Mechanical Turk (MTurk - www.mturk.com) (Ipeirotis, 2010), responsible for the recruitment and payment of the workers. Possible applications of paid crowdsourcing range from image-based polyp detection (Nguyen et al., 2012), image aesthetics evaluation (Redi and Povoa, 2014), translation tasks (Gao et al., 2015) to content creation for museum apps (Lans et al., 2018) and traffic monitoring (Koita and Suzuki, 2019).
However, when employing annotators with different skills, education and background knowledge, often, we are confronted with a high level of data inhomogeneity (Vaughan, 2017; Daniel et al., 2018). This is especially true for paid crowdsourcing, where there may be dishonest workers trying to maximize their income by submitting as many tasks as possible, but providing incomplete or poor results (Hirth et al., 2011). Getting accurate and high-quality results from non-experts is a major challenge in crowdsourcing (Zhou et al. 2012; Leibovici et al., 2017; Liu et al., 2018).
There are two options for improving the quality of crowdsourced data (Zhang et al., 2016): i) Quality Control on Task Designing and ii) Quality Improvement after Data Collection. For the first approach, techniques such as qualification tasks, control tasks and tutorials are commonplace. A detailed
overview of such methods can be found in the work of Daniel et al. (2018). Although these methods are likely to improve the overall quality of crowdsourcing campaigns, there will still be results that are exceptionally good or bad. This is especially the case for geospatial data acquisition as crowdworkers are typically not familiar with the standards of geospatial data collection (Hashemi and Abbaspous, 2015) and often, even dedicated experts disagree on the true annotation (Walter and Soergel, 2018).
As for the second option, Quality Improvement after Data Collection, respective methods often rely on collecting data multiple times by different crowdworkers in order to infer the true result and eliminate outliers in the process, referred to as "truth inference" (Zheng et al., 2017). Surowiecki (2005) demonstrates for many different applications that aggregating results from multiple contributors can lead to a result that even exceeds the best individual (expert) contribution. This approach is known as "The Wisdom of the Crowds". However, this requires solving tasks multiple times (thus also increasing costs when dealing with paid crowdsourcing) and formulating a specific aggregation rule (Simons, 2004).
The simplest realization of the Wisdom of the Crowds principle is majority voting (Juni and Eckstein, 2017), which is based on the assumption that the majority of workers are trustworthy (Kazemi et al., 2013). In this approach, the employer assigns the same task to multiple crowdworkers, assuming that the true answer aligns with the majority vote (Hirth et al., 2013). Majority voting is a simple but effective technique (Zhang et al., 2016) and has been used in various spatial labelling tasks. For example, cropland identification in remote sensing images (Salk et al., 2016), classification of building footprint data (Hecht et al., 2018), accessibility map generation (Liu et al., 2018) and crowd-based labelling of 3D LiDAR point clouds (Koelle et al., 2020). For localization tasks, such as tree annotation, we cannot perform simple majority voting, but compute an average position from the crowdsourced data, e.g., based on a DBSCAN clustering procedure (Walter et al., 2020).
For the aforementioned adaptions of our previous methodology (Walter et al., 2020), we were influenced by the work of Herfort et al. (2018), who seek tree annotations from crowdworkers by means of providing them with small point cloud subsets with a point-cloud-section-like appearance. Precisely, the authors employ crowdworkers for the classification of such subsets into a set of categories and conduct crown base estimation by means of averaging answers from multiple crowdworkers.
## 3 Methodology
### Data Acquisition & Pre-Processing
Within this work, we intend to generate tree cadastre data by developing a method that is i) independent of any additional data and ii) that runs in a manner where expert involvement can be reduced to a minimum. To achieve a maximum level of independence, we propose capturing point clouds with a mobile laser scanning system like the _GeoSlam Zeb Horizon_ scanner (although data from other sources, such as airborne platforms, can be processed in the same way), which realizes a SLAM-based acquisition built upon a _Velodyne Puck VLP-16_ laser scanner ([https://geoslam.com/solutions/zeb-horizon/](https://geoslam.com/solutions/zeb-horizon/)). The main advantage using this system is the flexibility to acquire any scene of interest in a matter of minutes (all test sites described in Section 4 were captured in less than 30 minutes). Hence, the methodology presented in the following can be easily applied to other sites.
As a next step, the captured point cloud data is to be prepared for the web tool (cf. Section 3.2) and the crowd, respectively. In this regard, we refrain from picking representative circular subsets with 50 m radii as proposed in our previous work (Walter et al., 2020), and modify the generation of tiles to be suitable for large-scale acquisition campaigns. This is accomplished i) by utilizing rectangular tiles and ii) by shaping them to be more similar to profile sections. While the first aspect mainly allows covering extended areas more efficiently, the latter is key to the endeavour of crowd-based tree acquisition. In densely vegetated regions, crowdworkers would be simply overwhelmed with the number of trees in wide areas (cf. Figure 1 (a)), but even experts would fail to succeed due to occlusions of individual trees. In this regard, reducing the spatial extent of tiles can already help, but occlusions will nevertheless remain (cf. Figure 1 (b)). Consequently, we reduce the tile size even further, and turn from quadratic shapes to point cloud profiles (cf. Figure 1 (c)) where individual stems become clearly visible.
Apart from extracting such subsets, the raw point cloud data is subsampled to 20 cm point spacing to minimize the data footprint and to allow for quick loading of data by crowdworkers utilizing the web tool. Furthermore, as visible from Figure 1, we create a pseudo-colorization with a height-above-ground colour scheme to allow for easier interpretation. This requires a Digital Terrain Model (DTM) of the area of interest, which can either be obtained from national mapping agencies or, alternatively, be generated on the fly with respective software packages such as SCOP++ (Pfeifer et al., 2001).
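The following minimal Python/NumPy sketch illustrates these pre-processing steps (profile-shaped tiling, voxel-based subsampling to roughly 20 cm point spacing, and the height-above-ground attribute used for pseudo-colorization). The default tile dimensions, the function names and the DTM callable are illustrative placeholders rather than the exact production pipeline.

```python
# Sketch of the tile pre-processing: profile tiling, subsampling, height above ground.
import numpy as np

def make_profile_tiles(points, tile_length=60.0, tile_depth=10.0):
    """Split an (N, 3) point cloud into long, narrow profile tiles in the xy-plane."""
    keys = np.stack([np.floor(points[:, 0] / tile_length),
                     np.floor(points[:, 1] / tile_depth)], axis=1).astype(int)
    tiles = {}
    for key, point in zip(map(tuple, keys), points):
        tiles.setdefault(key, []).append(point)
    return {k: np.asarray(v) for k, v in tiles.items()}

def subsample(points, spacing=0.2):
    """Keep one point per voxel of the given edge length (approx. 20 cm point spacing)."""
    voxels = np.floor(points / spacing).astype(int)
    _, idx = np.unique(voxels, axis=0, return_index=True)
    return points[idx]

def height_above_ground(points, dtm_height):
    """Per-point height above terrain; dtm_height is a callable (x, y) -> ground height."""
    ground = np.array([dtm_height(x, y) for x, y in points[:, :2]])
    return points[:, 2] - ground
```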
With respect to the type of annotations desired, we advocate marking tree stems and their height using cylinders (with a fixed radius of 0.5 m). We would like to emphasize that we refrain from asking crowdworkers to record the exact radii of tree crowns (e.g., by using minimal bounding cylinders as in the work of Walter et al. (2020)), as this is a less clear task that allows for a wide range of possible valid solutions (e.g., try to imagine vertical bounding lines that clearly separate the tree crowns in Figure 1 (c)).
### Graphical User Interface
If an employer aims to employ crowdsourcing, easy-to-use tools are to be provided. This is especially the case if crowdworkers are to work with probably very unfamiliar data such as 3D point clouds. Therefore, we have improved an initial prototype (Walter et al., 2020) for this purpose in several ways, as described below. In addition, we recommend interested readers to try a demo version of our tool, which is available at [https://crowd.ifp.uni-stuttgart.de/DEMO](https://crowd.ifp.uni-stuttgart.de/DEMO) Trees/main local.php.
As motivated in the previous section, we are interested in annotating trees by means of cylinders (with fixed radii) at the stem location and adjusting the height of the cylinder to fit the stem. Hence, the basic working principle of our web tool also follows this scheme. Precisely, we distinguish between a control panel (Figure 2, right) and three data panels (Figure 2, left). Working with our tool, a crowdworker would first create a new cylinder (spawning in the point cloud centre with default height), then use the data panels to precisely position the cylinder at the correct location and afterwards scale the height by usage of the respective buttons in the control panel.
Figure 1: Exemplary subsets from one of the densely vegetated areas with a tile size of (a) 50m x 50m, (b) 25m x 25m and (c) 25m x 4m.
For the positioning of the cylinders, we provide three different data panels. This includes an interactive 3D view (cf. Figure 2, _bottom left_) that allows to freely interact and explore the point cloud and is intended to be used for an initial coarse positioning, i.e., crowdworkers are asked to move the cylinder from the initial position to a stem location by means of a simple drag and drop operation. Cylinder bottoms are automatically set to the local DTM height (known from providing respective DTM tiles, cf. Section 3.1). The second data panel shows a static 2D side view that can be interpreted as a projection of the point cloud data to the xz-plane (cf. Figure 2, _top left_). This data panel gives an overview of the complete data set as well as a sense of how many trees are included and to be marked.
The next step is to refine the already (roughly) set cylinder, which can be achieved either by adjusting the cylinder position in the 3D view panel or by using the third data panel, which presents a detailed 3D view (cf. Figure 2, _centre_). We extract a subset of the point cloud within 5 m of the current stem and hide all points near the ground by masking points that are among the 5% lowest points in this point cloud subset. This allows a bottom-up view in which stems are easily recognizable.
Figure 2: Our web-based graphical user interface for the acquisition of trees from 3D point clouds. The cylinder that is currently under review is highlighted in yellow and a 5 m cylindrical neighbourhood is cropped and visualized in detail in the right panel. Cylinders that have already been set are indicated in _green_, but can be altered at any time afterwards.
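The bottom-up detail view described above can be produced with a few lines of NumPy; the 5 m radius and the 5% ground-mask quantile follow the description above, while the function name and remaining details are illustrative placeholders.

```python
import numpy as np

def detail_view(points, stem_xy, radius=5.0, ground_quantile=0.05):
    """Crop a cylindrical neighbourhood around the current stem and hide the
    lowest points so that stems become visible from a bottom-up perspective."""
    dist = np.linalg.norm(points[:, :2] - np.asarray(stem_xy), axis=1)
    subset = points[dist <= radius]
    z_cut = np.quantile(subset[:, 2], ground_quantile)
    return subset[subset[:, 2] > z_cut]
```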
As we might encounter tiles that do not include any trees at all, we give crowdworkers the option to mark tiles as including no trees (cf. Figure 2, "I see no stems"-button), which leads to presenting the crowdworker with alternative point clouds until annotations are made. This way, we can minimize the risk of workers misusing this option to quickly skip through the task and being paid undeservedly.
Aside from the methods described above, quality control is critical for generating accurate data in paid crowdsourcing. As mentioned in Section 2, we distinguish between i) Quality Control on Task Designing and ii) Quality Improvement after Data Collection.
With regard to the first option, we have developed a tool suitable for achieving high-quality results and train crowdworkers in the proper use of this tool by means of short introductory videos. Additionally, each of the crowd jobs consists of two point clouds that are to be annotated. For the first one, Ground Truth (GT) data is available and is utilized to assess the quality of acquired data. Only if the acquired data match the GT data in terms of the correct number of stems, a maximum position discrepancy of 1 m and a maximum height discrepancy of 2 m do we accept the contribution as a valid submission and pay the crowdworker. The subsequent second point cloud is the actual payload job, for which no GT is available.
To improve quality after data collection, we present each of the tiles to multiple crowdworkers and then integrate the results as described in the following section.
### Integration of Results in Post-Processing
Since our crowdworkers capture the point cloud multiple times, we need to find a way to integrate these multiple acquisitions while eliminating outliers. Here, we follow the method of Walter et al. (2020), but adapt it to a stem-based acquisition scheme.
The basic idea is to merge all crowd acquisitions and identify clusters, which are supposed to form around true tree stem positions. Since we are unaware of the true number of trees in the data set, we utilize the DBSCAN algorithm (Ester et al., 1996) for clustering, for which this knowledge is not required. DBSCAN, on the other hand, is parametrized by the maximum distance _epsilon_ that is considered for neighbour identification and the minimum number of instances \(n_{min}\) that are required to form a cluster. In a nutshell, DBSCAN visits all points in a data set and classifies them either as noise (if \(<n_{min}\) points are within distance _epsilon_), core points (if \(\geq n_{min}\) are within distance _epsilon_) or border points (if at least one core point is within distance _epsilon_), with the latter two actually contributing to clusters. To allow an intuitive understanding of those parameters, we apply this
clustering to \(xy\)-positions of stems only (and do not include height values at the same time). Thus, _epsilon_ can be understood as minimum distance between individual clusters in the result (precisely, the minimum distance between the two closest points from the clusters) and \(n_{min}\) is to be set in consideration with the number of multiple acquisitions per tile.
After the clustering step, we compute the mean position of all stems included in each cluster. However, dealing with large-scale data and various scenes, it is typically hard to find a value for _epsilon_ that fits each situation depicted in the data sets. Thus, we recommend refining all clusters that comprise more stem acquisitions than the number of multiple acquisitions, since this indicates a value for _epsilon_ that is set too large for the specific region, resulting in merging stems from two different trees that are in close proximity to each other. Hence, if the number of acquisitions exceeds a certain threshold \(n_{max}\), we initialize a second DBSCAN clustering sequence operating only on those points belonging to the respective cluster and iteratively decrease _epsilon_ by 0.5 m until fewer than \(n_{max}\) points are included in all clusters formed.
Finally, as a last step of refinement, we remove outliers within each cluster by comparing the cluster's median tree parameters (both position and height) to all acquisitions in this cluster and iteratively eliminating acquisitions with a position and height difference greater than \(d_{pos}\) and \(d_{h}\), respectively. This step is repeated until all acquisitions are within these error margins compared to the median parameters.
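A condensed sketch of this integration routine, using scikit-learn's DBSCAN implementation, is given below. The control flow follows the description above (initial clustering, re-clustering of oversized clusters with decreasing _epsilon_, and median-based outlier removal); the default parameter values are taken from Section 5, while function and variable names are our own and the exact elimination order within a cluster is an assumption.

```python
# Sketch of the crowd stem integration: crowd_stems is an (N, 3) array with
# columns (x, y, stem height) holding all individual crowd acquisitions of a tile.
import numpy as np
from sklearn.cluster import DBSCAN

def refine_cluster(xy, eps, n_min, n_max, step=0.5):
    """Re-cluster an oversized cluster with decreasing epsilon until no
    sub-cluster holds more than n_max acquisitions."""
    labels = np.zeros(len(xy), dtype=int)  # start with one big cluster
    while eps - step > 0:
        sizes = np.bincount(labels[labels >= 0]) if np.any(labels >= 0) else np.array([0])
        if sizes.max() <= n_max:
            break
        eps -= step
        labels = DBSCAN(eps=eps, min_samples=n_min).fit_predict(xy)
    return labels

def filter_cluster(stems, d_pos=1.0, d_h=2.0):
    """Iteratively drop acquisitions deviating from the cluster's median tree,
    then return the mean of the surviving acquisitions."""
    stems = np.asarray(stems, dtype=float)
    while len(stems) > 0:
        median = np.median(stems, axis=0)
        pos_err = np.linalg.norm(stems[:, :2] - median[:2], axis=1)
        h_err = np.abs(stems[:, 2] - median[2])
        keep = (pos_err <= d_pos) & (h_err <= d_h)
        if keep.all():
            break
        stems = stems[keep]
    return stems.mean(axis=0) if len(stems) > 0 else None

def integrate(crowd_stems, eps=1.0, n_min=4, n_max=15):
    crowd_stems = np.asarray(crowd_stems, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=n_min).fit_predict(crowd_stems[:, :2])
    trees = []
    for label in set(labels) - {-1}:            # -1 marks DBSCAN noise
        members = crowd_stems[labels == label]
        if len(members) > n_max:                # likely two merged trees
            sub = refine_cluster(members[:, :2], eps, n_min, n_max)
            groups = [members[sub == s] for s in set(sub) - {-1}]
        else:
            groups = [members]
        for group in groups:
            tree = filter_cluster(group)
            if tree is not None:
                trees.append(tree)
    return np.array(trees)                      # integrated (x, y, height) per tree
```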
## 4 Data
In order to prove the effectiveness of the proposed methodology in various scenarios, we collected a set of four different point clouds with different characteristics each, which are described in more detail in the following and visualized in Figure 3:
* Scene I: This point cloud represents an urban area with well-maintained tree alleys being comprised of regularly planted trees and some single standing trees.
* Scene II: Increasing the complexity, this scene depicts a cared-for city park area with trees being positioned a bit more irregularly, partly constituting rather densely vegetated areas.
* Scene III: This data set includes an old cemetery with trees being irregular in shape and height and positioned in an equally irregular pattern.
* Scene IV: This last scene is expected to be most complex, as it incorporates a densely vegetated forest scene with tree positions following a highly irregular pattern.
Each of those test sites is subject to pre-processing as discussed in Section 3.1. For tiling, the desired strip length of the point cloud profile sections is 60 m, but we allow ourselves to deviate from this value in favour of obtaining consistent, equally sized tiles for the whole data set (with close-to-desired dimensions), avoiding small remnant tiles at data borders. However, for the densely vegetated forest area (Scene IV), we created shorter and less deep sections pursuing the idea of having roughly an equal number of trees per crowd job and to avoid occlusions as motivated in Section 3.1. Detailed statistics of the data set-up including the number of GT stems can be found in Table 1. GT data was created carefully by the authors using our web tool presented in Section 3.2.
Figure 3: An overview of the different test sites with different characteristics each. While the left column gives a visual impression of the respective data set, the right column depicts the respective area as captured as point cloud data.
To get a feeling for the complexity of the different scenes, Figure 4 visualizes exemplary point cloud sections. Scene I and Scene II have similar levels of complexity, although they originate from point cloud datasets of different characteristics. Scene III seems to be more demanding, which is mainly due to a high level of beneath-crown vegetation and due to neighbouring trees overlapping/intertwining.
For the forest test site (Scene IV), on the other hand, the crowd jobs become rather straightforward as trees in deep forest are typically characterized by an extended crown and scarcely any boughs beneath. Thus, such scenes pose close-to-optimal conditions for creating easily comprehensible profile point cloud sections for a clear identification of individual tree stems. However, individual stem positions are rather close to each other, which might overwhelm crowdworkers, as stems to be set would also be in close proximity to each other. Hence, this might lead to crowdworkers missing some trees and also to a feeling of being stuck as it feels even more tedious to annotate objects with a small distance in between. As a remedy, we stretch the \(xy\)-coordinates by a factor of 1.5 (cf. Figure 4 (d) + (e)).
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline
**Data Set** & **Area Size [ha]** & **\# Tiles** & **Tile Size** & **GT Trees** \\ \hline Scene I: Urban tree alley area & 2.80 & 51 & 64 x 10 m & 146 \\ \hline Scene II: Maintained city park area & 4.27 & 76 & 60 x 10 m & 193 \\ \hline Scene III: Natural city park area & 2.53 & 61 & 52 x 10 m & 217 \\ \hline Scene IV: Forest area & 0.75 & 101 & 20 x 4 m & 386 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data set-up for the test sites.
## 5 Results
For all four test sites described in the previous section, we launched a crowd campaign on the _microWorkers_ platform. We offered the crowd jobs to the _Top Performers_ group only. For each successfully submitted crowd job (i.e., passing the control task, cf. Section 3.2), we granted a payment of $0.10. Each job was posted to 10 different crowdworkers leading to overall campaign costs of $51/$76/$61/$101 for Scenes I-IV plus a 10% _microWorkers_ fee each.
Figure 4: Exemplary profile sections of the different test sites.
For the integration of the multiple acquisitions, we employed the integration scheme discussed in Section 3.3, where the parameters of the initial DBSCAN clustering are set to _epsilon_ = 1 m, in order to keep acquisitions of trees in close proximity apart, and \(n_{min}=4\), i.e., we expect that at least 4 out of 10 crowdworkers are capable of detecting the trees.
Figure 5 visualizes this integration process for an exemplary subset of Scene IV. As can be seen from Figure 5 (a), in most cases, crowd stems concentrate on certain spots with only a narrow scattering range, that is, whenever trees are easily distinguishable. Hence, this scattering range and related to that the spatial extent of clusters formed within this initial DBSCAN clustering step can serve as a measure for the complexity of the scene (cf. Figure 5 (b)). Apart from that, the initial clustering acts as filter for coarse outliers (cf. Figure 5 (a)). Such extended clusters also underline the importance of the second DBSCAN clustering stage to refine the segments initially set into distinct clusters while only operating on stems of each specific cluster from the first stage. As can be observed from Figure 5 (d), this allows to further purify clusters, i.e., detect outliers (cf. Figure 5 (c)) and leads to the aspired subdivision of extended clusters. Please note that an alternative single DBSCAN clustering step with a smaller \(epsilon\) poses the threat of ending up with an excessive number of integrated stem positions and has proven inefficient.
Finally, the last step of purging already set clusters from crowd acquisitions that do not match the current median tree is a strict filtering stage (cf. Figure 5 (e)), necessary to determine a set of stems with a quality sufficient to contribute to the final stem derivation of each cluster (cf. Figure 5 (f)). As depicted in Figure 5 (f) and 5 (g), in most cases we succeed to compute stem positions in an accurate and complete manner. However, the integration fails, whenever stem acquisitions actually belonging to different trees are too close to each other not allowing for a clear distinction between individual acquisitions, thus ending up in integrating acquisitions of multiple trees in one stem position only (e.g., central blue cluster in Figure 5, right column).
Figure 5: Step-by-step visualization of the integration process to eliminate outliers and to derive integrated stem positions.
To not only give a visual impression of the achieved results but also to assess the results quantitatively, we computed the completeness and correctness. However, this requires matching the GT stems with the integrated crowd stems. Eventually, this comes down to determining the number of True Positives (TPs), False Positives (FPs) and False Negatives (FNs). That means that we have to identify which crowd stems and GT stems, respectively, belong to which of the aforementioned categories, which is a non-trivial task as we also encounter many-to-many relationships (Walter et al., 2020).
As we are just dealing with stem positions and heights, we are limited to only comparing these parameters. This means, whenever we are confronted with a GT stem for which there is an integrated crowd acquisition with a position and height difference of \(d_{pos}\leq 1\) m and \(d_{h}\leq 2\) m (same thresholds as for the refinement process in Section 3.3), we consider it a TP. Otherwise, we are confronted with an FN. Similarly, all integrated crowd acquisitions that do not match any GT trees within these limits are marked as FPs. Based on the number of representatives for each of those categories, we finally compute Recall, Precision and Quality values as outlined in the work of Walter et al. (2020).
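The sketch below illustrates this matching and scoring scheme in Python. The greedy one-to-one assignment is a simplification of the more involved matching discussed above, and the Quality measure is assumed here to be the harmonic mean of Recall and Precision, which is consistent with the values reported in Table 2.

```python
# gt and crowd are (N, 3) arrays with columns (x, y, stem height); a GT stem
# counts as TP if an integrated crowd stem lies within d_pos = 1 m and d_h = 2 m.
import numpy as np

def evaluate(gt, crowd, d_pos=1.0, d_h=2.0):
    gt, crowd = np.asarray(gt, dtype=float), np.asarray(crowd, dtype=float)
    matched = set()
    tp = 0
    for g in gt:
        dist = np.linalg.norm(crowd[:, :2] - g[:2], axis=1)
        dh = np.abs(crowd[:, 2] - g[2])
        candidates = [i for i in np.argsort(dist)
                      if dist[i] <= d_pos and dh[i] <= d_h and i not in matched]
        if candidates:
            matched.add(candidates[0])  # greedily match the closest free crowd stem
            tp += 1
    fn = len(gt) - tp                   # GT stems without a matching crowd stem
    fp = len(crowd) - len(matched)      # crowd stems without a matching GT stem
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    quality = 2 * recall * precision / (recall + precision)
    return {"TP": tp, "FN": fn, "FP": fp,
            "Recall": recall, "Precision": precision, "Quality": quality}
```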
From Table 2 we can observe that we reach a quality value well over 90% for all test sites, underlining the robustness and efficiency of our approach. As already assumed, Scenes I and II perform rather similarly, as both test sites are equally complex (cf. Figure 4 (a) & (b)). For Scene III, however, crowdworkers perform worst as this area is the most demanding for identifying individual stems. This is
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline
**Data Set** & **GT** & **TP** & **FN** & **FP** & **Recall/PA [\%]** & **Precision/UA [\%]** & **Quality [\%]** & **Price per TP [\$]** & **Price per ha [\$]** \\ \hline Scene I: Urban tree alley area & 146 & 136 & 10 & 9 & 93.15 & 93.79 & 93.47 & 0.38 & 18.21 \\ \hline Scene II: Maintained city park area & 193 & 187 & 6 & 10 & 96.89 & 94.92 & 95.90 & 0.41 & 17.81 \\ \hline Scene III: Natural city park area & 217 & 185 & 32 & 6 & 85.25 & 96.86 & 90.69 & 0.33 & 24.16 \\ \hline Scene IV: Forest area & 386 & 375 & 13 & 7 & 96.63 & 98.16 & 97.39 & 0.27 & 134.90 \\ \hline \end{tabular}
\end{table}
Table 2: Quality evaluation of the crowd-based tree mapping for the four different test sites along with the price per TP and per hectare in the paid crowdsourcing scenario.
due to crowdworkers missing a significant number of trees (low recall value). Figure 6 gives an idea of which scenes cause this effect. Obviously, such FNs occur whenever trees are in close proximity to each other (cf. Figure 6, top row). If crowdworkers do not carefully set cylinders to tree stems (but with certain offsets), this will cause regions with a high concentration of acquisitions in which the identification of individual clusters is hard to accomplish. Therefore, our integration mechanism also contributes to this effect in a way, since we will fail to distinguish individual sub-clusters and we will unintentionally eliminate TP acquisitions of individual crowdworkers in the purification step (cf. Figure 5 (e)).
Secondly, FNs also occur in regions where crowdworkers are confronted with a high variation of both tree shape and tree height (cf. Figure 6, bottom row). In such cases, crowdworkers typically successfully annotate the most prominent trees but miss smaller trees in between. This is understandable, as such smaller trees are often more reminiscent of beneath-crown vegetation such as shrubs. Although we exemplify these two causes for FNs, i.e., trees in close proximity to each other and a high irregularity with regard to tree types and sizes, for Scene III, similar examples also occur in the other data sets.
Furthermore, Figure 7 visualizes regions in which FPs occur, which are, however, generally scarce. As crowdworkers are sensitive to annotating all kinds of objects resembling tree stems, it is only understandable that lamp posts or wire posts will be annotated as well. However, this only holds true if these posts incorporate features that are similar to branches or tree crowns. For instance, the street light post in Figure 7 (top row) incorporates multiple light sources at different height levels. Hence, at a certain level of abstraction, we could very well be dealing with an extremely pruned tree stem.
Even harder to separate from actual tree stems are lamp posts situated in close vicinity to trees as depicted in Figure 7 (bottom row). From the 2D side view, such objects cannot be separated from actual stems and even interacting with such regions in the 3D views requires a high level of point cloud familiarity and experience. Nevertheless, for such complex scenes, even experts would most likely arrive at differing outcomes. However, we would like to stress that other lamp posts like the ones on the left in Figure 7 (bottom row) correctly lack an annotation.
Figure 6: Exemplary point cloud profiles (left column) along with the integrated crowd acquisitions (right) classified as TP (green) and FN (red) in Scene III, which either root from trees being in close proximity to each other (top) or from a vast range of different tree sizes and shapes (bottom).
Apart from these minor misclassifications, we would like to stress that our proposed methodology not only yields high quality values as listed in Table 2, but is also time- and cost-efficient. For all scenes, the respective campaigns ran for approx. 24 hours, meaning that an accurate map can be ready just the day after the collection of the 3D point cloud. With respect to cost-efficiency, on average, we paid $0.35 for each TP stem, with overall costs depending both on the extent of the area of interest and the complexity of the scene, i.e., the denser the vegetation, the narrower we should choose the tiles, which will eventually lead to higher costs per ha. Hence, for Scene IV, in which we are faced with a dense forest area, the cost per ha is the highest (but individual tree annotations are cheapest, cf. Table 2). Nevertheless, we believe that especially such densely vegetated forest areas are a suitable application area for our approach, since such scenes are often not charted and the production of tree maps would require an enormous effort. Moreover, such areas are easy for crowdworkers to work on. Typically, beneath-crown vegetation is rather limited due to the lack of light passing through dense tree crown layers.
Figure 7: Exemplary point cloud profiles (left column) along with the integrated crowd acquisitions (right) classified as TP (green) and FP (pink) rooting from both Scene I (top) and Scene II (bottom), which are caused by crowdworkers confusing lamp posts and power line posts with tree stems.
In the presence of dense beneath-crown vegetation, on the other hand, our approach is rather limited and is likely to yield unsatisfactory results - at least with respect to completeness. An example vividly illustrating this issue can be found in Figure 8. Although this is an excerpt of the tree alley scene (Scene I), some regions are characterized by dense hedges planted along such a tree alley. Due to this dense vegetation, hardly any tree stems are clearly visible. Thus, the only hints of the presence of trees are the distinct tree crowns, which, however, are not sufficient for crowdworkers to estimate correct stem positions. Additionally, the majority of workers make use of the "I see no stems"-button (cf. Figure 2) and are not to blame for doing so. Please note that the correct tree annotations in Figure 8 are limited to isolated trees standing in front of the described tree and hedge alley.
## 6 Conclusion
Within this paper, we have presented a crowd-based approach to generate tree maps in an accurate and timely manner. To achieve this, we recommend a self-sustaining device for data capturing like a handheld SLAM-based system, so that arbitrary scenes can be captured quickly. However, our approach is equally well suited for more sophisticated data sources that might be carried by airborne platforms. After pre-processing, which can be fully automated, we leverage human processing power, i.e., we employ paid crowdworkers as human processing units. These can be flexibly hired by means of respective crowdsourcing platforms such as _microWorkers_.
Figure 8: An exemplary point cloud profile (left) along with the crowd annotations (right) classified as TP (green) and FN (red). A high number of FNs occurs due to extensive beneath-tree-crown vegetation.
This requires a web tool that is suited for crowdsourced data acquisition, i.e., we need to set up a graphical user interface that allows workers without any experience with 3D data to do the required annotations. Developing this tool is demanding and time-consuming and was conducted in an engineering-like trial-and-error process.
Since we typically face a high degree of heterogeneity in the quality of data collected by crowdworkers, we developed methods to improve quality both during and after collection. For this purpose, we have realized an implementation of the Wisdom of the Crowds principle and developed a DBSCAN-based integration routine, which in turn can run fully automatically. Hence, apart from the initial data capturing step, all remaining steps can run without any expert involvement.
We see the main application of our approach in the generation of tree cadastres, since trees are often not mapped and the generation of corresponding maps is very expensive, if the data collection has to be done by experts.
## Conflict of Interest
The authors have no conflict of interest to declare that are relevant to this article.
|
2306.14135 | Interpretable Neural Embeddings with Sparse Self-Representation | Interpretability benefits the theoretical understanding of representations.
Existing word embeddings are generally dense representations. Hence, the
meaning of latent dimensions is difficult to interpret. This makes word
embeddings like a black-box and prevents them from being human-readable and
further manipulation. Many methods employ sparse representation to learn
interpretable word embeddings for better interpretability. However, they also
suffer from the unstable issue of grouped selection in $\ell1$ and online
dictionary learning. Therefore, they tend to yield different results each time.
To alleviate this challenge, we propose a novel method to associate data
self-representation with a shallow neural network to learn expressive,
interpretable word embeddings. In experiments, we report that the resulting
word embeddings achieve comparable and even slightly better interpretability
than baseline embeddings. Besides, we also evaluate that our approach performs
competitively well on all downstream tasks and outperforms benchmark embeddings
on a majority of them. | Minxue Xia, Hao Zhu | 2023-06-25T05:57:01Z | http://arxiv.org/abs/2306.14135v1 | # Interpretable Neural Embedding with Sparse Words Self-Representation
###### Abstract
Interpretability benefits the theoretical understanding of representations. Existing word embeddings are generally dense representations. Hence, the meaning of latent dimensions is difficult to interpret. This makes word embeddings like a black-box and prevents them from being human-readable and further manipulation. Many methods employ sparse representation to learn interpretable word embeddings for better interpretability. However, they also suffer from the unstable issue of grouped selection in \(\ell 1\) and online dictionary learning. Therefore, they tend to yield different results each time. To alleviate this challenge, we propose a novel method to associate data self-representation with a shallow neural network to learn expressive, interpretable word embeddings. In experiments, we report that the resulting word embeddings achieve comparable and even slightly better interpretability than baseline embeddings. Besides, we also evaluate that our approach performs competitively well on all downstream tasks and outperforms benchmark embeddings on a majority of them.
## 1 Introduction
Word embeddings have gained immense success and obtained state-of-the-art results in a wide range of NLP tasks, such as parsing [1, 1], named entity recognition [13], and sentiment analysis [2]. One of the main attractions of these embeddings is that they capture the underlying complexity and latent trends in the data. However, a critical issue is that they are difficult to interpret; hence, people are unaware of what each dimension represents. [1] examined top-ranked words per dimension ("top dimensions words") in benchmark embeddings like GloVe [2] and Word2Vec [1] and demonstrated that they do not form semantically coherent groups. For instance, the word 'Internet' is mainly composed of the following top dimensions words in GloVe (as shown in Table 1), which is far from being interpretable.
People have utilized sparse coding that transforms word embeddings into sparse vectors [1, 1]. It learns interpretable embeddings by applying non-negative and sparsity constraints on original word embeddings.
However, there are two technical drawbacks to these solutions. Firstly, they fail to deal with grouped selections due to the employment of \(\ell 1\) (Lasso) [14]. More specifically, if there is a group of words among which the pairwise correlations are very high, Lasso tends to select only one word from that group and does not care which one is selected [15]. For instance, suppose there exist two groups of words representing the two semantic concepts \(college\) and \(creature\). Given the target word \(youtube\), Lasso tends to select only the most similar word in each group and combine them to represent the target word. Secondly, online learning makes the learned dictionary unstable and unrepeatable. It may yield a different basis of the dictionary each time the model is trained. Such a phenomenon is observed in SPINE [1]: we found that the overlap between the top 5 dimensions accounts for only 7% when SPINE is trained twice. Last but not least, rare words are rarely selected as basis elements of the dictionary, because they can neither be fitted well by other words nor fit other words well.
To alleviate these problems, we propose a method based on words self-representation (SR) to generate highly efficient and interpretable word
embeddings. Motivated by the linear algebraic structure of word senses Arora et al. (2018), we extend the linear assertion and assume a word in a subspace can always be expressed as a linear combination of other words in that subspace. Therefore, we use a neural model with Sparse Words Self-Representation (SWSR) to obtain desired embeddings. The self-expressiveness of data guarantees the stability of the basis in the dictionary, and the neural approach can efficiently avoid the issue of grouped selection.
Our contributions can be arranged as follows: 1) We formulate the word embedding with self-expressiveness property of data. 2) We propose a neural model to solve words self-representation with non-negative and sparsity constraints. 3) Last but not least, our approach is competitive to existing methods on many downstream tasks.
## 2 Related Work
To learn interpretable word embeddings, current approaches are generally based on sparse coding. There are two major types of inputs: inputs derived from Positive Point-wise Mutual Information (PPMI) and inputs derived from neural embeddings. Murphy et al. (2012); Luo et al. (2015) apply sparse matrix factorization with non-negativity on the PPMI inputs. Subramanian et al. (2018); Faruqui et al. (2015) employ neural inputs such as pre-trained GloVe and Word2Vec embeddings. Comparing Subramanian et al. (2018) and Faruqui et al. (2015), the first approach uses a neural-based method to enforce sparse and non-negative embeddings, while the second utilizes convex optimization to transform dense representations into sparse and interpretable ones. Other methods, like Park et al. (2017), rotate word vectors to enhance interpretability. Our method is built on the neural approach and differs from other methods, since it is more expressive than linear methods like matrix factorization or rotations. Moreover, words in our method are naturally allowed to participate at varying levels in different dimensions.
## 3 Methodology
In order to obtain stable and interpretable embeddings, we introduce the self-expressiveness property of data, stating that each word in a union of subspaces can be efficiently represented as a linear combination of other words. Normally, such a representation is not unique because there are infinitely many ways to express a word through a combination of other words. Ideally, a sparse representation of a word corresponds to a combination of a few words from its own subspace.
From the viewpoint of sparse representation, this avoids learning a dictionary in advance and therefore yields a more stable 'dictionary'. We implement words self-representation in a shallow neural network and modify the loss function to impose sparsity constraints on its parameters.
### Learning Embedding with Self-Expressiveness Property
Given a matrix of word embeddings \(X=[x_{1},\ldots,x_{|V|}]\in R^{d\times|V|}\), where \(|V|\) is the vocabulary size and \(d\) is the number of dimensions of the input word embeddings, we assume that the word representations are drawn from multiple linear subspaces \(\{S_{i}\}_{i=1,\ldots,K}\). Each word in a subspace can be regarded as a linear combination of other words in the same subspace. This property is called the self-expressiveness property (SEP) in Rao et al. (2008). We can express the property with a single equation, _X = XC_, where \(C\) is the self-representation coefficient matrix. It has been shown in Rao et al. (2008) that, under the assumption that the subspaces are independent, minimizing certain norms of \(C\) guarantees that \(C\) has a block-diagonal structure (up to certain permutations), i.e., \(c_{ij}>0\) if and only if points \(x_{i}\) and \(x_{j}\) lie in the same subspace. We can therefore leverage the words self-representation \(C\) to construct new word embeddings that are interpretable, because only similar words belong to the same subspace and thus have similar representations in \(C\).
The self-representation coefficient matrix can
\begin{table}
\begin{tabular}{|c|c|} \hline & Top dimensions words \\ \hline \multirow{3}{*}{’Internet’} & thousands, residents, palestinian \\ & intelligence, government, foreign \\ & nhl, writer, writers, drama \\ \hline \end{tabular}
\end{table}
Table 1: An example of uninterpretable word embeddings. Each row shows the top-ranked words of one of the top dimensions of the GloVe vector for the word ’Internet’.
be obtained by solving the optimization problem:
\[\min\|C\|_{p}\;\;s.t.\;\;X=XC,\;diag(C)=0,\;C_{ij}\geq 0 \tag{1}\]
where \(\|\cdot\|_{p}\) represents an arbitrary matrix norm, and the optional diagonal constraint on \(C\) prevents trivial solutions for sparsity inducing norms, such as the \(\ell 1\)-norm. Various norms for \(C\) have been proposed in the literature, e.g., the \(\ell 1\)-norm in Sparse Subspace Clustering (SSC) Elhamifar and Vidal (2009), the nuclear norm in Low Rank Representation (LRR) Liu et al. (2013) and Low Rank Subspace Clustering (LRSC) Favaro et al. (2011), and the Frobenius norm in Least-Squares Regression Lu et al. (2012) (LSR).
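As a toy illustration of how such a coefficient matrix can be obtained, the following NumPy sketch approximately solves the \(\ell 1\)-regularized variant with projected (sub)gradient steps. It is only meant to show the block-diagonal tendency of \(C\) on synthetic data; the step sizes, data, and names are ours and it is not the procedure used in the methods cited above.

```
import numpy as np

def self_representation(X, lam=0.05, lr=0.05, steps=2000):
    # Approximately solve  min_C 0.5*||X - XC||_F^2 + lam*||C||_1
    # s.t. C >= 0, diag(C) = 0, via projected (sub)gradient descent.
    n = X.shape[1]
    C = np.zeros((n, n))
    for _ in range(steps):
        grad = X.T @ (X @ C - X) + lam      # smooth part + l1 subgradient (valid for C >= 0)
        C = np.maximum(C - lr * grad, 0.0)  # gradient step + non-negativity projection
        np.fill_diagonal(C, 0.0)            # forbid the trivial solution C = I
    return C

# toy data: two (roughly orthogonal) 1-dimensional subspaces with three words each
rng = np.random.default_rng(0)
u, v = rng.normal(size=(2, 50))
X = np.stack([a * u for a in (1.0, 0.8, 1.3)] + [b * v for b in (1.1, 0.7, 0.9)], axis=1)
X /= np.linalg.norm(X, axis=0)
print(np.round(self_representation(X), 2))  # roughly block-diagonal: each word explained by its own group
```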
### Self-Expressive Layer in Shallow Neural Network
In this paper, we propose a neural model to obtain the words self-representation. Our goal is to train a shallow neural network whose parameter \(C\) expresses its inputs according to SEP. To this end, we introduce a new layer that encodes the notion of the self-expressiveness property. To encode words self-representation under the architecture of a neural network, we reformulate Eq. 1 into another form defined as:
\[\min_{C}\frac{1}{2}\|X-XC\|_{2}^{2}+\lambda\Omega(C) \tag{2}\]
where \(\Omega(C)\) is the regularization terms. Typical terms include sparsity-induced norms \(\|\cdot\|_{1}\)Elhamifar and Vidal (2009), and nuclear norm Favaro et al. (2011).
In contrast to the sparse coding or convex optimization approach of solving the parameter \(C\)Elhamifar and Vidal (2009); Liu et al. (2013), we propose a shallow neural network with a sparsity penalty, a non-negative constraint on \(C\) to obtain sparse, interpretable embedding. Then we can define the forward step in our method like:
\[\hat{X}=Xf(C) \tag{3}\]
where \(f\) is an appropriate element-wise activation function that ensures \(C\) is non-negative, and \(C\in R^{|V|\times|V|}\) holds the model parameters. In order to produce non-negative values and exact zeros, we use Capped-ReLU Subramanian et al. (2018) as the activation function to map continuous values into [0, 1]. Mathematically, we define the activation as:
\[\textbf{Capped-ReLU}(x)=\begin{cases}0,&x\leq 0\\ x,&0<x<1\\ 1,&x\geq 1\end{cases} \tag{4}\]
In this setting, given \(X\), our shallow neural network is trained to minimize the following loss function.
\[\textbf{L}=\textbf{RL}+\lambda_{1}\textbf{ASL}+\lambda_{2}\textbf{PSL} \tag{5}\]
where Reconstruction Loss (**RL**) is an average loss in reconstructing the input representation from the learned representation, the loss function can be defined with a mean square error like:
\[\textbf{RL}=\frac{1}{|V|}\sum\limits_{i=1}^{|V|}\|X_{i}-\hat{X_{i}}\|_{2}^{2} \tag{6}\]
In order to simulate sparsity-induced norm in our neural model, we penalize any deviation of the observed average activation value in \(C\) from the desired average activation value of a given hidden unit. We formulate this Average Sparsity Loss (**ASL**) as follows.
\[\textbf{ASL}=\sum\limits_{i}^{n}(\max(\frac{\sum\limits_{j}^{n}c_{ij}}{n}- \rho,0)) \tag{7}\]
where \(\rho\) is the sparsity ratio parameter.
Not only do we want k-sparse activation values for the input, but we also hope to obtain discriminative representations such as binary codes. Thus a Partial Sparsity Loss (**PSL**) term Subramanian et al. (2018) is used to penalize values that are neither close to 0 nor 1, pushing them toward either 0 or 1. The formulation of PSL is as follows:
\[\textbf{PSL}=\frac{1}{n}\sum\limits_{i}^{n}\left(C_{i}\mathbf{1}-C_{i}C_{i}^{T}\right) \tag{8}\]
where \(C_{i}\) is the \(i\)-th row of the parameter \(C\).
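For concreteness, the following is a minimal PyTorch sketch of the model and loss above. It is our illustrative reading of Eqs. 3-8 (PSL is implemented in its elementwise form), not the released implementation; the dimensions, initialization, and hyperparameters are placeholders.

```
import torch

class SWSR(torch.nn.Module):
    # Shallow self-representation layer: X_hat = X @ CappedReLU(C)  (Eqs. 3-4)
    def __init__(self, vocab_size):
        super().__init__()
        self.C = torch.nn.Parameter(0.01 * torch.rand(vocab_size, vocab_size))

    def forward(self, X):                       # X: (d, |V|) dense input embeddings
        A = torch.clamp(self.C, 0.0, 1.0)       # Capped-ReLU keeps coefficients in [0, 1]
        return X @ A, A

def swsr_loss(X, X_hat, A, rho=0.05, lam1=1.0, lam2=1.0):
    rl  = ((X - X_hat) ** 2).sum(dim=0).mean()                 # reconstruction loss, averaged over words
    asl = torch.clamp(A.mean(dim=1) - rho, min=0.0).sum()      # average sparsity loss (Eq. 7)
    psl = (A - A * A).mean()                                   # partial sparsity: push entries toward {0, 1}
    return rl + lam1 * asl + lam2 * psl

# toy usage with random 300-d "pre-trained" vectors for a 1000-word vocabulary
X = torch.randn(300, 1000)
model = SWSR(vocab_size=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    X_hat, A = model(X)
    loss = swsr_loss(X, X_hat, A)
    opt.zero_grad(); loss.backward(); opt.step()
```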
## 4 Experiments
To enable fair comparisons, experiments are conducted within the framework of SPINE Subramanian et al. (2018). Briefly, we use pre-trained GloVe and Word2Vec embeddings as inputs for training, and both embeddings are 300-dimensional.
We compare the embeddings generated by our model (denoted SWSR) against GloVe and Word2Vec, as well as SPOWV and SPINE. All hyperparameters for SPINE and SPOWV follow the authors' recommendations. To test the quality of these embeddings, we use them in the
following benchmark downstream classification tasks: 1) News classification on the 20 Newsgroups dataset (Sports/Computers/Religion); 2) Noun Phrase Bracketing [10]; 3) Question Classification [11]. For the text classification tasks, we use the average of the embeddings of the words in a sentence as features and experiment with classifiers such as SVM, logistic regression, and random forests.
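As an illustration of this evaluation protocol (our own sketch, with toy data standing in for the real datasets and embeddings), sentence features are built by averaging word vectors and fed to an off-the-shelf classifier:

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def sentence_features(tokens_list, emb, dim=300):
    # Average the vectors of the words in each sentence; words missing from the
    # embedding table are simply skipped (an assumption for this sketch).
    feats = []
    for tokens in tokens_list:
        vecs = [emb[t] for t in tokens if t in emb]
        feats.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.vstack(feats)

# emb: dict mapping word -> 300-d vector (GloVe, SPINE, SPOWV, or SWSR output)
emb = {"internet": np.random.rand(300), "network": np.random.rand(300), "church": np.random.rand(300)}
X_train = sentence_features([["internet", "network"], ["church"]], emb)
y_train = [0, 1]
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.predict(sentence_features([["internet"]], emb)))
```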
### Downstream Task Evaluations
To test the quality of the embeddings, all performances are evaluated through accuracy (Tables 2 and 3). It is clear that embeddings generated by our method generally perform competitively on all benchmark tasks and do significantly better on a majority of them.
### Interpretability
We investigate the interpretability of word embeddings on the word intrusion detection task, seeking to measure how coherent each dimension of these vectors is. For a given dimension, we construct a word set containing the top 5 words in that dimension and a noisy word (also called the intruder) drawn from the bottom half of that dimension which ranks high (top 10%) in some other dimension.
We follow the automatic evaluation metric **DistRatio** proposed by [20] to measure the interpretability of word representations. The formulations are presented as follows:
\[\textbf{DistRatio}=\frac{1}{d}\sum_{i=1}^{d}\frac{\textbf{InterDist}}{\textbf{ IntraDist}} \tag{9}\]
\[\textbf{IntraDist}=\sum_{w_{j}\in top_{k}(i)}\sum_{\begin{subarray}{c}w_{k}\in top_{k}(i)\\ w_{k}\neq w_{j}\end{subarray}}\frac{dist(w_{j},w_{k})}{k(k-1)} \tag{10}\]
\[\textbf{InterDist}=\sum_{w_{j}\in top_{k}(i)}\frac{dist(w_{j},w_{b_{i}})}{k} \tag{11}\]
where \(top_{k}(i)\) and \(w_{b_{i}}\) denote the top-k words on dimension \(i\) and the intruder word for dimension \(i\), respectively. We use the Euclidean distance for \(dist(w_{i},w_{j})\). \(IntraDist\) stands for the average distance between top words, while \(InterDist\) represents the average distance between top words and the intruder word.
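For clarity, a small NumPy sketch of how DistRatio can be computed from an embedding matrix is given below. The intruder-selection heuristic and names are ours and are meant only to be illustrative (and unoptimized), not a description of the exact evaluation code.

```
import numpy as np

def dist_ratio(W, k=5):
    # DistRatio (Eq. 9) for an embedding matrix W of shape (|V|, d).
    V, d = W.shape
    ranks = np.argsort(np.argsort(-W, axis=0), axis=0)   # ranks[w, i]: rank of word w on dim i (0 = highest)
    ratios = []
    for i in range(d):
        order = np.argsort(-W[:, i])
        top = order[:k]                                    # top-k words on dimension i
        # intruder: from the bottom half of dim i, but in the top 10% of some other dimension
        cands = [w for w in order[V // 2:] if (np.delete(ranks[w], i) < V // 10).any()]
        intruder = cands[0] if cands else order[-1]
        dist = lambda a, b: float(np.linalg.norm(W[a] - W[b]))
        intra = sum(dist(a, b) for a in top for b in top if a != b) / (k * (k - 1))
        inter = sum(dist(a, intruder) for a in top) / k
        ratios.append(inter / intra if intra > 0 else 1.0)
    return float(np.mean(ratios))

print(dist_ratio(np.random.rand(200, 10)))   # close to 1.0 for random vectors, higher for coherent dimensions
```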
As we can see from Table 4, SWSR achieves comparable performance with previous baseline approaches and even performs best on pre-trained Word2Vec embeddings. These results confirm that our model generates more interpretable word embeddings. We attribute the success of SWSR to the expressiveness of a neural model; the non-negativity and sparsity constraints further lead to semantically coherent dimensions. In this way, SWSR constructs expressive and interpretable word representations.
## 5 Conclusion
We have presented a method that converts word representations derived from any state-of-the-art embedding model into sparse representations by implementing self-representation within a shallow neural network. Experiments demonstrate that embeddings generated by our method generally perform competitively with currently recognized methods on a diverse set of downstream tasks, while achieving comparable and slightly better interpretability. Moreover, the use of words self-representation guarantees that multiple words from the same top dimensions are selected to construct interpretable representations, and the full usage of data points in SR ensures a repeatable and stable dictionary.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Model & Sparsity & DistRatio \\ \hline Glove & 0\% & 0.99 \\ SPINE & 85\% & 1.25 \\ SPOWV & 90\% & 1.12 \\ SWSR & 95\% & **1.27** \\ \hline Word2Vec & 0\% & 0.98 \\ SPINE & 85\% & 1.28 \\ SPOWV & 90\% & 1.08 \\ SWSR & 99\% & **1.34** \\ \hline \end{tabular}
\end{table}
Table 4: Average performances across all vector models on the word intrusion task, measured with \(DistRatio\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Tasks & GloVe & SPOWV & SPINE & SWSR \\ \hline Sports & 95.31 & 95.80 & **96.42** & 91.53 \\ Religion & 80.62 & 81.27 & 79.80 & **81.59** \\ Computers & 71.64 & 78.16 & 75.35 & **80.77** \\ \hline NPBracket & 73.31 & 69.89 & **73.58** & 68.69 \\ \hline Question. & 82.80 & 87.45 & 88.75 & **90.28** \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy on three downstream tasks using Glove word embeddings.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Tasks & Word2Vec & SPOWV & SPINE & SWSR \\ \hline Sports & 93.01 & **95.72** & 93.96 & 94.1 \\ Religion & 80.09 & 83.77 & 83.43 & **84.17** \\ Computers & 71.37 & 76.32 & 74.26 & **81.85** \\ \hline NPBracket & **77.91** & 70.34 & 72.56 & 70.65 \\ \hline Question. & 89.02 & 90.50 & 92.34 & **93.49** \\ \hline \end{tabular}
\end{table}
Table 3: Accuracy on three downstream tasks using Word2Vec word embeddings. |
2307.03760 | CODAG: Characterizing and Optimizing Decompression Algorithms for GPUs | Data compression and decompression have become vital components of big-data
applications to manage the exponential growth in the amount of data collected
and stored. Furthermore, big-data applications have increasingly adopted GPUs
due to their high compute throughput and memory bandwidth. Prior works presume
that decompression is memory-bound and have dedicated most of the GPU's threads
to data movement and adopted complex software techniques to hide memory latency
for reading compressed data and writing uncompressed data. This paper shows
that these techniques lead to poor GPU resource utilization as most threads end
up waiting for the few decoding threads, exposing compute and synchronization
latencies.
Based on this observation, we propose CODAG, a novel and simple kernel
architecture for high throughput decompression on GPUs. CODAG eliminates the
use of specialized groups of threads, frees up compute resources to increase
the number of parallel decompression streams, and leverages the ample compute
activities and the GPU's hardware scheduler to tolerate synchronization,
compute, and memory latencies. Furthermore, CODAG provides a framework for
users to easily incorporate new decompression algorithms without being burdened
with implementing complex optimizations to hide memory latency. We validate our
proposed architecture with three different encoding techniques, RLE v1, RLE v2,
and Deflate, and a wide range of large datasets from different domains. We show
that CODAG provides 13.46x, 5.69x, and 1.18x speed up for RLE v1, RLE v2, and
Deflate, respectively, when compared to the state-of-the-art decompressors from
NVIDIA RAPIDS. | Jeongmin Park, Zaid Qureshi, Vikram Mailthody, Andrew Gacek, Shunfan Shao, Mohammad AlMasri, Isaac Gelado, Jinjun Xiong, Chris Newburn, I-hsin Chung, Michael Garland, Nikolay Sakharnykh, Wen-mei Hwu | 2023-07-07T08:27:01Z | http://arxiv.org/abs/2307.03760v1 | # CODAG: Characterizing and Optimizing Decompression Algorithms for GPUs
###### Abstract
Data compression and decompression have become vital components of big-data applications to manage the exponential growth in the amount of data collected and stored. Furthermore, big-data applications have increasingly adopted GPUs due to their high compute throughput and memory bandwidth. Prior works presume that decompression is memory-bound and have dedicated most of the GPU's threads to data movement and adopted complex software techniques to hide memory latency for reading compressed data and writing uncompressed data. This paper shows that these techniques lead to poor GPU resource utilization as most threads end up waiting for the few decoding threads, exposing compute and synchronization latencies.
Based on this observation, we propose CODAG, a novel and simple kernel architecture for high throughput decompression on GPUs. CODAG eliminates the use of specialized groups of threads, frees up compute resources to increase the number of parallel decompression streams, and leverages the ample compute activities and the GPU's hardware scheduler to tolerate synchronization, compute, and memory latencies. Furthermore, CODAG provides a framework for users to easily incorporate new decompression algorithms without being burdened with implementing complex optimizations to hide memory latency. We validate our proposed architecture with three different encoding techniques, RLE v1, RLE v2, and Deflate, and a wide range of large datasets from different domains. We show that CODAG provides 13.46\(\times\), 5.69\(\times\), and 1.18\(\times\) speed up for RLE v1, RLE v2, and Deflate, respectively, when compared to the state-of-the-art decompressors from NVIDIA RAPIDS.
## I Introduction
Compression has become a vital component of the data storage infrastructure due to unprecedented data size growth in big-data, high-performance computing (HPC), and data analytics applications [1, 2, 3, 4, 5, 6]. Compression aims to reduce the number of bits used to represent the data through various encoding techniques specialized for different data types and patterns. Compressed data must be first decompressed before it can be processed by compute operations.
As GPUs are increasingly used to meet the compute throughput demands of emerging big-data workloads [7, 8, 9], data analytics pipelines typically read compressed data from the storage devices into the GPU memory and deploy one or more GPU kernels for decompression before performing query computation on the data. Prior works [6, 10] have shown this approach's benefits over storing and moving uncompressed data. Thus, high-throughput decompression on the GPU is becoming a key requirement to avoid performance bottlenecks in these pipelines. Our profiling of data-dependent queries with the state-of-the-art GPU-enabled RAPIDS pipeline [1], e.g., "What is the average fare per trip for ride starting from Williamsburg?" on the NYC taxicab dataset [11], a popular dataset for evaluating data analytics pipelines, shows that approximately 91% of the total GPU execution time is consumed by the decompression kernel.
Prior works [12, 13, 14] have proposed various data-layout transformations on compressed datasets to reduce the GPU execution time spent on decompression. However, applying data-layout transformations on existing massive datasets of modern big-data applications [2, 15, 16, 17, 10] is an expensive proposition, making decompression techniques that require such transformations impractical. Therefore, it's much more desirable for a decompression scheme to support the standard compressed data formats, e.g., ORC [2], and not require data-layout transformations, a design goal that we aim to achieve in this work.
To better understand why decompression accounts for so much of the GPU execution time, we performed a detailed profiling study of the state-of-the-art GPU decompressor in RAPIDS. As modern compression schemes support indexed access to multiple chunks of compressed data that correspond to fixed-size chunks of uncompressed input data, the state-of-the-art decompressor exploits chunk-level parallelism by assigning compressed chunks to thread blocks. However, data dependencies still serialize the decoding operations within each chunk. Thus, the state-of-the-art decompressor dedicates a single thread, the leader thread, in each thread block to
decode the compressed data and broadcast the decoded information to the remaining threads in the thread block. All threads then collaborate to execute the memory writing or copying operations based on the decoded information. The state-of-the-art decompressor further enhances the memory bandwidth utilization and hides more of the memory latency by dedicating one warp in each thread block to prefetch compressed data.
By profiling the RAPIDS decompressor, we observe that (1) the memory-bandwidth-driven resource provisioning strategy in the state-of-the-art decompressor in fact fails to effectively hide compute operation, memory, and synchronization latencies; and (2) the root cause of the exposed latencies is the insufficient number of decoding (leader) threads that cannot take full advantage of the GPU's parallelism and provide the GPU warp schedulers with sufficient numbers of concurrently available instructions to hide latencies.
Based on these key observations, we propose CODAG, a modular and flexible framework for high-throughput decompression on GPUs. In CODAG, we assign compressed chunks to warps rather than thread blocks to drastically increase the exploitation of the GPU's parallelism to process more chunks at any given time, while maintaining the ability to exploit fine-grained parallelism within each chunk. We refer to the assignment of each chunk to a warp as _warp-level decompression compression_. We believe that the warp-level decompression offers a number of benefits in CODAG. First, warp-level decompression results in a much higher number of active decoding threads, which helps the hardware scheduler to effectively hide memory, synchronization, and decoding compute operation latencies. Second, since warp-level synchronization primitives have much lower latency and higher throughput than block-level synchronization, warp-level decompression reduces synchronization overhead.
On top of that, we further propose (1) a novel strategy for CODAG to ensure that all on-demand reads or writes in each warp are performed on a cache-line granularity and are thus coalesced, making efficient use of the available GPU memory bandwidth; (2) a clean abstraction for common operations during decompression with the associated Application Programming Interface (API) functions as implemented in decompression kernels. This new abstraction provides the flexibility for programmers to easily incorporate their own encoding techniques into the decoding section of the CODAG kernels while leveraging the rest of the kernel to achieve the desired level of latency tolerance and memory bandwidth efficiency.
We make the following key contributions in this paper.
* We conduct a systematic characterization study on the state-of-the-art GPU decompressor, identify the performance bottlenecks, and determine that decompression on modern GPUs is a compute-bound process.
* We present CODAG, a novel and superior GPU resource provisioning strategy for decompressors with various optimization techniques and abstraction. The proposed APIs can support common decompression operations so that users can easily integrate their compression algorithms into CODAG with low programming effort.
* We demonstrate CODAG's effectiveness and flexibility by implementing three encoding techniques, i.e., RLE v1, RLE v2, and Deflate. Results based on the NVIDIA A100 GPUs show that CODAG achieves 13.46\(\times\), 5.69\(\times\), and 1.18\(\times\) speedups over a state-of-the-art decompressor, respectively.
## II Background
In this section, we provide an overview of common compression encoding techniques and how their decompression algorithms have been traditionally parallelized. We then briefly explain why the resource provisioning strategies used in the state-of-the-art GPU decompressors limit the performance and efficiency of decompression on GPUs.
### _Traditional Encoding Techniques_
Generally, compressors take a contiguous sequence of bytes and attempt to encode that sequence in as few symbols as possible. The primary measure of merit of a compressor is the ratio of the size of the uncompressed data over that of the compressed data, referred to as the compression ratio. To maximize compression ratio, compressors use an arsenal of encoding techniques each of which targets different data types and patterns. Here, we describe some of the most common encoding techniques.
The run-length encoding (RLE) technique takes advantage of the same byte or value being repeated in a contiguous sequence, called a _run_. The RLE v1 encoding scheme encodes each _run_ with two values: a literal and the length of the run.
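A minimal sketch of this idea (our own simplification in Python; the actual ORC RLE v1 byte format packs run headers and literal groups differently) is shown below.

```
def rle_encode(values):
    # Toy run-length encoder: emits (run_length, literal) pairs.
    runs = []
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        runs.append((j - i, values[i]))
        i = j
    return runs

def rle_decode(runs):
    out = []
    for length, literal in runs:           # decoding is inherently sequential within a stream
        out.extend([literal] * length)
    return out

data = [7, 7, 7, 7, 0, 0, 5]
print(rle_encode(data))                    # [(4, 7), (2, 0), (1, 5)]
assert rle_decode(rle_encode(data)) == data
```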
Delta encoding provides a high compression ratio when there is a contiguous sequence of values and the differences (deltas) between adjacent values are small. Delta encoding stores the starting value and the number of values in the sequence followed by the deltas to create a compressed data representation. The RLE v2 encoding scheme leverages delta encoding in addition to RLE to capture more data patterns and further improve the compression ratio over RLE v1.
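The sketch below illustrates delta encoding in the same simplified style; a fixed-delta run, such as a monotonically increasing key column, collapses to a single (start, count, delta) pattern, which is exactly what RLE v2 exploits on top of RLE v1.

```
def delta_encode(seq):
    # Toy delta encoder: store the first value, the count, and the per-element deltas.
    deltas = [b - a for a, b in zip(seq, seq[1:])]
    return seq[0], len(seq), deltas

def delta_decode(start, count, deltas):
    out = [start]
    for d in deltas:
        out.append(out[-1] + d)            # each value depends on the previously decoded one
    assert len(out) == count
    return out

start, count, deltas = delta_encode([100, 101, 102, 103, 104])
print(deltas)                               # [1, 1, 1, 1] -> a fixed-delta run
print(delta_decode(start, count, deltas))   # [100, 101, 102, 103, 104]
```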
Dictionary-based encoding generates compact encoding when sequences of values are repeated between different sections of the uncompressed data. Given a sequence of bytes, the decompressor can use a dictionary to look up the sequence to see if it has been encountered in the past. If so, the repeated instance can be encoded with 2 values, an offset into the dictionary, and the length of the repeated sequence.
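The decoder side of dictionary encoding can be sketched as follows (a toy LZ-style decoder in Python, not the exact Deflate token format). Note that a back-reference may overlap the bytes it is still producing, which is the case the memcpy primitive in Section IV has to handle.

```
def lz_decode(tokens):
    # Toy decoder: each token is either a literal byte or an (offset, length)
    # pair copying bytes from the already-decoded output (the implicit dictionary).
    out = bytearray()
    for tok in tokens:
        if isinstance(tok, int):
            out.append(tok)
        else:
            offset, length = tok
            start = len(out) - offset
            for k in range(length):        # byte-by-byte so that length > offset (overlap) works
                out.append(out[start + k])
    return bytes(out)

# "ababab": two literals followed by a back-reference whose length exceeds its offset
print(lz_decode([ord('a'), ord('b'), (2, 4)]))   # b'ababab'
```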
In dictionary encoding, some sequences are repeated more than others. Thus, some references (offset/length pairs) appear more frequently than others [18] in the compressed stream. The Deflate encoding scheme sorts the literals and the length-offset pairs according to their frequency into a Huffman tree [19] so that the number of bits used for each is inversely proportional to their frequency, thus minimizing the total number of bits used.
### _Parallelizing Decompression_
Designing high-throughput parallelizable decompression schemes is challenging due to the nature of compressed data. First, compressors output a sequence of variable-length blocks of compressed data, often without any alignment guarantees (See SS II-A), resulting in size irregularities. Second, the encoding of a bit or byte in a compressed stream depends on the actions taken for the bits or bytes before it in the stream, creating data-dependencies in the compressed stream. Thus, decompression requires starting at the first byte of the compressed stream and decoding the stream sequentially, limiting parallelism during decompression.
To expose data parallelism during decompression, modern compression data formats divide the uncompressed input into fixed-size chunks, compress each chunk independently, and keep the bytes of the compressed chunk sequential in memory [1, 2, 16]. During decompression, each compressed chunk is assigned to a processing unit so that it can be decompressed independently of other chunks. The data format further specifies metadata for offsets in the uncompressed and/or compressed streams.
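The sketch below illustrates this chunked layout with a host-side Python mock-up: chunks are compressed independently (here with zlib standing in for any codec), their compressed offsets are recorded, and different chunks can then be decoded in parallel. The names and helper structure are illustrative, not a specific file format.

```
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 128 * 1024   # fixed-size uncompressed chunks

def compress_chunked(data):
    # Compress each fixed-size chunk independently and record the compressed offsets.
    chunks = [zlib.compress(data[i:i + CHUNK], 9) for i in range(0, len(data), CHUNK)]
    offsets, pos = [], 0
    for c in chunks:
        offsets.append(pos)
        pos += len(c)
    return b"".join(chunks), offsets

def decompress_chunked(blob, offsets):
    # Chunks are independent, so they can be decoded in parallel
    # (one chunk per decompression unit on a GPU).
    bounds = offsets + [len(blob)]
    pieces = (blob[bounds[i]:bounds[i + 1]] for i in range(len(offsets)))
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, pieces))

data = bytes(range(256)) * 4096                      # 1 MiB of synthetic data
blob, offsets = compress_chunked(data)
assert decompress_chunked(blob, offsets) == data
```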
### _State of the Art in Exploiting GPUs for Decompression_
As GPUs become increasingly adopted by big-data applications due to their high compute throughput and memory bandwidth, there has been growing interest in high-performance decompression on GPUs. Generally, high-performance decompressors on GPUs assume modern data formats that support chunking. Let's take a look at how the decompression kernel is mapped to GPU resources in the state-of-the-art NVIDIA RAPIDS implementation [1, 4, 5].
As shown in Figure 1(a), after finding the offset for each compressed chunk, RAPIDS assigns each compressed chunk to a thread block, referred to as a decompression unit. Each decompression unit consists of a prefetching warp, batch buffers in the fast shared memory, and decoding/writing warps. RAPIDS opts to use a specialized prefetching warp that fetches bytes of the compressed stream from the slow global memory in a coalesced manner to the batch buffers. Having a specialized prefetching warp in the same thread block as the decoding/writing warps has the benefit of allowing the batch buffers to reside in the fast shared memory and the prefetch operations to overlap with the decoding operations for the same chunk. Both features help to reduce the latency of decompressing each chunk.
In the decoding warp, a leader thread is responsible for decoding the bytes from the compressed stream, as this is a sequential process. During decoding, the leader thread reads the data from the batch buffers. When the information has been decoded, the leader thread synchronizes with the other threads in the thread block and directs them to collectively perform a writing operation to the output decompressed data stream in a coalesced manner.
## III Characterization Study of State-of-the-art Decompressor
Understanding and optimizing decompression on GPUs is essential as the decompression can consume approximately 91% of total GPU time on a modern data analytics pipeline. However, none of the prior works have performed a detailed analysis of existing state-of-the-art decompression schemes on GPUs. To this end, we conduct an in-depth characterization study of the state-of-the-art decompressor in NVIDIA RAPIDS [1], and identify its key performance bottlenecks.
**Setup:** We use a variety of datasets across different domains as shown in Table IV and evaluate RAPIDS's decompressor with RLE v1 and Deflate schemes. Due to space limitations, we only present the results from two datasets (MC0 and TPC) but similar trends are observed for the rest. To characterize decompressor performance, we use the system configuration described in Table III and use the NVIDIA Nsight profiling tools to capture important metrics.
**Key insights:** From the left side of Figure 2, we know that for the RLE v1 encoding scheme, both compute and memory resources are far from being fully utilized. As we
Fig. 1: Comparison of CODAG with the state-of-the-art NVIDIA RAPIDS decompressor implementation. In the CODAG design, the conditional statement A checks whether there is enough space in the input buffer to store a full cacheline of data, and B checks whether threads need to write uncompressed data based on the decoded information.
Fig. 2: The peak throughput percentages and the stalled instruction distribution of RAPIDS decompressor for RLE v1.
discussed in SSII-C, these results point to latency exposed due to insufficient parallelism during the decoding process as all threads in a thread block wait on a barrier for the leader thread to complete its decoding operation. To understand the impact of the barrier, we measured the distribution of conditions causing an SM scheduler stall cycle, which is shown on the right side of Figure 2. This metric provides the percentage of threads waiting for each type of stalling condition during the cycles where an SM scheduler cannot find a warp ready for execution.
From Figure 2, during the SM scheduler stall cycles, most (up to 80%) of the threads are not ready because they are waiting on a barrier for the leader thread to complete its current decoding operation. Apart from barriers, up to 20% of the instructions are waiting for the resolution of branch conditions (Branch Resolve) when there is a lack of ready threads for the SM scheduler to dispatch for execution. The Wait metric indicates that a warp is stalled waiting on a fixed latency (arithmetic/logic unit) execution dependency. Moreover, some threads do not receive any work during the writing stage as run lengths can be smaller than the size of the thread block, further degrading the achievable performance.
For Deflate, unlike the traditional view that it is memory bandwidth bound [1, 13], our profiling results in Figure 3 show that its memory bandwidth utilization is less than 25%, which indicates that it is not limited by memory bandwidth but is compute limited. In contrast to RLE v1, Deflate has a much higher utilization of compute throughput (up to 65%) than RLE v1 because of its high computational complexity in dictionary look-up and Huffman tree traversal. We need more detailed metrics and analysis in order to discern whether the performance is limited by a particular type of compute pipeline, memory latency, or compute latency.
We use the profiler to analyze the utilization of the different GPU compute pipelines for Deflate decompression. The right side of Figure 3 shows that the arithmetic logic units (ALU) are up to 55% utilized while the fused multiply/add and load-store units are up to 35% and 20% utilized. This modestly high degree of ALU and FMA utilization mainly comes from the fact that during decoding, the leader thread executes a large number of arithmetic instructions for every byte to identify the next action for generating the output for the byte.
Nevertheless, all these results point to memory latency and/or compute latency as the main limiter of Deflate performance. Due to the high computational complexity of Deflate decoding process, it takes a long time for the leader thread to complete the decoding of each byte while most threads in the thread block are waiting for the decoding thread. During this time, there are not enough decoding threads for the SM scheduler to tolerate the compute pipeline latency. As a result, the execution speed of Deflate is limited by compute pipeline latency in addition to barrier stalling.
**Summary and take-aways:** From the profiling results, we see that the decompression speed of RLE v1 is limited by barrier latency and that Deflate is limited by barrier stall latency as well as the fixed latency of compute pipelines. Neither the large number of writing threads nor the specialized prefetching warps help address this limitation. Any further improvement in decompression speed for both RLE v1 and Deflate would require more simultaneously running decoding threads (more chunks being decoded in parallel), which is an important design goal of the CODAG architecture.
## IV CODAG Design
The goal of CODAG is to provide a flexible framework for high-throughput decompression on GPUs without requiring developers to implement sophisticated, GPU-specific optimizations. CODAG's architecture, resource provisioning strategy, and kernel APIs enable developers to easily incorporate their encoding techniques of choice into CODAG and rely on the rest of the framework to achieve high decompression speed. On the one hand, as discussed in Section III, the key to improved decompression speed is to increase the number of simultaneously running decompression threads so that the SM scheduler can better hide the synchronization/memory/compute-pipeline overheads.
On the other hand, it is also critical to maintain coalesced access patterns when reading compressed data, reading dictionary value sequences from previously decompressed output, and writing decompressed data, so that the memory access cost does not become excessive and create new bottlenecks. Thus, the CODAG architecture must address three key challenges in providing an efficient and effective solution that can be used with a wide variety of encoding techniques:
* CODAG must provision GPU execution resources so that many threads can simultaneously decode more chunks in each SM to effectively tolerate latencies and efficiently utilize hardware resources.
* As the naive on-demand reading and writing operations can make poor use of memory bandwidth and exhibit excessive latency, CODAG must optimally coalesce these accesses to minimize memory access overhead in spite of the irregular consumption of compressed data and sporadic generation of decompressed data by the decoder.
* As the sophisticated optimizations for efficiently using memory bandwidth and tolerating latencies require significant programming skills and efforts, the optimizations in the CODAG framework must be generalized into reusable primitives for a wide range of encoding techniques.
### _CODAG System Overview and Abstraction_
Figure 1b shows an overview of the CODAG framework. CODAG uses warps as decompression units instead of the larger thread blocks used by the state-of-the-art decompressor, enabling CODAG to simultaneously decompress many
Fig. 3: The compute/memory peak throughput percentages and compute unit utilization of RAPIDS decompressor for Deflate.
compressed chunks. This enables the hardware scheduler to overlap instruction execution from several independent warps and efficiently hide compute pipeline and memory latencies. In CODAG, all threads in each warp perform the decoding operations and also collaborate to read compressed data and write decompressed data. Thus, CODAG avoids the high barrier-synchronization overhead that the state-of-the-art decompressor incurs.
In CODAG, each warp executes a decompression GPU kernel that implements a control flow of three main operations: sequential decoding, coalesced reading, and coalesced writing, as shown in Figure 1b. All threads in a warp follow the same control flow to avoid using any broadcast synchronization operations. The control flow consists of a main loop for the decoding process and two conditional statements for on-demand reading and writing operations. During decoding, all threads sequentially execute the same decoding process in compliance with the serial nature of the compressed data stream. In our current framework implementation, the sequential decoding code for different combinations of pertinent encoding techniques can be easily incorporated into the kernel as a CUDA device function.
For coalesced on-demand reading operations, all threads in the warp collaborate to fetch the next cache-line section (e.g., 128 bytes in the A100 GPUs) of the compressed input data stream in a coalesced manner into an input buffer. Having a buffer for the fetched bytes minimizes the number of fetches in spite of the potentially irregular patterns of compressed data consumption during decoding, which helps to reduce the cost of accessing the compressed data. In comparison with the RAPIDS decompressor, CODAG does not have a specialized prefetch warp in each thread block to asynchronously prefetch input data while the thread block's leader thread is decoding. However, with many independent warps being simultaneously executed, CODAG can still tolerate the memory latency.
When a writing operation is required during decoding, all threads in the warp collaborate to execute the coalesced on-demand writing of a cache-line of the uncompressed output data to the output buffer. Compared to the RAPIDS decompressor, this design at first glance may not seem to offer as much memory parallelism when writing into each output chunk, as RAPIDS uses thread blocks with up to 1024 threads available for writing into each output chunk.
However, the benefit with CODAG is that warp-level barriers are significantly less expensive than their block-level counterparts. Furthermore, with the same thread block size, CODAG supports decompressing and writing up to \(32\times\) more independent chunks in parallel. The increased data parallelism can easily compensate for the loss of memory-level parallelism within a chunk.
As the reading pattern is the same across all encoding techniques, the on-demand reading operation code is portable to all (combinations of) encoding techniques. However, the on-demand writing operations can vary depending on the encoding techniques and need to be specialized by decompressor developers. To shield the decompressor developers from the complexity of GPU algorithmic optimizations that are critical to high-performance writing operations, CODAG provides several major types of optimized writing primitives to support a wide range of popular encoding techniques.
### _CODAG Meta-data and Performance APIs_
CODAG provides an infrastructure for realizing the warp-level decompression as well as the optimized reading and writing operations to lower the programming effort when using CODAG for different encoding techniques. To isolate the complexity of GPU optimizations from decompressor developers, CODAG provides input_stream and output_stream abstractions that transparently manage the synchronization, coalescing and metadata for the on-demand reading and writing operations. Developers can simply call the member functions, shown in Table I and Table II, as needed to read and write data efficiently.
To support the highly efficient on-demand reading and writing operations, input_stream and output_stream internally manage metadata such as the offsets of the reading and writing pointers, the state of the input buffer, and the fetched bytes in the top entry of the input buffer. With these reading and writing primitives, CODAG only requires small changes to the sequential decoding code, as the memory access optimizations are hidden behind the internal functions. In the next few sections, we discuss the CODAG decompression scheme and its optimizations in more detail.
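To illustrate how a decoder is written against these primitives, the following is a minimal Python sketch. The stream classes and the toy run format are our own mock-ups of the member functions listed in Tables I and II; the actual CODAG API is a device-side CUDA interface whose calls internally perform warp-collaborative, coalesced reads and writes.

```
class InputStream:
    # Mock of CODAG's input_stream: hands out bits from a compressed byte stream.
    # The real implementation refills its buffer with coalesced, warp-wide reads.
    def __init__(self, data):
        self.bits = "".join(f"{b:08b}" for b in data)
        self.pos = 0
    def fetch_bits(self, n):
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v
    def peek_bits(self, n):
        return int(self.bits[self.pos:self.pos + n], 2)

class OutputStream:
    # Mock of CODAG's output_stream: the real write_run/memcpy are warp-collaborative.
    def __init__(self):
        self.out = []
    def write_byte(self, b):
        self.out.append(b)
    def write_run(self, init, length, delta):
        self.out.extend(init + k * delta for k in range(length))

def decode_toy_rle(s_in, s_out, n_runs):
    # Toy decoder written against the API: each run is an 8-bit length, start, and delta.
    for _ in range(n_runs):
        length = s_in.fetch_bits(8)
        start = s_in.fetch_bits(8)
        delta = s_in.fetch_bits(8)
        s_out.write_run(start, length, delta)

s_in, s_out = InputStream(bytes([4, 10, 1, 3, 7, 0])), OutputStream()
decode_toy_rle(s_in, s_out, n_runs=2)
print(s_out.out)    # [10, 11, 12, 13, 7, 7, 7]
```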
### _Warp Level Decompression_
The large granularity of the decompression units in the state-of-the-art limits the scheduler's ability to hide latency, because the vast majority of threads end up spending most
\begin{table}
\begin{tabular}{|l|l|} \hline
**Member Functions** & **Description** \\ \hline fetch\_bits(int n) & Fetch the next n bits in the compressed stream. \\ \hline peek\_bits(int n) & Peek at the next n bits in the compressed stream. \\ \hline \end{tabular}
\end{table} TABLE I: CODAG APIs for input_stream.
Fig. 4: Illustration of the issued instructions between the baseline and CODAG in a steady state.
of their execution time waiting for a few leader threads to finish sequential decoding. The left side of Figure 4 is a cartoon illustration of how the instructions are issued during the decompression for the state-of-the-art decompressor. We assume an SM with two schedulers and an occupancy capacity of 4 warps for the example. In this tiny example, assume that the state-of-the-art has a decompression unit of a thread block that consists of two warps. Thus, only two independent compressed chunks (green and orange) can be assigned to a single SM in the state-of-the-art where one of the two warps is specialized to prefetch data from global memory into the buffers, and both warps work together to perform writes to global memory after the leader has decoded a symbol.
We assume that during the first part of the timeframe, each scheduler is issuing instructions of the warp containing the leader thread of one of the decompressor units and, for simplicity, that the batch buffer is full so there is no prefetch activity. Since there is effectively only one thread active in each scheduler, the latency of the function units is fully exposed, shown as pipeline bubbles between the decoding operations. After the decoding operations are complete, all threads synchronize, incurring the synchronization pipeline bubbles before the write operations. The SM execution resources are underutilized due to the pipeline bubbles.
The right side of Figure 4 shows the execution timing of CODAG. CODAG uses warps as a decompression unit to allow more chunks (green, orange, purple, blue) to be decompressed simultaneously. This gives the hardware scheduler more independent warps to work with. Assume that the green and purple warps are assigned to scheduler 1 whereas the orange and blue are assigned to scheduler 2. During the first part of the time frame, each scheduler can issue the decoding operations of its two warps alternately so that they tolerate each other's latency. Thus, the SM execution resources are much better utilized. In reality, the reading, decoding, and writing operations of all the warps will be mixed together to more fully utilize the memory bandwidth than what is shown in Figure 4. That is, since there are no inter-warp dependencies in CODAG, the SM's scheduler has a wider pool of useful instructions to schedule from and thus can better reduce pipeline bubbles and hide latencies. Furthermore, there is no more synchronization overhead before the writing operations.
### _CODAG Decoding Scheme_
As the prefetch warp is removed in CODAG's warp-level decompression scheme, all threads in the warp must collectively execute on-demand reading operations during decoding whenever there are not enough fetched bytes. If a single leader thread performed the decoding, its decoding state would need to be saved before each reading operation and restored afterwards so that the leader thread could continue decoding after the reading operation.
To avoid the above-mentioned overhead during the transitions between the decoding operations and the reading/writing operations, CODAG uses an all-thread decoding technique by activating all threads in the warp to execute the same serial decoding operations on the same chunk. In this decoding scheme, all threads have a local copy of decoded information as they execute the same decoding process thus broadcasts are no longer needed. As for the reading operations, all threads in the warp only need to perform a low-cost warp-level synchronization to ensure a consistent view of the input buffer before reading as newer GPU architectures [20, 21] do not guarantee lock-step execution of the threads in a warp. After the reading operation, all threads in the warp can return naturally to continue the decoding operation, eliminating the need for explicitly saving and restoring the decoding state.
Since the decoding operation is redundantly performed by all threads in a warp, one might take a pessimistic view of the all-thread decoding technique. Thus, we use a micro-benchmark to compare the ALU compute throughput of the single-thread decoding and the all-thread decoding. For this micro-benchmark, the number of arithmetic operations executed per global memory access is varied from 1 to 100,000 to capture the compute throughput at different compute intensity levels. The result shows that the difference in compute throughput of the two decoding techniques never exceeds 0.1%. Thus, the additional arithmetic operations executed due to the all-thread decoding technique do not degrade the achieved compute throughput.
```
1: if len - counter < 128 then
2:     barrier()
3:     read_idx = offset + lane * 4
4:     enq_idx = (head + counter + lane) % len
5:     Buff[enq_idx] = Input[read_idx]
6:     counter += 128
7:     barrier()
8: end if
9: req_val = Buff[head]
10: head = (head + 1) % len
11: counter -= 1
12: return req_val
```
**Algorithm 1** On-demand Reading With Shared Input Buffer
### _Coalesced On-Demand Reading Operation_
To maximize the benefits of the warp-level decompression scheme, CODAG also provides the APIs for highly efficient on-demand reading operations since it is also critical to maintain the coalesced access patterns in reading compressed data to optimize GPU memory bandwidth utilization. Thus, CODAG implements coalesced on-demand reading operations behind the input_stream abstraction.
CODAG's on-demand reading operation with its default configuration is illustrated with the pseudocode in Algorithm 1. By default, CODAG deploys an input buffer in shared memory, to temporarily store the fetched bytes, minimizing the number of global memory accesses. The size of the input buffer is at least twice the cacheline size so that entire
\begin{table}
\begin{tabular}{|l|l|} \hline
**Member Functions** & **Description** \\ \hline write\_byte(int8\_t b) & Write a literal (byte) to the output buffer \\ \hline write\_run(T init, size\_t & Write a run based on an initial value of \\ len, T delta) & type T, a length, and a delta. \\ \hline memcpy(int off, int len) & Memory operation for dictionary-based \\ & encoding with the offset and length. \\ \hline \end{tabular}
\end{table} TABLE II: CODAG APIs for output_stream.
cachelines can be easily enqueued in the input buffer. An input buffer also has a head pointer for threads to determine where to read from in the input buffer, and a length to keep track of the number of bytes left in the buffer.
For a given on-demand reading operation, all threads in a warp first collectively refill the shared input buffer with bytes fetched from a cacheline. Each thread in the warp fetches 4-bytes after a warp-synchronization barrier to ensure coalesced memory accesses. Once the input buffer is refilled, threads update the offset into the input in the input_stream instance. Then, the threads read the requested bits from the input buffer using the head pointer and update the internal state of its input_stream for the next on-demand reading operation.
**Using Registers:** The shared memory usage for the input buffer might limit the applicability of CODAG in applications with high shared memory utilization. To address this, CODAG provides a configuration where the input buffer is implemented with the threads' local registers instead of the shared memory. In this configuration, each thread uses two 32-bits registers to implement the input buffer.
Similar to the default configuration, the input buffer is first refilled when there is enough space to store a complete cacheline. This is done by leveraging double buffering where two sets of 32 local registers across the warp operate as a double buffer. Since the data is stored in a local register, CODAG exploits warp-wise broadcast when fetching requested bytes from the input buffer. For this operation, the head pointer is first used to identify which thread's local register is holding the requested bytes so that the requested bytes can be broadcast by the appropriate thread to all other threads in a warp.
### _Coalesced On-demand Writing Operation_
```
1: if not is_4B_aligned(write_ptr) then
2:     pad_bytes = (4 - (write_ptr % 4)) % 4
3:     byte_memcpy(offset, pad_bytes)
4: end if
5: barrier()
6: num_iter = ceil((len - pad_bytes) / 128)
7: for (int i = 0; i < num_iter; i++) do
8:     read_idx = (read_ptr + lane * 4) / 4
9:     r1 = out_4B[read_idx]
10:    r2 = out_4B[read_idx + 1]
11:    rdata = funnel_shift(r1, r2, read_ptr % 4)
12:    out_4B[(write_ptr + lane * 4) / 4] = rdata
13:    read_ptr += 128; write_ptr += 128
14:    barrier()
15: end for
```
**Algorithm 2** Memcpy(offset, len)
**Writing a literal value:** As this operation only requires a few bytes to write to the output buffer, only one thread executes this operation.
**Writing a run:** This operation is commonly used for RLE-based encoding techniques. For this operation, each thread independently computes its decompressed value based on the initial value, delta, and thread index and then writes it to the output buffer at the appropriate offset to maximize memory utilization.
**Memcpy:** Common dictionary-based encoding techniques compress data with a length specifying the number of bytes to copy from the dictionary to the output buffer and an offset into the output buffer from the last byte written before the current memcpy starts. Unlike a regular memcpy, using a warp to perform a memcpy operation in a dictionary-based compression algorithm has a couple of challenges.
First, as the dictionary is implicit in the previously written output, the reading and the writing pointer both point to the same output buffer. Second, common algorithms only provide byte alignment for both the input and the output. These characteristics make it difficult to exploit the warp's parallelism in memcpy, especially when we would like each thread to copy more than 1-byte (e.g. 4-bytes) at a time for better coalescing, reduced executed memory instructions, and improved GPU cacheline and memory bandwidth utilization.
To address these challenges, we design CODAG's fast memcpy operation, shown in Algorithm 2. For a given memcpy operation, the threads first fetch the writing pointer from output_stream and check whether it is 4-byte aligned (line 1 in Algorithm 2). If it is not aligned, the threads collectively copy up to 3 bytes to guarantee 4-byte alignment (line 3). Once the writing pointer is aligned, the warp synchronizes to ensure that the copied bytes are visible to the whole warp. Then, the warp executes copy operation in iterations to cover the entire memcpy length, with each thread writing one 4-byte value to the output each iteration. Each thread independently calculates its reading pointer for a given iteration based on its warp lane index (line 8). Since the reading pointer is not guaranteed to be 4-byte aligned, each thread reads two consecutive 4-byte elements (line 9-10), then bit-shifts and concatenates them to generate the correct 4-byte output element (line 11) to write to its assigned offset (line 12).
Note, in each iteration, each thread in the warp independently reads two 4-byte elements written in previous iterations and writes one complete 4-byte element. We found that using 4-byte granularity for the memcpy loop body provided the best trade-off between overhead to align the writing pointer, the amount of parallelism exposed, and cacheline utilization. All threads in a warp synchronize at the end of each loop iteration, preventing race conditions between iterations.
Moreover, with dictionary compression, it is possible that the length of a memcpy is larger than the offset. In this special case, the byte sequence from the offset till the end of the output (before the memcpy executes) is to be appended to the output, possibly multiple times. Just like in Algorithm 2, we first align the writing pointer to 4-bytes and then assign
each thread one 4-byte output element to generate per memcpy loop iteration. However, in this special case, each 4-byte output element is created by combining two consecutive 4-byte elements in a fixed circular window of 4-byte elements, starting from the 4-byte element pointed to by the offset until the last 4-byte element of the output buffer. Thus, each thread can independently use modulo-arithmetic to find the two consecutive 4-byte elements in the specified window, and bit-shift and concatenate them to generate its output element each iteration. We don't show the algorithm listing for this case due to space constraints.
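For completeness, a sequential Python sketch of this special case is given below. It mirrors the amount of work one warp lane would produce per iteration, but it is only an illustration of the circular-window logic described above (with names of our own choosing), not the CUDA implementation.

```
def overlap_memcpy(out, offset, length):
    # Overlapping copy where length > offset: the last `offset` bytes of the output
    # form a fixed window that is appended repeatedly. `out` is a bytearray; the copy
    # proceeds one 4-byte output element per step, mirroring one warp lane's work, and
    # the writing pointer is assumed to be 4-byte aligned (as Algorithm 2 guarantees).
    window = bytes(out[len(out) - offset:])            # fixed window, captured before the copy
    written = 0
    while written < length:
        step = min(4, length - written)                # one 4-byte element per lane per iteration
        for k in range(step):
            out.append(window[(written + k) % offset]) # modulo-arithmetic into the circular window
        written += step
    return out

buf = bytearray(b"xyzab")
print(overlap_memcpy(buf, offset=2, length=7))         # bytearray(b'xyzababababa')
```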
## V Evaluation
Our evaluation demonstrates that: (1) CODAG framework improves the performance of decompression by efficiently utilizing the hardware resources provided by the GPU. CODAG provides 13.46\(\times\), 5.69\(\times\), 1.18\(\times\) decompression throughput speed up (geomean) on ORC RLE v1, ORC RLE v2, and Deflate compared to the state-of-the art implementation in NVIDIA RAPIDS [1]. (2) A decompressor implemented in the CODAG framework can rival the state-of-the-art GPU implementation without application-specific optimizations.
### _Experiment Setup_
We evaluate CODAG using three different encoding techniques, namely RLE v1 and RLE v2 from the Apache ORC file format, and Deflate. These encoding schemes are chosen for their wide applicability, but other encodings can also be supported. We compare CODAG and the state-of-the-art NVIDIA RAPIDS baseline in the environment shown in Table III. We lock the GPU's clock frequency to its peak before running the experiments.
### _Evaluation Datasets_
For each algorithm, we evaluate CODAG and the baseline on seven real-world sample datasets curated from four industrial application domains, as shown in Table IV. Each dataset has unique properties, enabling us to evaluate the proposed scheme on not only different types of encoding techniques but also different data types and patterns. For example, MC0 and MC3 have long run lengths, while TPC and TPT contain many repeated patterns. CD2 and TC2 follow power-law distributions. HRG is a human reference genome dataset with repeated text patterns over the characters A, T, C, G, and N.
For the baseline, we use the official data management tools provided by ORC [2] to compress data with RLE v1 and RLE v2 algorithms. To compress the data with deflate, we use the zlib library [22] with a compression level of 9, the highest compression level. The chunk size for the original data is fixed to be 128KB for both CODAG and the baseline.
### _Latency Analysis_
To analyze how effectively CODAG hides latency and utilizes resources, we measured the stalled instruction distribution and compute/memory throughput percentages of CODAG and the baseline. For the stalled instruction distribution, we measured the number of instructions stalled due to synchronization (SB) and Math Pipeline Throttle (MPT). Measuring SB helps to understand thread occupancy during decompression, and measuring MPT helps us to understand the pressure on computing units as MPT means the instructions are stalled to wait on a specific oversubscribed math pipeline.
In this experiment, MC0 and TPC datasets are used for the measurement as they have drastically different compression ratios. As shown in Figure 5, we observe a dramatic difference in SB percentages between the baseline and CODAG. The lower SB percentage shows that there are more active threads during decompression. Moreover, we observe substantially more instructions are stalled due to MPT in CODAG, showing CODAG utilizes more compute resources and shifts decompression from latency-bound to compute-bound on GPUs.
Figure 6 shows that CODAG achieves a higher memory throughput percentage compared to the baseline. This is because CODAG optimally utilizes memory bandwidth by leveraging coalesced on-demand reading/writing operations and enabling more memory instructions to be executed in parallel by higher ILP.
### _Decompression Throughput_
We now evaluate the decompression throughput (output bytes per second) of CODAG and RAPIDS on three encoding techniques. Figure 7 shows the decompression throughput of
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c|}{**Compression Ratio**} & \multicolumn{2}{c|}{**Avg Comp Sym Len**} \\ \cline{2-6} & **RLE v1** & **RLE v2** & **Deflate** & **RLE v1** & **Deflate** \\ \hline \hline MC0 & 0.023 & 0.022 & 0.017 & 29.7 & 81.3 \\ \hline MC3 & 0.038 & 0.039 & 0.015 & 40.5 & 156.6 \\ \hline TPC & 0.867 & 0.637 & 0.119 & 1.00 & 10.1 \\ \hline TPT & 1.41 & 0.99 & 0.042 & 1.00 & 32.6 \\ \hline CD2 & 0.286 & 0.308 & 0.625 & 20.9 & 5.89 \\ \hline TC2 & 0.087 & 0.075 & 0.0172 & 34.3 & 67.3 \\ \hline HRG & 0.975 & 0.972 & 0.305 & 1.00 & 7.76 \\ \hline \end{tabular}
\end{table} TABLE V: **Compression ratios and average compressed symbol length when the datasets are compressed by RLE v1, RLE v2, and Deflate. The compressed symbol length for RLE v2 is excluded due to various compressed types for RLE v2.**
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline \multicolumn{1}{|l|}{**Dataset**} & \multicolumn{1}{l|}{**Category**} & \multicolumn{1}{l|}{**DType**} & \multicolumn{1}{l|}{**Size (GB)**} \\ \hline \hline Mortgage Col 0 (MC0) [23] & Analytics & uint\_64 & 4.86 \\ \hline Mortgage Col 3 (MC3) [23] & Analytics & fp32 & 2.43 \\ \hline NYC Taxi Passenger Count (TPC) [11] & Analytics & int\_8 & 3.07 \\ \hline NYC Taxi Payment Type (TPT) [11] & Analytics & char & 7.41 \\ \hline Criteo Dense 2 (CD2) [17] & Recommenders & uint\_32 & 0.73 \\ \hline Twitter COO Col 1 (TC2) [24] & Graph & uint\_64 & 5.47 \\ \hline Human Reference Genome (HRG) [25] & Genomics & char & 3.1 \\ \hline \end{tabular}
\end{table} TABLE IV: **Real-world dataset used for evaluating CODAG.**
each encoding technique for both CODAG and the baseline, and Figure 8 shows the CODAG speedup over the baseline decompressor. As shown in Figure 7, CODAG achieves 38.07 GBps, 26.87 GBps, and 51.96 GBps geo-mean decompression throughput while the state-of-the-art decompressor achieves 2.83 GBps, 4.72 GBps, and 44.18 GBps. Thus, CODAG achieves 13.46\(\times\), 5.69\(\times\), and 1.18\(\times\) geo-mean speed ups compared to the baseline as shown in Figure 8.
CODAG sees significantly higher decompression throughput on the MC0 and MC3 datasets. This is because highly compressible datasets amplify the effective memory bandwidth, requiring fewer bytes to decompress each chunk and enabling more active threads during writing operations. In our collection, the mortgage datasets (MC0 and MC3) are the most compressible and thus show the highest decompression throughput.
We note that CODAG achieves substantially higher speedups over the baseline for the RLE-based encoding techniques than for Deflate. This is because, with the state-of-the-art decompressor, the average run length must be long for RLE v1 to achieve good thread occupancy; otherwise, most threads become inactive during writing operations.
### _Single-Thread Decoding vs All-Thread Decoding_
Next, we present the benefit of performing all-thread decoding over single-thread decoding for RLE v1 (RLE v2 has similar behavior) and Deflate decompression. When we compare the decompression throughput of the two decoding techniques, the results show that CODAG with the all-thread decoding technique achieves 1.17\(\times\) and 1.19\(\times\) higher geo-mean decompression throughput than the single-thread decoding technique. As discussed in § IV-D, this shows CODAG successfully reduces the number of broadcast operations while maintaining the achieved compute throughput of the ALUs.
### _Impact of Finer Decompression Unit_
This section presents an ablation study to show that (1) reducing the number of threads responsible for decoding a chunk is key to CODAG's performance gains, and (2) CODAG's warp-level decompression, i.e., without a prefetching warp, further improves performance.
For this evaluation, we add a prefetch warp to CODAG, such that two warps are scheduled per chunk - one warp to prefetch compressed input data and one warp to perform the decompression. This makes the CODAG architecture more similar to the baseline except that the baseline allocates more threads to decompress each chunk - 1024 threads for RLE v1 and RLE v2 and 128 threads for Deflate.
Figure 8 shows CODAG with a prefetching warp is able to achieve 7.10\(\times\), 4.33\(\times\), and 1.02\(\times\) higher geo-mean decompression throughputs compared to the baseline for the three respective encoding techniques. Based on the result, finer decompression units enable the GPU hardware to tolerate compute operation and memory latency more effectively.
However, the performance gains for each of the three encoding techniques are lower than those achieved by CODAG's single-warp decompression unit, i.e., without a prefetching warp. This is because the decompression process is not bounded by memory bandwidth, so manually dedicating hardware resources to prefetching in order to tolerate memory latency limits overall resource utilization. Additionally, CODAG minimizes the cost of memory access latency through coalesced on-demand memory operations. Thus, the benefit of reallocating the resources used for prefetching to decoding outweighs the benefit of the prefetching units.
### _Impact of Hardware Resources_
In the previous sections, we establish that CODAG offers superior decompression bandwidth on A100 GPU compared to the baseline. In this section, we constrain the CODAG and baseline design by downgrading the GPU to NVIDIA Volta V100. From this experiment, we analyze the impact of hardware resources on CODAG and the scalability of CODAG and the baseline design.
As shown in Figure 8, CODAG achieves 11.19\(\times\), 4.39\(\times\), and 1.10\(\times\) geo-mean speedups over the baseline on the V100, while it achieves 13.46\(\times\), 5.69\(\times\), and 1.18\(\times\) geo-mean speedups over the baseline on the A100. The impact of compute under-utilization becomes more critical as newer GPUs provide more compute resources and memory bandwidth. Thus, we observe that CODAG scales better than the baseline with the available hardware resources, as it more efficiently exploits the parallelism provided by the GPUs.
## VI Related Work
**Parallel compression/decompression with CPUs:** Multiple efforts to parallelize compression and decompression algorithms on CPUs have been proposed in the past [26, 27, 28]. pigz decompression is inherently hard to parallelize due to data dependency between the variable-length compressed blocks [26, 29]. Zstd [28] and pbzip2 [27] implementations add decompressed size as part of their headers to exploit parallelism. However, such coarse-grain parallelism across the compressed blocks is insufficient to fully utilize GPUs [13].
Fig. 5: Stalled instruction distribution comparison between CODAG and RAPIDS baseline.
Fig. 6: Comparing the compute/memory peak throughput of CODAG and RAPIDS. Comp/Mem stand for compute/memory throughput percentage.
**GPU Decompression:** Several prior works have proposed implementing efficient compression or encoding algorithms for GPUs [12, 13, 14, 30, 31, 32, 33, 34, 35]. The prior works [12, 13, 14, 30, 33] define their own file formats to achieve high decompression throughput for the Deflate algorithm on GPUs and propose splitting the input data into smaller data blocks. However, unlike CODAG, these approaches require expensive pre-processing steps to recompress the entire dataset into their custom formats. Other prior works [31, 32, 34, 35] leverage algorithm-specific optimizations, limiting flexibility.
## VII Conclusion
In this work, we introduced CODAG, a flexible framework for high throughput decompression on GPUs without requiring decompressor developers to implement sophisticated, GPU-specific optimizations. CODAG exploits warp-level decompression and highly efficient on-demand reading and writing operations to effectively hide latencies. Moreover, CODAG completely removes the broadcast operations by dedicating all threads to participate in decoding. Our evaluations show that CODAG can achieve 13.46\(\times\), 5.69\(\times\), and 1.18\(\times\) speed up compared to the state-of-the-art decompressor for RLE v1, RLE v2, and Deflate encoding techniques.
|
2306.14653 | Optimization of the Generalized Covariance Estimator in Noncausal
Processes | This paper investigates the performance of the Generalized Covariance
estimator (GCov) in estimating and identifying mixed causal and noncausal
models. The GCov estimator is a semi-parametric method that minimizes an
objective function without making any assumptions about the error distribution
and is based on nonlinear autocovariances to identify the causal and noncausal
orders. When the number and type of nonlinear autocovariances included in the
objective function of a GCov estimator is insufficient/inadequate, or the error
density is too close to the Gaussian, identification issues can arise. These
issues result in local minima in the objective function, which correspond to
parameter values associated with incorrect causal and noncausal orders. Then,
depending on the starting point and the optimization algorithm employed, the
algorithm can converge to a local minimum. The paper proposes the use of the
Simulated Annealing (SA) optimization algorithm as an alternative to
conventional numerical optimization methods. The results demonstrate that SA
performs well when applied to mixed causal and noncausal models, successfully
eliminating the effects of local minima. The proposed approach is illustrated
by an empirical application involving a bivariate commodity price series. | Gianluca Cubadda, Francesco Giancaterini, Alain Hecq, Joann Jasiak | 2023-06-26T12:46:24Z | http://arxiv.org/abs/2306.14653v3 | # Optimization of the Generalized Covariance Estimator in Noncausal Processes
###### Abstract
This paper investigates the performance of the Generalized Covariance estimator (\(GCov\)) in estimating mixed causal and noncausal Vector Autoregressive (VAR) models. The \(GCov\) estimator is a semi-parametric method that minimizes an objective function without making any assumptions about the error distribution and is based on nonlinear autocovariances to identify the causal and noncausal orders of the mixed VAR. When the number and type of nonlinear autocovariances included in the objective function of a \(GCov\) estimator is insufficient/inadequate, or the error density is too close to the Gaussian, identification issues can arise, resulting in local minima in the objective function of the estimator at parameter values associated with incorrect causal and noncausal orders. Then, depending on the starting point, the optimization algorithm may converge to a local minimum, leading to inaccurate estimates. To circumvent this issue, the paper proposes the use of the Simulated Annealing (SA) optimization algorithm as an alternative to conventional numerical optimization methods. The results demonstrate that the SA optimization algorithm performs effectively when applied to multivariate mixed VAR models, successfully eliminating the effects of local minima. The approach is illustrated by simulations and an empirical application of a bivariate mixed VAR model with commodity price series.
keywords: Multivariate causal and noncausal models, Generalized covariance estimator, Simulated Annealing, Optimization, Commodity prices. _JEL:_ C32
## 1 Introduction
The causal vector autoregressive (VAR) model was introduced by Sims (1980) as an alternative to simultaneous equation models for macroeconomic variables. The VAR model explains the current value of a vector of macroeconomic variables as a function of their past values and is commonly used by the NBER, central banks, and European institutions.
Let us consider an extension of Sim's model to a \(n\)-dimensional strictly stationary VAR of order \(p\), denoted VAR(\(p\)):
\[Y_{t}=\Theta_{1}Y_{t-1}+\cdots+\Theta_{p}Y_{t-p}+u_{t}, \tag{1}\]
where \(Y_{t}=(y_{1,t},\ldots,y_{n,t})^{\prime}\) is a \((n\times 1)\) vector, \(\Theta_{i}\) are \((n\times n)\) coefficient matrices, and \(u_{t}\) is a vector \((n\times 1)\) of \(i.i.d.\) error terms with a non-Gaussian distribution, mean of zero and \(Var(u_{t})=\Sigma_{u}\). Whenever the process in (1) is characterized by roots outside the unit circle, that is \(\det\Theta(z)\neq 0\) for \(|z|\leq 1\), where \(\Theta(z)=I_{n}-\sum_{j=1}^{p}\Theta_{j}z^{j}\), then, there exists a strictly stationary purely causal solution to equation (1) with the following one-sided moving average representation:
\[Y_{t}=\Psi(L)u_{t}=\sum_{j=0}^{\infty}\Psi_{j}u_{t-j}, \tag{2}\]
where \(\Psi_{0}=I_{n}\), and \(\Psi_{j}\) converge to zero at a geometric rate as \(j\to\infty\). In this case, the error term \(u_{t}\) is defined as \(Y_{t}\)-fundamental since it only depends on the present and past values of \(Y_{t}\) (Alessi et al. (2008)). When some of the roots of \(\det\Theta(z)=0\) are inside the unit circle, the VAR stated in (1) is defined as noncausal (Lanne and Saikkonen (2011)). According to Alessi et al. (2008), this may occur because the model developed from the econometric theory is, by definition, nonfundamental, or because the econometricians do not observe some relevant variables, which are thus not included in the analysis (Blanchard and Quah (1988), Lippi and Reichlin (1993), Fernandez-Villaverde et al. (2007)). In the presence of a noncausal component, the moving average polynomial \(\Psi(L)\) of order infinity in the lag operator \(L\) involves both positive and negative powers of \(L\), implying the dependence of \(Y_{t}\) also on future errors (see Alessi et al. (2008), and Lanne and Saikkonen (2013)). Then, the assumption of a non-Gaussian distribution of serially i.i.d. errors is crucial for the identification of the model dynamics because it allows distinguishing between the roots inside and outside the unit circle (with \(\det\Theta(z)\neq 0\) for \(|z|=1\)) of the mixed causal and noncausal VAR process. In practice, the VAR process (1) with noncausal components exhibits nonlinear dynamics, such as local trends (bubbles) and conditional heteroscedasticity observed in the time series of commodity (oil) prices and cryptocurrency rates, for example. Moreover, when the roots inside the unit circle are considered, the matrix polynomial on the right-hand side of equation (1) no longer represents the conditional expectation of the linear model. Indeed, in the context of noncausality, the error term \(u_{t}\) is not orthogonal to the lags of \(Y_{t}\) (see Lanne and Saikkonen (2013), Hecq et al. (2016), Cubadda et al. (2019), Gourieroux and Jasiak (2017), and Gourieroux and Jasiak (2022b)).
Two strategies have been developed to estimate multivariate mixed causal and noncausal models expressed as in (1) with roots inside and/or outside the unit circle: the Approximate Maximum Likelihood Estimator (MLE) (see Davis and Song (2020)) and the Generalized Covariance estimator (see Gourieroux and Jasiak (2017) and Gourieroux and Jasiak (2022a)). This paper focuses on the Generalized Covariance estimator (\(GCov\)), which is a semi-parametric estimator
designed to minimize an objective function without requiring distributional assumptions on the error term (see Gourieroux and Jasiak (2017) and Gourieroux and Jasiak (2022a)). The objective function involves the nonlinear autocovariances, i.e. the autocovariances of nonlinear functions of model errors.
Our study shows that local minima can appear in the objective function of the GCov estimator when identification issues arise due to a difficulty in distinguishing the causal and noncausal dynamics because either the number of nonlinear autocovariances is insufficient, or the nonlinear transformations are inadequate, or the error density is too close to the Gaussian distribution. These identification problems lead to local minima, to which the optimization algorithm can converge, depending on the starting point. This can occur in the application of the GCov estimator to both univariate and multivariate noncausal processes (however, the main focus of this paper is on the multivariate framework). Therefore, selecting a suitable optimization algorithm and appropriate initial values is crucial to ensure successful convergence. It is worth noting that this problem also arises when alternative parametric estimators are employed to estimate noncausal processes (see respectively Bec et al. (2020) and Hecq and Velasquez-Gaviria (2022) for the MLE and the spectral identification of univariate models). We investigate in this paper the performance of the Simulated Annealing optimization algorithm, which facilitates the choice of the initial point. We find that the SA algorithm is useful and convenient for the semi-parametric GCov-based estimation of multivariate mixed causal and noncausal VAR models.
The paper is organized as follows. Section 2 discusses the mixed (causal-noncausal) VAR model. Section 3 introduces the \(GCov\) estimator. Section 4 shows that its objective function can admit local minima under some conditions. Section 5 suggests the use of the Simulated Annealing to overcome the issue of local minima and optimize the choice of initial values. Section 6 investigates some commodity price series. Section 7 concludes.
## 2 Model representation
This section reviews the univariate and multivariate mixed causal-noncausal models that have been considered in the literature. In particular, it aims at providing an overview of their key characteristics, and alternative specifications.
Univariate mixed causal and noncausal models for a strictly stationary series \(y_{t},t=1,...T\) were introduced by Breidt et al. (1991) as autoregressive models with the feature of having their roots both inside and outside the unit circle:
\[\phi^{+}(L)\phi^{-}(L)y_{t}=\eta_{t}, \tag{3}\]
where \(\eta_{t}\) is serially \(i.i.d.\) and non-Gaussian. In particular, \(\phi^{+}(L)=1-\phi_{1}^{+}L\cdots-\phi_{r}^{+}L^{r}\), also called the causal polynomial, is characterized by \(r\) roots outside the unit circle, and \(\phi^{-}(L)=1-\phi_{1}^{-}L\cdots-\phi_{s}^{-}L^{s}\), denoted as the noncausal polynomial, by \(s\) roots inside the unit circle (with \(r+s=p\)). We assume that \(y_{t}\) has zero mean to simplify the notation. Although representation (3) leads to
locally explosive behaviors, the series does not globally diverge because of the correlation between the error term \(\eta_{t}\) and the noncausal polynomial's explanatory variable, \(y_{t+s}\) (see Breidt et al. (1991), and Lanne and Saikkonen (2011)).
Lanne and Saikkonen (2011) write the noncausal polynomial as a lead polynomial, resulting in an alternative representation of univariate mixed causal and noncausal models, known as MAR(\(r\),\(s\)), such that:
\[\phi^{+}(L)\varphi(L^{-1})y_{t}=\varepsilon_{t}, \tag{4}\]
where \(\varepsilon_{t}\) is \(i.i.d.\) and non-Gaussian, and \(\varphi(L^{-1})=1-\varphi_{1}L^{-1}-\cdots-\varphi_{s}L^{-s}\) represents the noncausal component. This alternative representation has the advantage that both backward and forward polynomials are now characterized by roots outside the unit circle (see Hecq et al. (2016), Gourieroux and Jasiak (2016), Gourieroux and Jasiak (2018), Hecq and Voisin (2021), Giancaterini and Hecq (2022)):
\[\phi^{+}(z)\neq 0\ \ and\ \ \varphi(z)\neq 0,\ \ for\ \ |z|\leq 1.\]
Note that when \(\phi_{1}=\cdots=\phi_{r}=0\) the process in (4) is defined as purely noncausal (MAR(0,\(s\))); on the other hand, when \(\varphi_{1}=\cdots=\varphi_{s}=0\) it is purely causal (MAR(\(r\),0)).
Lanne and Saikkonen (2013) propose the multivariate VMAR(\(r\),\(s\))
\[\Phi(L)\Pi(L^{-1})Y_{t}=\epsilon_{t}, \tag{5}\]
where \(\Phi(L)\) and \(\Pi(L^{-1})\) are matrix polynomials of order \(r\) and \(s\) respectively. Both polynomials are characterized by roots outside the unit circle:
\[det\Phi(z)\neq 0\ \ \text{and}\ det\Pi(z)\neq 0,\ \ \text{for}\ \ |z|\leq 1, \tag{6}\]
and \(\epsilon_{t}\) is a sequence of \(i.i.d.\) random non-Gaussian (\(n\times 1\)) vectors. However, the multivariate representation in (5) differs from the univariate case in two main respects. The first one is that matrix multiplication is not commutative, and hence \(\bar{\Pi}(L^{-1})\bar{\Phi}(L)Y_{t}=\bar{\epsilon}_{t}\), which is observationally equivalent, provides different coefficient matrices from (5). The second point about the multivariate noncausal representation (5) is that, unlike the univariate case, it cannot always be represented by the product of lead and lag polynomials (see Lanne and Saikkonen (2013), Davis and Song (2020), and Gourieroux and Jasiak (2017)).
These two issues have motivated Davis and Song (2020), Gourieroux and Jasiak (2017), Gourieroux and Jasiak (2022a), and Gourieroux and Jasiak (2022b) to consider a VAR(\(p\)) expressed as in (1). In this representation, a process is classified as purely causal if the eigenvalues of the companion-form matrix of the process (1), of dimension \(np\times np\), lie inside the unit circle. Equivalently, if the roots of the characteristic equation \(|I_{n}-\Theta_{1}z-\cdots-\Theta_{p}z^{p}|=0\) in (1) are located outside the unit circle, it indicates a purely causal process. On the other hand, if the eigenvalues lie outside the unit circle or the roots of the characteristic equation are inside the unit circle, the process is considered purely noncausal.
Finally, it is identified as mixed causal and noncausal when the roots, or eigenvalues, lie both inside and outside the unit circle.
Starting from Section 3, we use the notation VAR\((n_{1},n_{2},p)\) to represent a mixed VAR\((p)\) model with \(n_{1}\) roots outside and \(n_{2}\) roots inside the unit circle. Additionally, we define purely causal and non-causal processes as VAR\((n_{1},0,p)\) and VAR\((0,n_{2},p)\), respectively.
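As a concrete illustration of this classification, the following sketch (our own, not code from the cited papers) forms the companion matrix of a VAR(\(p\)) and counts the eigenvalues inside and outside the unit circle to recover \(n_{1}\) and \(n_{2}\):

```python
import numpy as np

def classify_var(theta_list, tol=1e-8):
    """Return (n1, n2) for a VAR(p): n1 eigenvalues of the companion matrix inside the
    unit circle (roots of det Theta(z) outside), n2 eigenvalues outside (roots inside)."""
    n, p = theta_list[0].shape[0], len(theta_list)
    companion = np.zeros((n * p, n * p))
    companion[:n, :] = np.hstack(theta_list)      # first block row: [Theta_1, ..., Theta_p]
    companion[n:, :-n] = np.eye(n * (p - 1))      # shifted identity blocks below
    eig = np.linalg.eigvals(companion)
    n1 = int(np.sum(np.abs(eig) < 1 - tol))
    n2 = int(np.sum(np.abs(eig) > 1 + tol))
    return n1, n2

# Example: the diagonal matrix diag(0.5, 2) used later in Section 4.1 is a mixed VAR(1,1,1).
print(classify_var([np.diag([0.5, 2.0])]))        # -> (1, 1)
```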
## 3 Generalized covariance estimator
This section describes the two versions of the \(GCov\) estimator introduced respectively in Gourieroux and Jasiak (2017) and Gourieroux and Jasiak (2022a). Let us consider a mixed strictly stationary \(n-dimensional\) VAR\((n_{1},n_{2},1)\) process:
\[Y_{t}=\Theta Y_{t-1}+u_{t}, \tag{7}\]
where the error term \(u_{t}\) is a sequence of \(i.i.d.\) non-Gaussian random vectors of dimension \(n\times 1\). We also assume that \(u_{t}\) are square-integrable, with \(E(u_{t})=0\), and a variance-covariance matrix \(V(u_{t})=\Sigma\). The existence of a strictly stationary solution of (7), as well as the two-sided moving average representation of \(Y\), is provided in Gourieroux and Jasiak (2017).
**Representation theorem** ( _Gourieroux and Jasiak (2017)_): When a \(n\)-dimensional VAR\((n_{1},n_{2},1)\) is considered, there exists an invertible \((n\times n)\) real matrix \(A\), and two square real matrices: \(J_{1}\) of dimension \((n_{1}\times n_{1})\) and \(J_{2}\) of dimension \((n_{2}\times n_{2})\) such that all eigenvalues of \(J_{1}\) (resp. \(J_{2}\)) are those of \(\Theta\) with a modulus strictly less (resp. larger) than 1, and such that:
\[Y_{t}=A_{1}Y_{1,t}^{*}+A_{2}Y_{2,t}^{*} \tag{8}\]
\[Y_{1,t}^{*}=J_{1}Y_{1,t-1}^{*}+u_{1,t}^{*},\quad Y_{2,t}^{*}=J_{2}^{-1}Y_{2,t+ 1}^{*}-J_{2}^{-1}u_{2,t+1}^{*} \tag{9}\]
\[Y_{1,t}^{*}=A^{1}Y_{t},\quad Y_{2,t}^{*}=A^{2}Y_{t} \tag{10}\]
\[u_{1,t}^{*}=A^{1}u_{t},\quad u_{2,t}^{*}=A^{2}u_{t} \tag{11}\]
where \([A_{1},A_{2}]=A\) and \([A^{1^{\prime}},A^{2^{\prime}}]^{\prime}=A^{-1}\).
From equation (9), we can see that \(Y_{1,t}^{*}\) and \(Y_{2,t}^{*}\) are, respectively, purely causal and purely noncausal processes. Hence, these two components can be interpreted as the causal and noncausal components of the process \(Y_{t}\).
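To make this decomposition concrete, the following simulation sketch (our own notation and function name; Student's-\(t\) errors are the ones used in the Monte Carlo designs of Section 4) builds a bivariate VAR\((1,1,1)\) by running the causal component forward and the noncausal component backward, and then mapping back through \(A\):

```python
import numpy as np

def simulate_mixed_var1(A, j1, j2, T=1000, df=4, burn=200, seed=0):
    """Simulate Y_t = Theta Y_{t-1} + u_t with Theta = A diag(j1, j2) A^{-1},
    |j1| < 1 (causal block) and |j2| > 1 (noncausal block)."""
    rng = np.random.default_rng(seed)
    n_tot = T + 2 * burn
    u = rng.standard_t(df, size=(n_tot, 2))       # i.i.d. Student-t errors
    u_star = u @ np.linalg.inv(A).T               # u*_t = A^{-1} u_t
    y1 = np.zeros(n_tot)                          # causal component: forward recursion in (9)
    for t in range(1, n_tot):
        y1[t] = j1 * y1[t - 1] + u_star[t, 0]
    y2 = np.zeros(n_tot)                          # noncausal component: backward recursion in (9)
    for t in range(n_tot - 2, -1, -1):
        y2[t] = (y2[t + 1] - u_star[t + 1, 1]) / j2
    Y = np.column_stack([y1, y2]) @ A.T           # Y_t = A_1 Y*_{1,t} + A_2 Y*_{2,t}
    return Y[burn:-burn]                          # discard both ends of the sample

# Example with the design used later in equation (19): A = [[0.3, 0.7], [0.4, -1]], j1 = 0.4, j2 = 2.
Y = simulate_mixed_var1(np.array([[0.3, 0.7], [0.4, -1.0]]), j1=0.4, j2=2.0)
```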
In addition, Gourieroux and Jasiak (2017) show that:
\[\Gamma_{12}^{*}=Cov(Y_{1,t}^{*},Y_{2,t-h}^{*})=0\ for\ h\leq 0. \tag{12}\]
Exploiting the moment condition (12) may yield a suitable VAR\((n_{1},n_{2},p)\) estimator. However, distinguishing between mixed models \((n_{1},n-n_{1})\) and \((n-n_{1},n_{1})\) based solely on linear second-order moments of the component series of \(Y_{t}\) is not possible (Gourieroux and Jasiak (2017)). In this regard, nonlinear
covariance-based conditions can be used to identify the model, as long as the error terms \(u_{t}\) are serially independent (Chan et al. (2006)). This led to the development of the \(Gcov17\) and \(Gcov22\) estimators by Gourieroux and Jasiak (2017) and Gourieroux and Jasiak (2022a), respectively. In particular, the estimator \(Gcov22\) aims to minimize
\[\hat{\Theta}=\operatorname*{argmin}_{\Theta}\sum_{h=1}^{H}Tr\big{[}\hat{\Gamma} (h)\hat{\Gamma}(0)^{-1}\hat{\Gamma}(h)^{\prime}\hat{\Gamma}(0)^{-1}\big{]}, \tag{13}\]
where \(H\) is the highest selected lag, \(\hat{\Gamma}(h)\) is the sample autocovariance between \(a_{j}(Y_{t}-\Theta Y_{t-1})\) and \(a_{k}(Y_{t-h}-\Theta Y_{t-h-1})\), and \(a_{j},a_{k}\), \(j,k=1,\ldots,K\), are element-by-element functions which depend on the series of interest. In particular, \(a_{k}\) are linear and nonlinear transformations of the error term, and they have to be twice continuously differentiable with respect to \(\Theta\). For the asymptotic normality of \(GCov22\), finite fourth moments of \(a(u_{t})\) are needed (see Gourieroux and Jasiak (2022a)).
The matrix in (13) is diagonalizable, with a trace equal to the sum of its eigenvalues, which are the squares of the canonical correlations between \(a_{j}(Y_{t}-\Theta Y_{t-1})\) and \(a_{k}(Y_{t-h}-\Theta Y_{t-h-1})\).
On the other hand, the estimate provided by \(Gcov17\) minimizes:
\[\hat{\Theta}=\operatorname*{argmin}_{\Theta}\sum_{h=1}^{H}Tr\big{[}\hat{\Gamma }(h)diag(\hat{\gamma}(0))^{-1}\hat{\Gamma}(h)^{\prime}diag(\hat{\gamma}(0))^{- 1}\big{]}, \tag{14}\]
where \(diag(\hat{\gamma}(0))\) is the matrix with only the diagonal elements of \(\hat{\Gamma}(0)\). The objective functions (13) and (14) are multivariate portmanteau tests statistics, examined in Cubadda and Hecq (2011).
Note that for \(VAR(n_{1},n_{2},p)\) processes with \(p\geq 2\), we can use the companion matrix to rewrite the process as \(VAR(n_{1},n_{2},1)\) so that the estimators (13) and (14) can easily be applied to the new representation of the model.
The \(GCov\) estimator (13) is consistent and asymptotically normally distributed when the 4th-order moments of \(a(u_{t})\) are finite. It is also semi-parametrically efficient because of the optimal choice of weights \(\widehat{\Gamma}(0)^{-1}\) in the objective function (13). Moreover, it is a one-step estimator.
The \(GCov\) estimator (14) has similar properties except for semi-parametric efficiency. Its advantage lies in applications with high-dimensional matrices, when \(K\) and \(n\) are large, because the objective function (14) does not require the inversion of the \(\hat{\Gamma}(0)\) matrix.
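For readers who want to experiment with the criterion, a minimal sample version of the \(Gcov22\) objective (13) for a VAR(1) is sketched below; the function name and the simple polynomial transformations (powers one to four of each residual, i.e., the set labelled T1 in Section 4) are our own choices rather than code from the cited papers:

```python
import numpy as np

def gcov22_objective(theta_vec, Y, H=10, powers=(1, 2, 3, 4)):
    """Sample GCov objective (13) for a VAR(1): residuals u_t = Y_t - Theta Y_{t-1},
    element-by-element transforms a_k(u) = u_i^q, and sum over h = 1..H of
    Tr[Gamma(h) Gamma(0)^{-1} Gamma(h)' Gamma(0)^{-1}]."""
    n = Y.shape[1]
    theta = np.asarray(theta_vec).reshape(n, n)
    u = Y[1:] - Y[:-1] @ theta.T                  # residuals u_t(Theta)
    a = np.column_stack([u[:, i] ** q for i in range(n) for q in powers])
    a = a - a.mean(axis=0)                        # demeaned transforms, shape (T-1, K)
    T = a.shape[0]
    G0_inv = np.linalg.inv(a.T @ a / T)           # Gamma(0)^{-1}
    value = 0.0
    for h in range(1, H + 1):
        Gh = a[h:].T @ a[:-h] / T                 # Gamma(h) between a(u_t) and a(u_{t-h})
        value += np.trace(Gh @ G0_inv @ Gh.T @ G0_inv)
    return value
```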
## 4 Finite sample performances of Gcov22
In this section, we investigate the behavior of the objective function associated with the \(GCov\) estimator when applied to a 2-dimensional mixed VAR\((n_{1},n_{2},1)\)
model as described in (7). The underlying assumption is that the population matrix \(\Theta_{0}\) has one eigenvalue (\(j_{1}\)) inside the unit circle and another eigenvalue (\(j_{2}\)) outside it. The error terms \(u_{t}=(u_{1,t},u_{2,t})^{\prime}\) are serially \(i.i.d.\) and follow a multivariate Student's-\(t\) distribution with \(\nu=4\) degrees of freedom, and a variance-covariance matrix \(\Sigma=\nu/(\nu-2)I_{2}\). We focus on reporting results for the \(Gcov22\) estimator, as both \(Gcov22\) and \(Gcov17\) perform similarly when \(\Sigma\) is diagonal. However, it is worth noting that \(Gcov22\) exhibits greater effectiveness than \(Gcov17\) when the assumption of a diagonal covariance matrix is relaxed, as it considers \(\Gamma(0)\) as a whole.
Our analysis consists of two cases. In Case 1, in addition to considering a diagonal variance-covariance matrix \(\Sigma\) (i.e., errors contemporaneously cross-sectionally independent), we also take into account a diagonal autoregressive matrix \(\Theta_{0}\) in (7). This setup allows us to evaluate the estimator's performance in both univariate and multivariate scenarios. By examining the objective function of \(Gcov22\) with respect to \(\theta_{11}\) (while keeping the other elements of the matrix constant at their true values), we can analyze its behavior when applied to the purely causal univariate process \(y_{1,t}=\theta_{11}y_{1,t-1}+u_{t}\). Similarly, studying the objective function of \(Gcov22\) with respect to \(\theta_{22}\) (while holding the other elements constant at their true values) provides valuable insights into its performance with the purely noncausal process \(y_{2}\). On the other hand, to assess the estimator's performance in the multivariate setting, we conduct Monte Carlo experiments and calculate the empirical density function of matrix \(\Theta\), considering various initial values for the optimization problem.
In Case 2, while we maintain the assumption of contemporaneously cross-sectionally independent errors, we relax the assumption of a diagonal population matrix \(\Theta_{0}\). We proceed with a similar analysis to explore the implications of this relaxation.
Finally, for each case, we consider two DGPs. In one DGP, the eigenvalues of the population matrix for the causal and noncausal components are significantly smaller and larger than the modulus of 1, respectively. In the other DGP, the eigenvalues are slightly smaller and larger than the modulus of 1.
### Case 1: Analysis of a 2-dimensional process with independent variables
#### 4.1.1 Presence of Bimodality Issue
We examine a \(2-dimensional\) mixed VAR(1,1,1) process, denoted as (7), where the cleaned \(Y_{t}\) components \(y_{1,t}\) and \(y_{2,t}\) are purely causal and noncausal processes, respectively. Specifically, the autoregressive matrix exhibits a Jordan decomposition with the following coefficient matrices:
\[\begin{split}\Theta_{0}&=AJA^{-1}\\ &=\begin{bmatrix}2.1&0\\ 0&1.5\end{bmatrix}\begin{bmatrix}0.5&0\\ 0&2\end{bmatrix}\begin{bmatrix}2.1&0\\ 0&1.5\end{bmatrix}^{-1}\\ &=\begin{bmatrix}0.5&0\\ 0&2\end{bmatrix}.\end{split} \tag{15}\]
It is important to emphasize, based on Proposition 4 in Gourieroux and Jasiak (2017), that distinguishing the true bivariate mixed process from either the purely causal process (with eigenvalues \(j_{1}\) and \(j_{2}^{-1}\)) or the purely noncausal process (with eigenvalues \(j_{1}^{-1}\) and \(j_{2}\)) solely based on the linear autocovariance function becomes impossible when both \(\Sigma\) and \(\Theta_{0}\) are diagonal. We denote the matrix with eigenvalues \(j_{1}\) and \(j_{2}^{-1}\) as the causal counterpart of \(\Theta\) and the matrix with eigenvalues \(j_{1}^{-1}\) and \(j_{2}\) as the noncausal counterpart of the population matrix. Furthermore, as previously stated, the diagonality of the two matrices allows us to investigate how the estimator performs in the univariate framework, i.e. when expressed as a function of \(\theta_{11}\) (or \(\theta_{22}\)). Figure 1 shows the one-dimensional \(Gcov22\) objective function with \(T=1000\) observations, \(H=10\) in (13), and the following nonlinear transformations (\(a_{k}\)):
* T1: \(a_{1}(u)\)=\(u_{1}\), \(a_{2}(u)\)=\(u_{2}\), \(a_{3}(u)\)=\(u_{1}^{2}\), \(a_{4}(u)\)=\(u_{2}^{2}\), \(a_{5}(u)\)=\(u_{1}^{3}\), \(a_{6}(u)\)=\(u_{2}^{3}\), \(a_{7}(u)\)=\(u_{1}^{4}\), \(a_{8}(u)\)=\(u_{2}^{4}\);
* T2: \(a_{1}(u)\)=\(u_{1}\), \(a_{2}(u)\)=\(u_{2}\), \(a_{3}(u)\)=\(log(u_{1}^{2})\), \(a_{4}(u)\)=\(log(u_{2}^{2})\)
* T3: \(a_{1}(u)\)=\(sign(u_{1})\), \(a_{2}(u)\)=\(sign(u_{2})\), \(a_{3}(u)\)=\(u_{1}^{2}\), \(a_{4}(u)\)=\(u_{2}^{2}\),
* T4: \(a_{1}(u)\)=\(sign(u_{1})\), \(a_{2}(u)\)=\(sign(u_{2})\), \(a_{3}(u)\)=\(log(u_{1}^{2})\), \(a_{4}(u)\)=\(log(u_{2}^{2})\).
Our findings indicate that the objective function of \(Gcov22\) displays local minima when applied to the univariate framework. Specifically, we observe that for the causal process \(y_{1}\), the objective function exhibits a local minimum that corresponds to its noncausal counterpart \(j_{1}^{-1}\) (or \(\theta_{11}^{-1}\)), while for the noncausal process \(y_{2}\), the objective function has a local minimum that corresponds to its causal counterpart \(j_{2}^{-1}\) (or \(\theta_{22}^{-1}\)). The reason behind this is that, as mentioned in Section 3, also in the univariate framework, it is not possible to distinguish between a pure causal representation and a pure noncausal representation based solely on knowledge of the linear second-order moment of \(Y\). To accurately determine the correct specification, it is necessary to consider nonlinear second-order moments. The \(GCov\) estimator effectively leverages this characteristic. However, it is crucial to note that if the number of nonlinear autocovariances considered is insufficient or if the chosen nonlinearities are inadequate, it may result in the presence of local minima during the estimation process. Finally, Figure 1 shows that T1 is the nonlinear transformation that better performs in amplifying the difference between the global and local minima, improving the estimator accuracy in identifying the process.
To gain insights into the performance of the estimator in the multivariate setting, we conduct three Monte Carlo experiments. In each experiment, we compute the empirical density function of \(\Theta\) (\(\hat{\Theta}\)) using a different initial point for the optimization problem. We consider \(T=1000\) observations and \(N=1000\) replications. Furthermore, we discard the initial and final \(10\%\) of observations from each simulated time series and set \(H=10\) in equation (13). Finally, we employ the Broyden Fletcher Goldfarb Shanno (BFGS) numerical optimization algorithm. The BFGS algorithm is a well-known deterministic optimization
method that approximates the inverse Hessian of the objective function to locate the minimum. The algorithm starts with an initial estimate of the minimum and gradually improves it using gradient information and an approximation of the inverse Hessian matrix. We also implemented other conventional numerical optimization algorithms in our simulation studies, such as Nelder-Mead, the conjugate gradient method, and limited-memory BFGS. However, since they provide similar results, these results are available only upon request.
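The following sketch reproduces the spirit of this experiment for a single simulated sample; it reuses the hypothetical `simulate_mixed_var1` and `gcov22_objective` helpers sketched earlier and relies on SciPy's BFGS routine, so it is an illustration of the starting-point dependence rather than the authors' exact code:

```python
import numpy as np
from scipy.optimize import minimize

# Case 1 design of equation (15): A = diag(2.1, 1.5), eigenvalues j1 = 0.5 and j2 = 2.
A = np.diag([2.1, 1.5])
Y = simulate_mixed_var1(A, j1=0.5, j2=2.0, T=1000, seed=1)

starts = {
    "Theta_C  (both roots causal)":    np.diag([0.5, 0.5]),   # equation (16)
    "Theta_0  (true mixed matrix)":    np.diag([0.5, 2.0]),   # equation (15)
    "Theta_NC (both roots noncausal)": np.diag([2.0, 2.0]),   # equation (17)
}
for name, theta_start in starts.items():
    res = minimize(gcov22_objective, theta_start.ravel(), args=(Y,), method="BFGS")
    eig = np.abs(np.linalg.eigvals(res.x.reshape(2, 2)))
    print(f"{name}: |eigenvalues| = {np.round(eig, 2)}, objective = {res.fun:.4f}")
```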
Figure 2 shows the density function of \(\hat{\Theta}\) when the causal counterpart of \(\Theta_{0}\) (\(\Theta_{C}\)) is used as the initial point for the optimization problem, with:
\[\Theta_{C}=\begin{bmatrix}j_{1}&0\\ 0&j_{2}^{-1}\end{bmatrix}=\begin{bmatrix}0.5&0\\ 0&0.5\end{bmatrix}. \tag{16}\]
Figure 3 shows the estimator's performance when \(\Theta_{0}\) is selected as the starting point. Figure 4 shows the density function of \(\hat{\Theta}\) when the noncausal counterpart of \(\Theta_{0}\) (\(\Theta_{NC}\)) is used as the initial point:
\[\Theta_{NC}=\begin{bmatrix}j_{1}^{-1}&0\\ 0&j_{2}\end{bmatrix}=\begin{bmatrix}2&0\\ 0&2\end{bmatrix}. \tag{17}\]
Our findings reveal that when \(\Theta_{C}\) is selected as the initial point, the empirical density function of \(\hat{\Theta}\) shows two peaks (Figure 2). The shorter peak corresponds to the population matrix \(\Theta_{0}\), while the higher peak corresponds to its causal counterpart \(\Theta_{C}\). Therefore, the process is often identified as purely causal in this framework (see Table 1). On the other hand, selecting \(\Theta_{NC}\) as the initial point also yields two peaks in the empirical density function of \(\hat{\Theta}\). The smaller one remains associated with \(\Theta_{0}\), but the higher peak is now associated with \(\Theta_{NC}\). As a result, Table 1 indicates that the process is often identified as purely noncausal under these conditions. Finally, selecting \(\Theta_{0}\) as the starting point for the optimization problem leads to a significant improvement in the estimator's performance. The numerical algorithm BFGS can now capture the global minimum, resulting in the disappearance of any local minimum.
The findings reveal that the objective function domain of the _Gcov_22 estimator consists of three sets, characterized by the matrices yielding specific roots:
* Set 1: Matrices satisfying \(det\Theta(z)\neq 0\) for \(|z_{1}|\leq 1\) and \(|z_{2}|\leq 1\);
* Set 2: Matrices satisfying \(det\Theta(z)\neq 0\) for \(|z_{1}|\geq 1\) and \(|z_{2}|\geq 1\);
* Set 3: Matrices satisfying \(det\Theta(z)\neq 0\) for \(|z_{1}|\leq 1\) and \(|z_{2}|\geq 1\), or \(|z_{1}|\geq 1\) and \(|z_{2}|\leq 1\).
Each set contains a matrix that minimizes the objective function when considering linear transformations of the autocovariance, that is, \(\Theta_{C}\), \(\Theta_{0}\), and \(\Theta_{NC}\). However, when nonlinear transformations of the autocovariance of the error term are considered in (13), only \(\Theta_{0}\) represents the global minimum, while the causal and noncausal counterparts of \(\Theta_{0}\) act as local minima. Indeed, Figures
2-4 show that in the \(Gcov22\)'s domain set where \(\det\Theta(z)\neq 0\) for \(|z_{1}|\leq 1\) and \(|z_{2}|\leq 1\), the local minimum is associated with \(\Theta_{C}\). On the other hand, in the set where \(\det\Theta(z)\neq 0\) for \(|z_{1}|\geq 1\) and \(|z_{2}|\geq 1\), the local minimum is associated with \(\Theta_{NC}\). These findings show the \(GCov\) estimator's ability to capture matrices (\(\Theta_{C}\) and \(\Theta_{NC}\)) that produce the same autocovariance function as the underlying process. This remains true despite the estimator taking into account both linear and nonlinear transformations of the autocovariance of the error term. Consequently, local minima corresponding to these matrices arise.
In conclusion, if the optimization problem starts within a set where a local minimum occurs, it is likely that the numerical optimization algorithm will get trapped in that set and converge to the local minimum instead of the global one.
_Empirical density function of \(\Theta\) obtained when \(\Theta_{C}\) is selected as the starting point, and the population matrix \(\Theta_{0}\) is expressed as in (15). The black vertical lines in the figure represent the true values of the matrix \(\Theta_{0}\), while the red dashed lines correspond to the elements of the matrix \(\Theta_{C}\). Case 1 is considered._
Figure 3: Empirical density function of \(\Theta\) with \(\Theta_{0}\) as starting point
Figure 2: Empirical density function of \(\Theta\) with \(\Theta_{C}\) as starting point
#### 4.1.2 **The Absence of Bimodality Issue**
We proceed to investigate whether any local minima vanish when the eigenvalues of the population matrix for the causal and noncausal components are slightly smaller and larger than 1, respectively, while assuming cross-sectionally independent errors. As Figure 1 suggests, this scenario may reduce the global and local minima gap, prompting us to analyze if any local minima disappear. To achieve this goal, we will focus on the following population matrix:
\[\Theta_{0}^{*}=\begin{bmatrix}\theta_{11}^{*}&\theta_{12}^{*}\\ \theta_{21}^{*}&\theta_{22}^{*},\end{bmatrix}=\begin{bmatrix}0.85&0\\ 0&1.2\end{bmatrix}. \tag{18}\]
Even in this case, we derive the one-dimensional \(Gcov22\)'s objective function expressed as a function of \(\theta_{11}^{*}\) and \(\theta_{22}^{*}\), respectively, while holding the other elements of the same matrix constant and equal to \(\Theta_{0}^{*}\) (Figure 5). Finally, we conduct a new Monte Carlo experiment with the same inputs as the previous simulation studies and compute the empirical density function of \(\Theta^{*}\), when its causal counterpart (\(\Theta_{C}^{*}\)) is considered as the starting point for the optimization problem:
\[\Theta_{C}^{*}=\begin{bmatrix}0.85&0\\ 0&(1.2)^{-1}\end{bmatrix}\]
Figure 4: Empirical density function of \(\Theta\) with \(\Theta_{NC}\) as starting point
Figures 5 and 6 show the results. Furthermore, Table 1 displays the frequency with which the correct model is identified in the Monte Carlo experiment.1
Footnote 1: In the alternative case where the noncausal counterpart of \(\Theta_{0}^{*}\) is selected as the initial point for the optimization problem, similar results are obtained and are available upon request.
Our findings suggest that under this new DGP, the distance between the global and local minimums is not significant enough to give rise to the convergence of the algorithm to local minima. In other words, the reduced distance between the global and local minima facilitates the numerical algorithm in identifying the global minimum, even if initiated within a set where a local minimum occurs. This is true regardless of the nonlinear transformation \(a_{k}\) employed.
_The Gcov22's objective function is shown when expressed as a function of \(\theta_{11}^{*}\) and \(\theta_{22}^{*}\) while holding the other elements of the same matrix constant and equal to the population matrix \(\Theta_{0}^{*}\), expressed as in (18). The black, red, and green dashed vertical lines represent the coefficients of matrices \(\Theta_{0}^{*}\), \(\Theta_{C}^{*}\), and \(\Theta_{NC}^{*}\), respectively. Case 1 is considered._
Figure 5: Univariate \(Gcov22\)’s objective function
### Case 2
#### 4.2.1 Presence of Bimodality Issue
We now relax the assumption of independence between \(y_{1,t}\) and \(y_{2,t}\), and maintain the assumption of cross-sectional independent error term. Specifically, we consider the following non-diagonal autoregressive population matrix:
\[\Theta_{0}=AJA^{-1}=\begin{bmatrix}0.3&0.7\\ 0.4&-1\end{bmatrix}\begin{bmatrix}0.4&0\\ 0&2\end{bmatrix}\begin{bmatrix}0.3&0.7\\ 0.4&-1\end{bmatrix}^{-1}=\begin{bmatrix}1.17&-0.58\\ -1.10&1.23.\end{bmatrix} \tag{19}\]
This relaxation allows us now to leverage the second-order moment of the process to differentiate the true mixed process from its causal and noncausal counterpart (see Gourieroux and Jasiak (2017)).
Table 1: Dynamic captured by \(Gcov22\); Case 1

| PM | Initial value | \(VAR(2,0,1)\) | \(VAR(1,1,1)\) | \(VAR(0,2,1)\) |
| --- | --- | --- | --- | --- |
| \(\Theta_{0}\) | \(\Theta_{C}\) | 87.6% | 12% | 0.4% |
| \(\Theta_{0}\) | \(\Theta_{0}\) | 0.3% | 99.3% | 0.4% |
| \(\Theta_{0}\) | \(\Theta_{NC}\) | 0% | 9.8% | 90.2% |
| \(\Theta_{0}^{*}\) | \(\Theta_{C}^{*}\) | 8.8% | 78.2% | 13.0% |

_The table illustrates the performance of \(Gcov22\) in estimating the dynamic of a 2-dimensional VAR(1,1,1). ‘PM’ denotes the population matrix. Case 1 is considered._
Figure 6: Empirical density function of \(\Theta^{*}\) with \(\Theta_{C}^{*}\) as starting point
In contrast to Section 4.1, analyzing the objective function in terms of \(\theta_{11}\) and \(\theta_{22}\) does not provide information about the univariate case due to the lack of independence between \(y_{1}\) and \(y_{2}\). Therefore, we directly perform three new Monte Carlo experiments and compute the empirical density functions of \(\Theta\) for different initial points in the optimization problem. We set \(T=1000\) observations and \(N=1000\) replications, discard the first and last 10% of observations in each simulated time series, and set \(H=10\) in (13). Finally, we consider the nonlinear transformation of the error term T1. The results are presented in Figures 7-9 and Table 2.
Figure 7 displays the results obtained using the OLS estimate of the process \(Y\) as the initial point. It has been shown in Gourieroux and Jasiak (2017) that the OLS estimator is not capable of capturing the noncausal component of the process, and it estimates a matrix with eigenvalues \(j_{1}\) and \(j_{2}^{-1}\). This is because the OLS estimator assumes Gaussian errors, in which case the causal and noncausal dynamics cannot be distinguished, and the model is not identifiable. Therefore, the OLS estimate of \(\Theta\) (\(\Theta_{OLS}\)) provides the causal counterpart of the matrix (19). Figure 8 shows the density function of \(\hat{\Theta}\) when the noncausal counterpart of \(\Theta\) is selected as the initial point. To obtain this matrix, it is necessary to estimate a VMAR(0,1) process by OLS and invert the estimated matrix. We denote this matrix as \(\widetilde{\Theta}\), which is characterized by \(j_{1}^{-1}\) and \(j_{2}\) as eigenvalues. Finally, Figure 9 presents the results when the initial value is \(\Theta_{MIX}\), a randomly selected matrix that lies in the same set as the global minimum (one root inside and one root outside the unit circle):
\[\begin{split}\Theta_{MIX}=AJA^{-1}=&\begin{bmatrix}-0.8&0.75\\ 0.46&-1\end{bmatrix}\begin{bmatrix}0.8&0\\ 0&1.2\end{bmatrix}\begin{bmatrix}-0.8&0.75\\ 0.46&-1\end{bmatrix}^{-1}\\ &=\begin{bmatrix}0.50&-0.53\\ 0.40&1.50\end{bmatrix}.\end{split} \tag{20}\]
The results reveal that when the optimization process starts at the parameter values corresponding to the reciprocals of the true eigenvalues of the autoregressive matrix, the maximization algorithm becomes stuck at a local minimum. Consequently, in practical applications, identification issues may emerge, and local minima manifest at parameter values associated with incorrect orders. Therefore, depending on the initial starting point, the GCov optimization algorithm might converge towards a local minimum. Indeed, similar to Case 1, the results show that when the optimization is initiated within the domain set of \(Gcov22\) where \(det\Theta(z)\neq 0\) for \(|z_{1}|\leq 1\) and \(|z_{2}|\leq 1\), the algorithm converges towards a local minimum associated with the matrix having eigenvalues \(j_{1}\) and \(j_{2}^{-1}\) (\(\Theta_{OLS}\)). In contrast, when the numerical optimization algorithm is initialized in the set where \(det\Theta(z)\neq 0\) for \(|z_{1}|\geq 1\) and \(|z_{2}|\geq 1\), the local minimum is associated with a matrix having eigenvalues \(j_{1}^{-1}\) and \(j_{2}\) (\(\widetilde{\Theta}\)). Finally, the elimination of convergence to local minima is observed when \(\Theta_{MIX}\) is chosen as the initial point. This finding emphasizes the importance of selecting a matrix that belongs to the same set as the population matrix. Table 2 further confirms these results.
_Empirical density function of \(\Theta\) obtained when \(\Theta_{C}\) is selected as the starting point, and the population matrix \(\Theta_{0}\) is expressed as in (19). The black vertical lines in the figure represent the true values of the matrix \(\Theta_{0}\), while the red dashed lines correspond to the elements of the matrix \(\Theta_{OLS}\). Case 2 is considered._
Figure 8: Empirical density function of \(\Theta\) with \(\widetilde{\Theta}\) as starting point
Figure 7: Empirical density function of \(\Theta\) with \(\Theta_{OLS}\) as starting point
#### 4.2.2 **The Absence of Bimodality Issue**
We proceed to examine whether local minima vanish when the causal and noncausal eigenvalues are marginally smaller and larger than one, respectively, even when the processes \(y_{1,t}\) and \(y_{2,t}\) are not independent. Furthermore, we maintain the assumption of cross-sectional independent errors. To this end, we consider the following autoregressive matrix:
\[\Theta_{0}^{*}=AJA^{-1}=\begin{bmatrix}0.3&0.7\\ 0.4&-1\end{bmatrix}\begin{bmatrix}0.85&0\\ 0&1.2\end{bmatrix}\begin{bmatrix}0.3&0.7\\ 0.4&-1\end{bmatrix}^{-1}=\begin{bmatrix}1.02&-0.13\\ -0.24&1.03,\end{bmatrix} \tag{21}\]
and implement a Monte Carlo simulation study considering the OLS estimate of \(\Theta^{*}\) (\(\Theta_{OLS}^{*}\)) as the initial value (see Figure 10 and Table 2). The results highlight that the density function of \(\hat{\Theta}^{*}\) is well-shaped around the true population matrix. In the alternative scenario where a matrix with both eigenvalues outside the unit circle is selected as the starting point, similar results are obtained and available upon request. Therefore, as also shown in Section 4.1, we conclude that in some cases, the distance between the global and local minima is insufficient to give rise to convergence to local minima. In other words, the small gap between the global and local minima makes it easier for the numerical algorithm to locate the global minimum, even if initiated within a set where a local minimum occurs.
Figure 9: Empirical density function of \(\Theta\) with \(\Theta_{MIX}\) as starting point
## 5 Simulated Annealing algorithm
In the previous section, we stressed the importance of selecting an initial point that belongs to the same set as the global minimum. Specifically, selecting an initial matrix with the same \(n_{1}\) and \(n_{2}\) as the population matrix is crucial for achieving successful convergence of the BFGS optimization algorithm. However, in practical applications, determining the number of roots that lie within and outside the unit circle of the population matrix can be challenging. Therefore, in this section, we use Simulated Annealing (SA) (Goffe et al. (1994)) to investigate whether it can find an initial point in the same set where the global minimum lies. If successful, this approach would allow the BFGS optimization algorithm to converge successfully.
Table 2: Dynamic captured by \(Gcov22\); Case 2

| PM | Initial value | \(VAR(2,0,1)\) | \(VAR(1,1,1)\) | \(VAR(0,2,1)\) |
| --- | --- | --- | --- | --- |
| \(\Theta_{0}\) | \(\Theta_{OLS}\) | 34.2% | 65.4% | 0.4% |
| \(\Theta_{0}\) | \(\widetilde{\Theta}\) | 0.2% | 15.3% | 84.5% |
| \(\Theta_{0}\) | \(\Theta_{MIX}\) | 0.1% | 91.7% | 8.2% |
| \(\Theta_{0}^{*}\) | \(\Theta_{OLS}^{*}\) | 6.6% | 90.2% | 3.2% |

_The table illustrates the performance of \(Gcov22\) in estimating the dynamic of a 2-dimensional VAR(1,1,1). ’PM’ denotes the population matrix. Case 2 is considered._
Figure 10: Empirical density function of \(\Theta^{*}\) with \(\Theta_{OLS}^{*}\) as starting point
SA is an optimization method that was inspired by the annealing process used in metallurgy. In metallurgy, materials are cooled gradually to eliminate any imperfections and achieve a more stable state. The algorithm starts at a high temperature (\(T^{o}\)) and gradually cools down over time to reduce the likelihood of getting stuck in a local minimum. Hence, in optimization problems, \(T^{o}\) is a parameter that controls the search space exploration during optimization. When \(T^{o}\) is high, the algorithm is more likely to accept worse solutions than the current one, enabling it to escape local optima and explore new areas of the search space. As \(T^{o}\) decreases, the algorithm is less likely to accept suboptimal solutions and converge toward the global optimum. However, if the cooling rate is too high, the algorithm may not be able to escape local minima (see Corana et al. (1987), Goffe et al. (1992), Goffe et al. (1994), and Goffe (1996)).
Let us now explain how the SA algorithm works when applied to the \(GCov\) estimator. We consider a VAR(\(n_{1},n_{2},1\) ) as (7), and we use \(f\) to denote the objective function of either \(GCov17\) or \(GCov22\). It is not necessary to specify which \(GCov\) is being employed for the purposes of illustrating how the algorithm works. SA aims to select the matrix \(\Theta\) close to the global optimum. We assume that the \((i,j)\)-\(th\) element of matrix \(\Theta\) (denoted as \(\theta_{i,j}\), for \(i,j=1\ldots n\)) lies within the interval \([\theta_{MIN}\), \(\theta_{MAX}]\). This step helps us to restrict the domain, \(\mathbb{R}^{n\times n}\), of our objective function and hence narrow the search area. Additionally, we denote the maximum and minimum temperature values of \(T^{o}\) as \(T^{o}_{MAX}\) and \(T^{o}_{MIN}\), respectively.
To begin the optimization process, a random initial state of \(\Theta\), denoted as \(\Theta^{S}\), is selected, and its corresponding function value, \(f(\Theta^{S})\), is computed. Specifically, each element of the matrix \(\Theta^{S}\) is randomly chosen from \(\theta_{MIN}\) to \(\theta_{MAX}\). Subsequently, a new value of \(\Theta\), represented by \(\Theta^{\prime}\), is generated. We implement the following computation to derive the \(ij\)-\(th\) element of matrix \(\Theta^{\prime}\):
\[\theta^{\prime}_{ij}=\theta^{S}_{ij}+m_{ij}\quad\forall i,j=1,\ldots,n, \tag{22}\]
where \(m_{ij}\) is the \(ij\)-\(th\) element of an \((n\times n)\) matrix \(M\) and is chosen randomly from the interval \([\theta_{MIN}-\theta^{S}_{ij},\,\theta_{MAX}-\theta^{S}_{ij}]\). We choose this interval to ensure that \(\theta^{\prime}_{ij}\) remains within the range \([\theta_{MIN},\theta_{MAX}]\).
The function \(f(\Theta^{\prime})\) is then computed and compared with \(f(\Theta^{S})\). If \(f(\Theta^{\prime})<f(\Theta^{S})\), \(\Theta^{\prime}\) is accepted and the algorithm progresses downhill. In the opposite case, \(f(\Theta^{\prime})>f(\Theta^{S})\), the eventual acceptance of \(\Theta^{\prime}\) is based on the Metropolis criterion. According to this criterion, we compute the variable \(p^{o}\) such that
\[p^{o}=e^{-\frac{(f(\Theta^{\prime})-f(\Theta^{S}))}{T^{o}}}, \tag{23}\]
to compare it with \(p^{*}\), a number selected randomly from 0 to 1. If \(p^{o}<p^{*}\), \(\Theta^{\prime}\) is rejected, and the algorithm remains at the same point of the function. On the other hand, if \(p^{o}>p^{*}\), we accept \(\Theta^{\prime}\) and move uphill. The acceptance probability in simulated annealing is determined by Equation (23), which is controlled by the parameter \(T^{o}\). To find the optimal solution, the procedure is repeated \(M\) times for each \(T^{o}\), starting from \(T^{o}_{MAX}\) and gradually reducing it
at a rate of \(r\), for a total of \(Q\) times, until it reaches \(T^{o}_{MIN}\).
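A compact sketch of this procedure is given below (our own implementation of the steps just described, with `f` standing for any objective such as the \(GCov\) criterion sketched earlier); note that drawing \(\Theta^{\prime}\) uniformly over the box \([\theta_{MIN},\theta_{MAX}]^{n\times n}\) is equivalent to the move \(\Theta^{S}+M\) with the interval restriction above. The matrix returned by such a routine can then be passed to BFGS as its starting value, as in the Monte Carlo experiment described below.

```python
import numpy as np

def simulated_annealing(f, n, theta_min=-3.5, theta_max=3.5,
                        t_max=1600.0, r=0.85, Q=200, M=1000, seed=0):
    """Minimize f over (n x n) matrices Theta with the SA scheme described above."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(theta_min, theta_max, size=(n, n))        # random initial state Theta^S
    f_cur = f(theta)
    best, f_best = theta.copy(), f_cur
    temp = t_max
    for _ in range(Q):                                            # cooling steps toward T^o_MIN
        for _ in range(M):                                        # M candidate moves per temperature
            cand = rng.uniform(theta_min, theta_max, size=(n, n)) # Theta' stays inside the box
            f_cand = f(cand)
            # Metropolis criterion (23): always accept downhill moves; accept uphill moves
            # with probability p^o = exp(-(f(Theta') - f(Theta^S)) / T^o).
            if f_cand < f_cur or rng.uniform() < np.exp(-(f_cand - f_cur) / temp):
                theta, f_cur = cand, f_cand
                if f_cur < f_best:
                    best, f_best = theta.copy(), f_cur
        temp *= r                                                 # geometric cooling at rate r
    return best, f_best
```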
The SA method presents two main drawbacks. First, it requires exploring a large number of candidate solutions, especially for large values of \(n\) and \(p\). To mitigate this challenge, setting boundaries, \(\theta_{MIN}\) and \(\theta_{MAX}\), to restrict the search area can be helpful. By limiting the number of candidate solutions that need to be evaluated, we can reduce the computational time required to find the global minimum using the SA method. Second, the parameters associated with the SA method, such as \(\theta_{MIN}\), \(\theta_{MAX}\), \(T^{o}_{MAX}\), \(r\), \(Q\), and \(M\), are typically treated as tuning parameters whose appropriate values are contingent upon the objective function to be minimized. Goffe et al. (1994) provides some tips on how to set these parameters, but, in general, their choice requires experimentation.
In empirical studies, a common approach to investigate whether the global minimum has been found is to repeat the algorithm with a different initial state \(\Theta^{S}\). If the same global minimum is reached, it can be concluded with high confidence that convergence has been achieved. In cases where a different result is obtained, it may be necessary to modify one or more of the parameters involved in the simulated annealing algorithm.
We now conduct a Monte Carlo experiment to evaluate the effectiveness of SA on the \(GCov\) estimator. The experiment uses the same DGP described in Section 4, which is known to produce local minima. Specifically, we employ the population matrix \(\Theta_{0}\) displayed in (19). As \(\Sigma\) is diagonal in this case, we obtained similar results when applying SA to either \(Gcov22\) or \(Gcov17\). However, to make a more accurate comparison with the findings of the previous section, we only present the results obtained by \(Gcov22\), while the results from \(Gcov17\) are available upon request.
Given the time needed to run the \(Gcov22\) estimator with SA, we reduced the number of observations to \(T=250\). We keep \(N=1000\) replications. The initial temperature \(T^{o}\) is set to \(T^{o}_{MAX}=1600\), and we let it decrease at a rate of \(r=0.85\) for \(Q=200\) iterations until it reaches \(T^{o}_{MIN}\).2 In order to efficiently explore the search area, our experiments suggest setting \(M=1000\) for each \(T^{o}\). Setting \(M\) too high may result in inefficiencies in terms of time. Conversely, setting \(M\) too low may restrict the ability of the algorithm to explore the search area thoroughly. It is worth noting that in some replications of our experiment, the objective function may require higher values of either \(Q\), \(M\), or both. However, for practical reasons, we set the values of \(Q\) and \(M\) to be the same for all replications. We set \(\theta_{MIN}\) and \(\theta_{MAX}\) to -3.5 and 3.5, respectively. As previously mentioned, these parameters are typically problem-dependent, and their selection requires experimentation. The matrix \(\Theta\) obtained through the SA technique is then used as the initial point for our optimization problem, which is subsequently solved using the \(BFGS\) algorithm. The aim of the Monte Carlo experiment is to investigate which strategy for capturing the initial values, namely \(SA\), \(\Theta_{OLS}\), and \(\widetilde{\Theta}\), gets us closer to the global minimum. Figure
11 and Table 3 show the results.
The SA algorithm significantly improves the results compared to the previous section despite the fact that the sample size considered in this section is significantly smaller. Bimodality issues disappear, and the \(Gcov22\) estimator accurately identifies the process dynamics with a success rate of approximately 78 percent. In the previous section, when the same DGP is considered, the correct dynamics were identified with success rates of 65.4% and 15.3% when using \(\Theta_{OLS}\) and \(\widetilde{\Theta}\) as initial values, respectively, and \(T=1000\) observations.
## 6 Empirical investigations
This section describes two empirical investigations of bivariate processes. The first investigation examines the wheat and corn futures from October 18, 2016, to March 29, 2018. The second one examines soybean and wheat futures
Figure 11: Empirical density function of \(\Theta\) with \(SA\)
Table 3: Dynamic captured by \(Gcov22\)

| PM | Initial value | \(VAR(2,0,1)\) | \(VAR(1,1,1)\) | \(VAR(0,2,1)\) |
| --- | --- | --- | --- | --- |
| \(\Theta_{0}\) | SA | 8.8% | 78.2% | 13.0% |
using the same time period as the previous process.3 The demeaned data is presented in Figures 12-13. The primary objective of this study is to evaluate the performance of the \(Gcov22\) estimator when applied to real-world data and to investigate whether any eventual local minima arise.
Footnote 3: Data obtained from [https://ca.finance.yahoo.com](https://ca.finance.yahoo.com) (wheat futures: ticker ZW=F, corn futures: ticker ZC=F, and soybean futures: ticker ZS=F)
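For completeness, one possible way to retrieve and demean these series in Python is sketched below; it assumes the third-party `yfinance` package and the tickers listed in the footnote, whereas the paper only states that the data come from Yahoo Finance.

```python
import yfinance as yf  # third-party package; any Yahoo Finance client would do

tickers = ["ZW=F", "ZC=F", "ZS=F"]                 # wheat, corn, and soybean futures
px = yf.download(tickers, start="2016-10-18", end="2018-03-30")["Close"].dropna()
px = px - px.mean()                                # demean each series

Y_wheat_corn = px[["ZW=F", "ZC=F"]].to_numpy()     # first bivariate system
Y_soy_wheat  = px[["ZS=F", "ZW=F"]].to_numpy()     # second bivariate system
```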
The series do not exhibit global trends or other widespread and persistent explosive patterns; instead, they display localized trends, bubbles, and spikes. Therefore, we assume that the series are strictly stationary.
Since the standard normality tests reject the null hypothesis of normality in all the considered time series, we can apply \(Gcov22\) to our demeaned data. In this regard, we use both the OLS estimate (\(\Theta_{OLS}\)) and the results obtained by the \(SA\) method as starting points for the optimization problems. Table 4 shows the results. \(Gcov22\) identifies the process related to wheat and corn as mixed causal and noncausal (VAR\((1,1,1)\)). In particular, the same estimated matrix is found regardless of the strategy employed to capture the initial guess, and eigenvalues \(\hat{j}_{1}=0.94\) and \(\hat{j}_{2}=1.12\) are found. This result confirms the findings of Section 4: there are no local minima when the causal and noncausal eigenvalues are respectively slightly smaller and larger than 1.
When analyzing the bivariate process of soybean and wheat, we find that the results depend on the starting point chosen for the optimization problem. In particular, when the OLS strategy is used to capture the initial value, \(Gcov22\) identifies the process as purely causal (VAR(4,0,2)). On the other hand, when the \(SA\) method is used, a lower function value is achieved, and the bivariate process is identified as VAR(3,1,2), with eigenvalues \(j_{1}=0.972\), \(j_{2}=0.88\), \(j_{3}=0.604\), and \(j_{4}=-4.355\). Even in this case, the empirical investigation confirms our previous findings. Specifically, local minima arise when the causal eigenvalues, the noncausal eigenvalues, or both are far from 1 (significantly smaller and significantly larger than 1, respectively).
Figure 12: Bivariate process of wheat and corn futures
Figure 13: Bivariate process of wheat and soybean futures
## 7 Conclusions
In this paper, we have investigated the performance of the \(GCov\) estimator in estimating mixed causal and noncausal models. The \(GCov\) estimator, being a semi-parametric method, offers the advantage of not assuming any specific error distribution. By utilizing a portmanteau-type criterion based on nonlinear autocovariances, it ensures consistent estimates and consequently allows for the identification of the causal and noncausal orders of the mixed VAR.
Our findings highlight the importance of considering an adequate number and type of nonlinear autocovariances in the objective function of the \(GCov\) estimator. When these autocovariances are insufficient or inadequate, or when the error density closely resembles the Gaussian distribution, identification issues can arise. This manifests in the presence of local minima in the objective function, occurring at parameter values associated with incorrect causal and noncausal orders. Consequently, the optimization algorithm may converge to a local minimum, leading to inaccurate estimates.
To overcome the problem of local minima and improve the estimation accuracy of mixed VAR models, we propose the use of the Simulated Annealing (SA) optimization algorithm as an alternative to conventional numerical optimization methods. The SA algorithm effectively manages the identification issues caused by local minima, successfully eliminating their effects. By exploring the parameter space in a more robust and flexible manner, SA provides a reliable solution for obtaining more accurate estimates of the causal and noncausal orders.
Finally, the paper applies the \(Gcov22\) method to two bivariate commodity price series and assesses its performance by employing various initial points for the optimization problem. The results highlight the existence of local minima in one of the bivariate processes, emphasizing the importance of carefully selecting suitable initial values to obtain accurate estimates when utilizing the \(GCov\) estimator.

Table 4: Results of \(Gcov22\) for the bivariate wheat-corn and soybean-wheat series
2310.18957 | Weighted frames, weighted lower semi frames and unconditionally
convergent multipliers | In this paper we ask when it is possible to transform a given sequence into a
frame or a lower semi frame by multiplying the elements by numbers. In other
words, we ask when a given sequence is a weighted frame or a weighted lower
semi frame and for each case we formulate a conjecture. We determine several
conditions under which these conjectures are true. Finally, we prove an
equivalence between two older conjectures, the first one being that any
unconditionally convergent multiplier can be written as a multiplier of Bessel
sequences by shifting of weights, and the second one that every unconditionally
convergent multiplier which is invertible can be written as a multiplier of
frames by shifting of weights. We also show that these conjectures are also
related to one of the newly posed conjectures. | Peter Balazs, Rosario Corso, Diana Stoeva | 2023-10-29T09:58:18Z | http://arxiv.org/abs/2310.18957v1 | # Weighted frames, weighted lower semi frames and unconditionally convergent multipliers
###### Abstract
In this paper we ask when it is possible to transform a given sequence into a frame or a lower semi frame by multiplying the elements by numbers. In other words, we ask when a given sequence is a weighted frame or a weighted lower semi frame and for each case we formulate a conjecture. We determine several conditions under which these conjectures are true. Finally, we prove an equivalence between two older conjectures, the first one being that any unconditionally convergent multiplier can be written as a multiplier of Bessel sequences by shifting of weights, and the second one that every unconditionally convergent multiplier which is invertible can be written as a multiplier of frames by shifting of weights. We also show that these conjectures are also related to one of the newly posed conjectures.
**Keywords:** weighted frames, weighted lower semi frames, multipliers, unconditionally convergent multipliers.
## 1 Introduction
Let \(\mathcal{H}\) be an infinite dimensional separable Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\). Throughout the paper we use \(\mathbb{N}\) as an index set, also implicitly. Let \((f_{n})\) be a sequence with elements of \(\mathcal{H}\). Recall that \((f_{n})\) is called a _frame for \(\mathcal{H}\)_ if there exist positive constants \(A\) and \(B\) such that
\[A\|f\|^{2}\leq\sum_{n}|\langle f,f_{n}\rangle|^{2}\leq B\|f\|^{2},\qquad\forall f \in\mathcal{H}. \tag{1}\]
Frames have been introduced by Duffin and Schaeffer [24] and they provide reconstruction formulas on the entire Hilbert space. More precisely, if \((f_{n})\) is a frame for \(\mathcal{H}\), then there exists a frame \((g_{n})\) for \(\mathcal{H}\) such that
\[f=\sum_{n}\langle f,f_{n}\rangle g_{n}=\sum_{n}\langle f,g_{n}\rangle f_{n}, \qquad\forall f\in\mathcal{H},\]
see [24]. Separate consideration of the inequalities in (1) has also been of interest. The sequence \((f_{n})\) is a _Bessel sequence in \(\mathcal{H}\)_ with Bessel bound \(B\in(0,\infty)\) if the upper inequality in (1) holds for every \(f\in\mathcal{H}\). For more on frames and Bessel sequences, we refer e.g. to the books [17, 26]. The sequence \((f_{n})\) is called a _lower semi frame for \(\mathcal{H}\)_ with bound \(A>0\) if the lower inequality in (1) holds for every \(f\in\mathcal{H}\). A lower semi frame \(\mathcal{F}=(f_{n})\) for \(\mathcal{H}\) provides a reconstruction formula of the type
\[f=\sum_{n}\langle f,f_{n}\rangle g_{n},\qquad\forall f\in\mathcal{D}(C_{ \mathcal{F}}), \tag{2}\]
where \((g_{n})\) is some Bessel sequence in \(\mathcal{H}\) and \(\mathcal{D}(C_{\mathcal{F}})\) is the domain of the analysis operator1 of \((f_{n})\). This result dates back to [15]. Contrary to the frame case, where reconstruction of all the elements of the Hilbert space is guaranteed, for certain lower semi frames a reconstruction formula may hold only on a strict subset of \(\mathcal{H}\), see [15, 33]. For more on lower frame sequences, we refer to [1, 2, 15, 18, 33, 34]. As a notion symmetric to that of a lower semi frame, an upper semi-frame for \(\mathcal{H}\) has been introduced [1] as a Bessel sequence in \(\mathcal{H}\) that is furthermore complete in \(\mathcal{H}\).
Footnote 1: See Section 2 for the definition of the analysis operator.
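Before proceeding, it may be helpful to see the defining inequality (1) and the dual reconstruction formula at work in a finite dimensional toy computation (our own illustration, not part of the original exposition): for a finite frame, the optimal bounds \(A\) and \(B\) are the extreme eigenvalues of the frame operator, and the canonical dual reproduces every vector.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 7                                   # a frame of 7 vectors for C^4
F = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))   # columns f_1,...,f_n

# frame operator Sf = sum_n <f, f_n> f_n, i.e. S = F F^*; its extreme eigenvalues
# are the optimal frame bounds A and B in (1)
S = F @ F.conj().T
eigs = np.linalg.eigvalsh(S)
A, B = eigs.min(), eigs.max()
print("optimal frame bounds:", A, B)

# canonical dual frame g_n = S^{-1} f_n gives f = sum_n <f, f_n> g_n for every f
G = np.linalg.solve(S, F)
f = rng.standard_normal(d) + 1j * rng.standard_normal(d)
coeffs = F.conj().T @ f                       # the coefficients (<f, f_n>)_n
print("reconstruction error:", np.linalg.norm(G @ coeffs - f))
```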
This paper is centered on the following notions.
**Definition 1.1**.: _A sequence \((f_{n})\) in \(\mathcal{H}\) is called a weighted frame2 (resp. weighted lower semi frame) for \(\mathcal{H}\) if there exists \((\omega_{n})\subset(0,+\infty)\) such that \((\omega_{n}f_{n})\) is a frame (resp. lower semi frame) for \(\mathcal{H}\)._
Footnote 2: Weighted frames were previously introduced and studied in [6, 14, 29]. The notion is also connected to scalable frames [31].
Specifically, in this paper we study the following problems
* Under which conditions is a given sequence a weighted lower semi frame for \(\mathcal{H}\)?
* Under which conditions is a given sequence a weighted frame for \(\mathcal{H}\)?
Further introductory definitions and results are given in Sect. 2. Weighted lower semi frames and Problem P1 are studied in Sect. 3. Similarly, we dedicate Sect. 4 to weighted frames and to Problem P2. At the end of Sect. 4 we relate the topic to unconditionally convergent multipliers. In particular, we establish the equivalence between two conjectures put forward in [38]. We show the surprising fact that if it is true that any unconditionally convergent invertible multiplier is a multiplier of frames by shifts of weights, then it is also true that any unconditionally convergent multiplier is a multiplier of Bessel sequences by shifts of weights.
## 2 Preliminaries
Given an operator \(T\) on a vector space we write \(\mathcal{D}(T)\) for its domain and \(R(T)\) for its range. As usual, the symbol \(\mathcal{B}(\mathcal{H})\) stands for the set of all bounded operators with domain \(\mathcal{H}\) and \(GL(\mathcal{H})\) denotes the set of all bounded bijective operators from \(\mathcal{H}\) onto \(\mathcal{H}\). A sequence \((f_{n})\) of elements of \(\mathcal{H}\) is _complete in \(\mathcal{H}\)_ if its linear span is dense in \(\mathcal{H}\). A sequence \((f_{n})\) in \(\mathcal{H}\) is called _minimal_ if \(f_{j}\notin\overline{\operatorname{span}}(f_{n})_{n\neq j}\) for all \(j\). As it is well known, \((f_{n})\) is minimal if and only if it is _biorthogonal_ to some sequence \((g_{n})\), i.e. \(\langle f_{n},g_{m}\rangle=\delta_{n,m}\); a minimal sequence that is complete in \(\mathcal{H}\) has a unique biorthogonal in \(\mathcal{H}\). Throughout the paper, we will make use of two operators associated to a sequence \(\mathcal{F}:=(f_{n})\)
The _analysis operator_\(C_{\mathcal{F}}:\mathcal{D}(C_{\mathcal{F}})\subseteq\mathcal{H}\to\ell^{2}\) is given by \(\mathcal{D}(C_{\mathcal{F}})=\{f\in\mathcal{H}:\sum_{n}|\langle f,f_{n}\rangle| ^{2}<\infty\}\) and \(C_{\mathcal{F}}f=(\langle f,f_{n}\rangle)\), for \(f\in\mathcal{D}(C_{\mathcal{F}})\). The _synthesis operator_\(D_{\mathcal{F}}:\mathcal{D}(D_{\mathcal{F}})\subseteq\ell^{2}\to\mathcal{H}\) is given by \(\mathcal{D}(D_{\mathcal{F}})=\{(c_{n})\in\ell^{2}:\sum_{n}c_{n}f_{n}\text{ converges in }\mathcal{H}\}\) and \(D_{\mathcal{F}}(c_{n})=\sum_{n}c_{n}f_{n}\), for \((c_{n})\in\mathcal{D}(D_{\mathcal{F}})\). The operator \(C_{\mathcal{F}}\) is always closed [15] and \(C_{\mathcal{F}}=D_{\mathcal{F}}^{*}\)[11]. Detailed properties of the analysis and synthesis operators (alongside other operators associated to sequences) can be found in [11].
Let us recall some basic results from frame theory (that can be found e.g. in [17]). For a Bessel sequence \((f_{n})\) and \((c_{n})\in\ell^{2}\) the series \(\sum_{n=1}^{\infty}c_{n}f_{n}\) is always convergent. A _Riesz basis_\((f_{n})\) for \(\mathcal{H}\) is a complete sequence in \(\mathcal{H}\) satisfying
\[A\sum_{n}|c_{n}|^{2}\leq\left\|\sum_{n}c_{n}f_{n}\right\|^{2}\leq B\sum_{n}|c _{n}|^{2},\qquad\forall(c_{n})\in\ell^{2},\]
for some positive constants \(A\) and \(B\). A sequence is a Riesz basis for \(\mathcal{H}\) if it is a frame for \(\mathcal{H}\) that has a biorthogonal sequence. As done for frames and lower semi frames, we say that a sequence \((f_{n})\) is a _weighted Riesz basis_ for \(\mathcal{H}\) if there exists \((\omega_{n})\subset(0,+\infty)\) such that \((\omega_{n}f_{n})\) is a Riesz basis for \(\mathcal{H}\). Note that Problem P2 with respect to weighted Riesz bases is related to the following well-known characterization of Riesz bases.
**Theorem 2.1**.: _[_17_, Lemma 3.6.9]_ _A sequence \((f_{n})\) is a Riesz basis for \(\mathcal{H}\) if and only if it is an unconditional basis for \(\mathcal{H}\) and it is norm-semi-normalized (i.e. \(0<\inf\|f_{n}\|\leq\sup\|f_{n}\|<\infty\)). In other words, a sequence \((f_{n})\) is a weighted Riesz basis for \(\mathcal{H}\) if and only if it is an unconditional basis for \(\mathcal{H}\)._
In [9, 32] reproducing pairs have been introduced and studied as a way to go beyond the frame reconstruction formulas. A pair of sequences \(((f_{n}),(g_{n}))\) is called a _reproducing pair for \(\mathcal{H}\)_ if the operator \(T\) defined weakly by \(\langle Tf,g\rangle=\sum_{n}\langle f,f_{n}\rangle\langle g_{n},g\rangle\) belongs to \(GL(\mathcal{H})\). Reproducing pairs are related to the more general notion of sesquilinear forms. For study on sesquilinear forms, we refer to [18, 22, 30].
## 3 Weighted lower semi frames
Since a weighted frame is, in particular, a weighted lower semi frame, we start by analyzing sequences that can be transformed into lower semi frames. First, observe that if a sequence is a weighted lower semi frame for \(\mathcal{H}\), then it is complete in \(\mathcal{H}\). We conjecture that completeness is furthermore a sufficient condition for the lower semi frame property:
**Conjecture 3.1**.: _Let \((f_{n})\) be a complete sequence in \(\mathcal{H}\) and \(A\in(0,\infty)\). There exists \((\omega_{n})\subset(0,\infty)\) such that \((\omega_{n}f_{n})\) is a lower semi frame for \(\mathcal{H}\) with a bound \(A\)._
In this section we determine some classes of sequences for which Conjecture 3.1 holds. First of all, we prove that the corresponding problem for Bessel sequences can be easily solved and it is helpful for dealing with special cases of the above conjecture.
**Lemma 3.2**.: _Let \((f_{n})\) be a sequence in \(\mathcal{H}\). For any \(B>0\) there exists a sequence \((\lambda_{n})\subset(0,\infty)\) such that \((\lambda_{n}f_{n})\) is a Bessel sequence with bound \(B\)._
Proof.: Let \(B>0\) and let \((\tau_{n})\subset(0,\infty)\) be a sequence such that \(\sum_{n}\tau_{n}^{2}=1\). Define \(\lambda_{n}=\tau_{n}\sqrt{B}\|f_{n}\|^{-1}\) if \(f_{n}\neq 0\) and \(\lambda_{n}=1\) (or any other positive number) if \(f_{n}=0\). Then, by the Cauchy-Schwarz inequality, we have for all \(f\in\mathcal{H}\) that
\[\sum_{n}|\langle f,\lambda_{n}f_{n}\rangle|^{2}\leq\sum_{n}\tau_{n}^{2}B\|f\|^ {2}=B\|f\|^{2}.\qed\]
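The weights in this proof are completely explicit, so the lemma can be checked numerically on a finite truncation. The sketch below is our own illustration with an arbitrarily chosen sequence of rapidly growing norms and \(\tau_{n}=2^{-n/2}\) (so that \(\sum_{n}\tau_{n}^{2}\leq 1\)); the largest eigenvalue of the frame operator of \((\lambda_{n}f_{n})\) then stays below the prescribed bound \(B\).

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, B = 5, 40, 3.0
# a (truncated) sequence with rapidly growing norms, hence far from Bessel on its own
F = rng.standard_normal((d, n)) * (2.0 ** np.arange(n))

tau = 2.0 ** (-np.arange(1, n + 1) / 2)       # sum of tau_n^2 = 1 - 2^{-n} <= 1
norms = np.linalg.norm(F, axis=0)
lam = tau * np.sqrt(B) / norms                # the weights from the proof of Lemma 3.2

# frame operator of (lambda_n f_n); its largest eigenvalue is the optimal Bessel bound
S_w = (F * lam) @ (F * lam).T
print("Bessel bound of the weighted sequence:", np.linalg.eigvalsh(S_w).max(), "<=", B)
```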
With this lemma we can then prove our first result.
**Proposition 3.3**.: _Let \(((f_{n}),(g_{n}))\) be a reproducing pair for \(\mathcal{H}\). Then there are weights \((\lambda_{n})\subset(0,\infty)\) and \((\beta_{n})\subset(0,\infty)\) such that \((\frac{1}{\lambda_{n}}f_{n})\) and \((\frac{1}{\beta_{n}}g_{n})\) are lower semi-frames for \(\mathcal{H}\), and \((\lambda_{n}g_{n})\) and \((\beta_{n}f_{n})\) are upper semi-frames for \(\mathcal{H}\). In particular, Conjecture 3.1 is true for \((f_{n})\) and \((g_{n})\)._
Proof.: Let \(T\in GL(\mathcal{H})\) be the operator such that \(\langle Tf,g\rangle=\sum_{n}\langle f,f_{n}\rangle\langle g_{n},g\rangle\) for all \(f,g\in\mathcal{H}\). Then
\[\langle f,g\rangle=\sum_{n}\langle f,f_{n}\rangle\langle h_{n},g\rangle,\qquad \forall f,g\in\mathcal{H}, \tag{3}\]
where \((h_{n})=(T^{-1}g_{n})\).
Let \(A>0\). By Lemma 3.2, there exists a sequence \((\lambda_{n})\subset(0,+\infty)\) such that \((\lambda_{n}h_{n})\) is Bessel with bound \(A^{-1}\). Since
\[\langle f,g\rangle=\sum_{n}\langle f,\lambda_{n}^{-1}f_{n}\rangle\langle \lambda_{n}h_{n},g\rangle,\qquad\forall f,g\in\mathcal{H},\]
using the Cauchy-Schwarz inequality, similar to [17, Lemma 6.3.2] we get that for every \(f\in\mathcal{H}\),
\[\|f\|^{4}\leq(\sum_{n}|\langle f,\lambda_{n}^{-1}f_{n}\rangle|^{2})(\sum_{n}| \langle f,\lambda_{n}h_{n}\rangle|^{2})\leq A^{-1}\|f\|^{2}\sum_{n}|\langle f,\lambda_{n}^{-1}f_{n}\rangle|^{2}.\]
Thus, \((\lambda_{n}^{-1}f_{n})\) is a lower semi frame for \(\mathcal{H}\) with bound \(A\). Since \((\lambda_{n}h_{n})\) is a Bessel sequence, clearly \((\lambda_{n}g_{n})\) is also Bessel.
Further, by Lemma 3.2, take a sequence \((\beta_{n})\subset(0,+\infty)\) such that \((\beta_{n}f_{n})\) is Bessel with bound \(B=A^{-1}\|T^{-1}\|^{-2}\). Similar to the above paragraph, using the Cauchy-Schwarz inequality, we get that the sequence \((\beta_{n}^{-1}h_{n})\) is a lower semi frame for \(\mathcal{H}\) with bound \(B^{-1}\). Then \((\beta_{n}^{-1}g_{n})\) is a lower semi frame for \(\mathcal{H}\) with bound \(B^{-1}\|T^{-1}\|^{-2}=A\).
Since \((\lambda_{n}^{-1}f_{n})\) and \((\beta_{n}^{-1}g_{n})\) are complete in \(\mathcal{H}\), clearly \((\lambda_{n}g_{n})\) and \((\beta_{n}f_{n})\) are also complete in \(\mathcal{H}\).
As particular case, Proposition 3.3 applies to sequences \((f_{n})\) and \((g_{n})\) which are _weakly dual_ to each other in the sense that
\[\langle f,g\rangle=\sum_{n}\langle f,f_{n}\rangle\langle g_{n},g\rangle,\qquad \forall f,g\in\mathcal{H}, \tag{4}\]
or when the operator \(M\) given by
\[Mf=\sum_{n}\langle f,g_{n}\rangle f_{n} \tag{5}\]
is well defined on \(\mathcal{H}\) and belongs to \(GL(\mathcal{H})\). Note that an operator \(M\) determined by (5) is a multiplier with a constant symbol (see Section 4.4 for the definition of multipliers); see [36, 37, 42] for results related to the question when a multiplier is in \(GL(\mathcal{H})\). For studies of representations of unbounded operators like (5) we refer to [12].
In particular, by the proofs of Proposition 3.3 and Lemma 3.2, the following can be stated for the important case of weakly dual sequences:
**Corollary 3.4**.: _Let \(((f_{n}),(g_{n}))\) be a weakly dual pair for \(\mathcal{H}\), i.e., (4) holds. Then there are weights \((\lambda_{n})\subset(0,\infty)\) and \((\beta_{n})\subset(0,\infty)\) such that \((\frac{1}{\lambda_{n}}f_{n},\lambda_{n}g_{n})\) and \((\beta_{n}f_{n},\frac{1}{\beta_{n}}g_{n})\) are weakly dual pairs for \(\mathcal{H}\), \((\frac{1}{\lambda_{n}}f_{n})\) and \((\frac{1}{\beta_{n}}g_{n})\) are lower semi-frames for \(\mathcal{H}\), and \((\lambda_{n}g_{n})\) and \((\beta_{n}f_{n})\) are upper semi-frames for \(\mathcal{H}\). The weights can be chosen e.g. as \(\lambda_{n}=\tau_{n}\|g_{n}\|^{-1}\) if \(g_{n}\neq 0\), \(\lambda_{n}=1\) if \(g_{n}=0\), \(\beta_{n}=\tau_{n}\|f_{n}\|^{-1}\) if \(f_{n}\neq 0\), and \(\beta_{n}=1\) if \(f_{n}=0\), where \((\tau_{n})\subset(0,\infty)\) is a sequence such that \(\sum_{n}\tau_{n}^{2}=1\)._
As another direct application of Proposition 3.3 we have the following statement for Schauder bases3.
Footnote 3: Recall that a sequence \((f_{n})\) in \(\mathcal{H}\) is a Schauder basis for \(\mathcal{H}\) if there exists a unique sequence \((g_{n})\) in \(\mathcal{H}\) such that \(f=\sum_{n}\langle f,g_{n}\rangle f_{n}\) for every \(f\in\mathcal{H}\).
**Corollary 3.5**.: _Let \((f_{n})\) be a sequence in \(\mathcal{H}\) containing a Schauder basis for \(\mathcal{H}\) or a frame for \(\mathcal{H}\). Then Conjecture 3.1 is true for \((f_{n})\)._
Recall that when \((f_{n})\) is a sequence in \(\mathcal{H}\) whose synthesis operator is closed and surjective, then there is a Bessel sequence \((g_{n})\) such that \(f=\sum\langle f,g_{n}\rangle f_{n}\) for every \(f\in\mathcal{H}\) and \((f_{n})\) satisfies the lower frame condition for \(\mathcal{H}\), see [17, Theorem 8.4.1]. Let us now consider the following (more general) case using closed and surjective operator of the form (5):
**Proposition 3.6**.: _Let \((f_{n})\) and \((g_{n})\) be sequences in \(\mathcal{H}\). Define the operator \(M\) by (5) with domain determined by those \(f\) for which the series in (5) converges. If \(M\) is closed and \(R(M)=\mathcal{H}\), then Conjecture 3.1 is true for \((f_{n})\)._
Proof.: By [13, Lemma 1.1, Corollary 1.2], the operator \(M\) admits a pseudo-inverse \(M^{+}\) which belongs to \(\mathcal{B}(\mathcal{H})\) since \(R(M)=\mathcal{H}\). This means, in particular, that \(MM^{+}h=h\) for every \(h\in\mathcal{H}\). Then
\[h=MM^{+}h=\sum_{n}\langle M^{+}h,g_{n}\rangle f_{n}=\sum_{n}\langle h,M^{+*}g_ {n}\rangle f_{n},\qquad\forall h\in\mathcal{H}.\]
By Corollary 3.4, Conjecture 3.1 is true for \((f_{n})\).
We proceed with one more class of sequences for which Conjecture 3.1 holds.
**Proposition 3.7**.: _Any complete sequence in \(\mathcal{H}\) whose analysis operator has finite dimensional domain is a lower semi frame for \(\mathcal{H}\) (thus, Conjecture 3.1 trivially holds for it)._
Proof.: Let \((f_{n})\) be a complete sequence in \(\mathcal{H}\) such that its analysis operator \(C\) has finite dimensional (closed) domain \(\mathcal{D}(C)\). Then for \(f\in\mathcal{D}(C)\) we have
\[\sum_{n=1}^{\infty}|\langle f,f_{n}\rangle|^{2}=\sum_{n=1}^{\infty}|\langle f,Pf_{n}\rangle|^{2}, \tag{6}\]
where \(P\) is the orthogonal projection onto \(\mathcal{D}(C)\). The sequence \((Pf_{n})\) is complete in \(\mathcal{D}(C)\) and since \(\dim\mathcal{D}(C)=d<\infty\), there exist \(f_{n_{1}},f_{n_{2}},\ldots,f_{n_{d}}\) such that \(Pf_{n_{1}},Pf_{n_{2}},\ldots,Pf_{n_{d}}\) are linearly independent and thus a Riesz basis for \(\mathcal{D}(C)\). This implies that \((Pf_{n})\) is a lower semi frame for \(\mathcal{D}(C)\). By (6), \((f_{n})\) satisfies the lower frame condition for \(\mathcal{D}(C)\) and hence \((f_{n})\) is a lower semi frame for \(\mathcal{H}\).
Note that if the analysis operator \(C\) of \((f_{n})\) has finite dimensional domain then also the orthogonal complement of \((f_{n})\) is finite dimensional. This follows by the fact that if \(h\perp f_{n}\) for every \(n\), then \(h\in\mathcal{D}(C)\).
_Remark 3.8_.: As it can be expected, the above propositions do not cover all the possible classes of sequences for which Conjecture 3.1 holds. As an illustration, consider for example the sequence \((f_{n})_{n=2}^{\infty}=(n(e_{1}+e_{n}))_{n=2}^{\infty}\) where \((e_{n})_{n=1}^{\infty}\) denotes an orthonormal basis for \(\mathcal{H}\). It is shown in [33] that \((f_{n})_{n=2}^{\infty}\) satisfies the lower frame condition for \(\mathcal{H}\), but not all the elements of \(\mathcal{H}\) can be represented in the form \(\sum_{n}c_{n}f_{n}\) (hence, Proposition 3.6 does not apply) and the representation (2) can not be extended to the entire space \(\mathcal{H}\). In a similar way as in [33] it can be shown that \((f_{n})\) does not generate a reproducing pair \((f_{n},g_{n})\) for \(\mathcal{H}\) and thus Proposition 3.3 does not apply. Clearly, the domain of the analysis operator of \((f_{n})\) is not finite dimensional, so Proposition 3.7 does not apply either.
The above remark motivates interest to further consideration of Conjecture 3.1. Let us end this section showing that we can reduce the conjecture studying only sequences with densely defined analysis operators.
**Theorem 3.9**.: _The following statements are equivalent._
1. _Conjecture_ 3.1 _is true._
2. _Conjecture_ 3.1 _is true for every complete sequence in_ \(\mathcal{H}\) _whose analysis operator is densely defined_4_.
Footnote 4: i.e., whose synthesis operator is closable [11]; in such case one also has \(D\subseteq C^{*}\) [16].
Proof.: \(1.\implies 2.\) Clear.
\(2.\implies 1.\) Let \(\mathcal{F}:=(f_{n})\) be a complete sequence in \(\mathcal{H}\) with analysis operator \(C_{\mathcal{F}}\). We will prove that Conjecture 3.1 holds for \(\mathcal{F}\) under the assumption that statement \(2.\) is valid. Let \(W=\overline{\mathcal{D}(C_{\mathcal{F}})}\) be the closure of \(\mathcal{D}(C_{\mathcal{F}})\) in \(\mathcal{H}\) and let \(P_{W}\) be the orthogonal projection onto \(W\). If \(\dim W<\infty\), then the conjecture is true for \(\mathcal{F}\) by Proposition 3.7. Assume that \(W\) has infinite dimension. Recall that \(\mathcal{H}\) is separable, therefore \(W\) is separable and there exists a unitary operator \(J:W\to\mathcal{H}\) (i.e., \(J^{*}=J^{-1}\)). Define \((h_{n}):=(P_{W}f_{n})\) and \(\mathcal{G}:=(g_{n}):=(Jh_{n})\). The sequences \((h_{n})\) and \((g_{n})\) are complete in \(W\) and \(\mathcal{H}\), respectively. Let \(C_{\mathcal{G}}:\mathcal{D}(C_{\mathcal{G}})\subset\mathcal{H}\to\ell^{2}\) be the analysis operator of \(\mathcal{G}\). Observe that for \(f\in W\) the following relations hold
\[\sum_{n}|\langle Jf,g_{n}\rangle|^{2}=\sum_{n}|\langle f,h_{n}\rangle|^{2}= \sum_{n}|\langle P_{W}f,f_{n}\rangle|^{2}=\sum_{n}|\langle f,f_{n}\rangle|^{2},\]
which implies that \(Jf\in\mathcal{D}(C_{\mathcal{G}})\) if and only if \(f\in\mathcal{D}(C_{\mathcal{F}})\). Therefore, \(J\mathcal{D}(C_{\mathcal{F}})=\mathcal{D}(C_{\mathcal{G}})\), leading to the conclusion that \(\mathcal{D}(C_{\mathcal{G}})\) is dense in \(\mathcal{H}\).
Fix an arbitrary \(A>0\). Since \((g_{n})\) is complete in \(\mathcal{H}\) with \(\mathcal{D}(C_{\mathcal{G}})\) being dense in \(\mathcal{H}\), by assumption there is a sequence \((\omega_{n})\subset(0,\infty)\) such that \((\omega_{n}g_{n})\) is a lower semi frame for \(\mathcal{H}\) with bound \(A\). By making the substitution \(\omega_{n}\to\omega_{n}+1\), we can assume that \(\omega_{n}>1\) for every \(n\). It remains to prove that \((\omega_{n}f_{n})\) is a lower semi frame for \(\mathcal{H}\) with bound \(A\).
Let \(f\in\mathcal{H}\). If \(\sum_{n}|\langle f,\omega_{n}f_{n}\rangle|^{2}=\infty\), then the lower frame inequality trivially holds. Now consider the remaining case. In this case we have
\[\sum_{n}|\langle f,f_{n}\rangle|^{2}\leq\sum_{n}|\langle f,\omega_{n}f_{n} \rangle|^{2}<\infty,\]
so \(f\) is necessarily in \(\mathcal{D}(C_{\mathcal{F}})\subset W\). Therefore,
\[\sum_{n}|\langle f,\omega_{n}f_{n}\rangle|^{2}=\sum_{n}|\langle P_{W}f,\omega_ {n}f_{n}\rangle|^{2}=\sum_{n}|\langle f,\omega_{n}h_{n}\rangle|^{2}=\sum_{n}| \langle Jf,\omega_{n}g_{n}\rangle|^{2}\geq A\|Jf\|^{2}=A\|f\|^{2}.\qed\]
In [8] a similar approach, looking at the closure of the domain of the analysis operator, is used to obtain results on lower frame sequences and Riesz-Fischer sequences.
## 4 Weighted frames
Note that in the literature [6, 14, 29] Definition 1.1 was used to define the concept of \(w\)-frame, while weighted frames were used either with the meaning of a \(w\)-frame or with the meaning of a sequence which can be transformed into a frame via a weight \((w_{n})\subset\mathbb{C}\). Trivially, Definition 1.1 is equivalent to a definition which requires \((\omega_{n})\subset\mathbb{C}\) and \(\omega_{n}\neq 0\) for every \(n\) (taking \((|\omega_{n}|)\) into consideration). Notice that, by Lemma 3.2, Definition 1.1 is also equivalent to a definition that furthermore allows zero elements in \((w_{n})\) (so, one can also use \([0,+\infty)\) or the entire set \(\mathbb{C}\) for an equivalent definition). Indeed, assume that \((f_{n})\) is a sequence such that \((\omega_{n}^{\prime}f_{n})\) is a frame for \(\mathcal{H}\) for some \((\omega_{n}^{\prime})\subset[0,\infty)\). Denote \(I=\{n\in\mathbb{N}:\omega_{n}^{\prime}\neq 0\}\) and split \((f_{n})=(f_{m})_{m\in I}\cup(f_{k})_{k\notin I}\). By Lemma 3.2, there exists a sequence \((\tau_{k})_{k\notin I}\subset(0,\infty)\) such that \((\tau_{k}f_{k})_{k\notin I}\) is a Bessel sequence. Now, define \(\omega_{n}=\omega_{n}^{\prime}\) if \(n\in I\) and \(\omega_{n}=\tau_{n}\) if \(n\notin I\). By construction, \((\omega_{n})\subset(0,+\infty)\). Since \((\omega_{m}^{\prime}f_{m})_{m\in I}\) is a frame for \(\mathcal{H}\) and \((\tau_{k}f_{k})_{k\notin I}\) is a Bessel sequence, the sequence \((\omega_{n}f_{n})\) is a frame for \(\mathcal{H}\).
### Necessary conditions
We begin with some necessary conditions that are easy consequences of known results about frames:
**Proposition 4.1**.: _If \((f_{n})\) is a weighted frame for \(\mathcal{H}\), then \((f_{n})\) is complete in \(\mathcal{H}\) and there exists a sequence \((g_{n})\) in \(\mathcal{H}\) such that_
\[f=\sum_{n}\langle f,f_{n}\rangle g_{n}=\sum_{n}\langle f,g_{n}\rangle f_{n}, \qquad\forall f\in\mathcal{H},\]
_with unconditional convergence._
As in the case of frames, where completeness is a necessary but not sufficient condition when the space is infinite dimensional, here completeness is also necessary but not sufficient for the weighted frame property. An illustration is given in the following example.
**Example 4.2**.: _Let \((f_{n})_{n=2}^{\infty}=(e_{1}+e_{n})_{n=2}^{\infty}\), where \((e_{n})_{n=1}^{\infty}\) is an orthonormal basis for \(\mathcal{H}\). It is easy to see that \((f_{n})_{n=2}^{\infty}\) is complete in \(\mathcal{H}\). However, \((f_{n})_{n=2}^{\infty}\) is not a weighted frame for \(\mathcal{H}\). Indeed, with an argument similar to the one used in [33], there does not exist a sequence \((g_{n})_{n=2}^{\infty}\) such that_
\[f=\sum_{n}\langle f,f_{n}\rangle g_{n},\qquad\forall f\in\mathcal{H}.\]
_Then, by Proposition 4.1, \((f_{n})_{n=2}^{\infty}\) is not a weighted frame for \(\mathcal{H}\)._
The next statement gives a class of complete sequences that are not weighted frames. For recent results on completeness in relation to the Bessel and Riesz basis properties we refer to [35].
**Proposition 4.3**.: _Let \((f_{n})\) be a complete minimal sequence in \(\mathcal{H}\) whose biorthogonal sequence is not complete in \(\mathcal{H}\). Then \((f_{n})\) is not a weighted frame for \(\mathcal{H}\)._
Proof.: Suppose that \((f_{n})\) is a weighted frame for \(\mathcal{H}\) and a minimal sequence. In particular, this means that there exists \((\omega_{n})\subset(0,+\infty)\) such that \((\omega_{n}f_{n})\) is a Riesz basis for \(\mathcal{H}\). Then the biorthogonal sequence to \((\omega_{n}f_{n})\) is a Riesz basis for \(\mathcal{H}\) too. This implies that the biorthogonal sequence to \((f_{n})\) is complete in \(\mathcal{H}\), which leads to a contradiction.
_Remark 4.4_.: The sequence \((f_{n})_{n=2}^{\infty}\) in Example 4.2 is minimal and complete in \(\mathcal{H}\), and its biorthogonal sequence is \((e_{n})_{n=2}^{\infty}\), which is not complete in \(\mathcal{H}\). Hence, Proposition 4.3 gives another proof that \((f_{n})_{n=2}^{\infty}\) is not a weighted frame for \(\mathcal{H}\).
Further necessary conditions for the weighted frame property are given in the next proposition:
**Proposition 4.5**.: _Let \((f_{n})\) be a sequence in \(\mathcal{H}\) and \((\omega_{n})\subset(0,+\infty)\) be such that \((\omega_{n}f_{n})\) is a Bessel sequence in \(\mathcal{H}\). Then_
1. \(\sup_{n}(\omega_{n}\|f_{n}\|)<\infty\)_;_
2. _if_ \(\inf_{n}\|f_{n}\|>0\)_, then_ \(\sup_{n}(\omega_{n})<\infty\)_;_
3. _if_ \((f_{n})\) _is not a Bessel sequence, then_ \(\inf_{n}(\omega_{n})=0\)_._
Proof.: Statement 1. comes from a well known result that the elements of a Bessel sequence are norm-bounded from above. Statement 2. follows trivially from 1.
3. Assume that \((f_{n})\) is not a Bessel sequence and \(\inf_{n}(\omega_{n})=c>0\). In this case there exists \(f\in\mathcal{H}\) such that \(\sum_{n}|\langle f,f_{n}\rangle|^{2}=+\infty\), which leads to a contradiction: \[\infty=c^{2}\sum_{n}|\langle f,f_{n}\rangle|^{2}\leq\sum_{n}|\langle f,\omega_ {n}f_{n}\rangle|^{2}<\infty.\qed\]
_Remark 4.6_.: Let us come back again to the sequence \((f_{n})_{n=2}^{\infty}\) from Example 4.2. As it was proved above in different ways, \((f_{n})_{n=2}^{\infty}\) is not a weighted frame for \(\mathcal{H}\). Here, we give another proof by applying Proposition 4.5. Assume that \((\omega_{n}f_{n})_{n=2}^{\infty}\) is a frame for \(\mathcal{H}\) for some \((\omega_{n})\subset(0,+\infty)\). Since \((f_{n})_{n=2}^{\infty}\) is not a Bessel sequence, \(\inf_{n}(\omega_{n})_{n=2}^{\infty}=0\) by Proposition 4.5, and thus there exists a subsequence \((\omega_{n_{k}})\) such that \(\lim\limits_{k\to+\infty}\omega_{n_{k}}=0\). This leads to the following contradiction:
\[1=\|e_{n_{k}}\|^{2}\approx\sum_{n}\omega_{n}^{2}|\langle e_{n_{k}},f_{n} \rangle|^{2}=\omega_{n_{k}}^{2}\to 0\text{ as }k\to\infty.\]
### Sufficient conditions
**Proposition 4.7**.: _If a sequence \((f_{n})\) contains a proper subsequence which is a weighted frame for \(\mathcal{H}\), then \((f_{n})\) is a weighted frame for \(\mathcal{H}\)._
Proof.: Let us assume that \((f_{n})=(f_{n_{k}})_{k\in I}\cup(g_{m})_{m\in J}\) with some (not necessarily infinite) sequence \((g_{m})_{m\in J}\) in \(\mathcal{H}\) such that \((\omega_{k}f_{n_{k}})_{k\in I}\) is a frame for \(\mathcal{H}\) with \((\omega_{k})\subset(0,+\infty)\). Then, by Lemma 3.2, there exists a sequence \((\lambda_{m})_{m\in J}\subset(0,+\infty)\) such that \((\lambda_{m}g_{m})_{m\in J}\) is a Bessel sequence. Hence, \((\omega_{k}f_{n_{k}})_{k\in I}\cup(\lambda_{m}g_{m})_{m\in J}\) is a frame for \(\mathcal{H}\) and \((f_{n})\) is a weighted frame for \(\mathcal{H}\).
We recall that the excess of a sequence \((f_{n})\) in \(\mathcal{H}\) is defined as
\[e((f_{n}))=\sup\{|J|:J\subseteq I\text{ and }\overline{\operatorname{span}}\{f_{n} \}_{n\in I\setminus J}=\overline{\operatorname{span}}\{f_{n}\}_{n\in I}\},\]
and, moreover, a frame has null excess if and only if it is a Riesz basis (see, e.g., [26, Section 8.7])5. The next statement gives a description of the class of weighted frames among all sequences with finite positive excess.
Footnote 5: The excess of frames has also been studied in [3, 27, 28, 4].
**Proposition 4.8**.: _If a sequence in \(\mathcal{H}\) has finite positive excess, then it is a weighted frame for \(\mathcal{H}\) if and only if it contains a proper subsequence which is a weighted Riesz basis for \(\mathcal{H}\)._
Proof.: One of the directions follows from Proposition 4.7. For the other direction, assume that \((f_{n})\) is a weighted frame for \(\mathcal{H}\) with finite positive excess. Then there exists \((\omega_{n})\subset(0,+\infty)\) such that \((\omega_{n}f_{n})\) is a frame for \(\mathcal{H}\). Clearly, \((\omega_{n}f_{n})\) has finite positive excess. Hence, by [26, Lemma 8.42], there exists a proper subsequence of \((\omega_{n}f_{n})\) which is a Riesz basis for \(\mathcal{H}\). Thus, \((f_{n})\) contains a weighted Riesz basis.
### Some characterizations of weighted frames
Here, we consider more general operators than the analysis and synthesis ones. Given a sequence \(\mathcal{F}=(f_{n})\) in \(\mathcal{H}\), we consider the operator \(\mathsf{C}_{\mathcal{F}}:\mathcal{H}\to\mathbb{C}^{\mathbb{N}}\) defined by \(\mathsf{C}_{\mathcal{F}}f=(\langle f,f_{n}\rangle)\), and the operator \(\mathsf{D}_{\mathcal{F}}:\mathcal{D}(\mathsf{D}_{\mathcal{F}})\subseteq \mathbb{C}^{\mathbb{N}}\to\mathcal{H}\) determined by \(\mathsf{D}_{\mathcal{F}}(c_{n})=\sum_{n}c_{n}f_{n}\) for those complex sequences \((c_{n})\) for which \(\sum_{n}c_{n}f_{n}\) converges in \(\mathcal{H}\). Note that \(\mathsf{C}_{\mathcal{F}}\) is a linear operator between vector spaces and it is equal to the analysis operator of \((f_{n})\) if and only if \((f_{n})\) is a Bessel sequence. The following characterizations of the weighted frame property are inspired and based on known characterizations of the frame property [17, Corollary 5.5.3 and Theorem 5.5.1].
**Proposition 4.9**.: _Given a sequence \(\mathcal{F}=(f_{n})\) in \(\mathcal{H}\), the following statements are equivalent._
_1. \((f_{n})\) is a weighted frame for \(\mathcal{H}\)._
_2. There exists \((\omega_{n})\subset(0,+\infty)\) such that \(R(\mathsf{C}_{\mathcal{F}})\) is a Hilbert space with inner product \(\langle\cdot,\cdot\rangle_{\omega^{2}}\) defined by_
\[\langle(c_{n}),(d_{n})\rangle_{\omega^{2}}=\sum_{n}\omega_{n}^{2}c_{n}\overline {d_{n}},\]
_and \(\mathsf{C}_{\mathcal{F}}\) is a bounded and bijective operator between \(\mathcal{H}\) and \(R(\mathsf{C}_{\mathcal{F}})\)._
_3. There exists \((\omega_{n})\subset(0,+\infty)\) such that the operator \(\mathsf{D}_{\mathcal{F}}\) is well-defined on the Hilbert space \(\ell^{2}_{\omega^{-2}}\) determined by_
\[\ell^{2}_{\omega^{-2}}=\left\{(c_{n})\in\mathbb{C}^{\mathbb{N}}:\sum_{n}\frac{ |c_{n}|^{2}}{\omega_{n}^{2}}<\infty\right\},\quad\langle(c_{n}),(d_{n})\rangle _{\omega^{-2}}=\sum_{n}\frac{c_{n}\overline{d_{n}}}{\omega_{n}^{2}},\]
_and \(R(\mathsf{D}_{\mathcal{F}})=\mathcal{H}\)._
### A conjecture about weighted frames and relation to unconditionally convergent multipliers
Proposition 4.1 suggests the following conjecture:
**Conjecture 4.10**.: _A sequence \((f_{n})\) in \(\mathcal{H}\) is a weighted frame for \(\mathcal{H}\) if and only if there is a sequence \((g_{n})\) so that for every \(f\in\mathcal{H}\) one has \(f=\sum_{n}\langle f,f_{n}\rangle g_{n}\) with unconditional convergence._
First note that the representation \(f=\sum_{n}\langle f,f_{n}\rangle g_{n}\) in the above conjecture can equivalently be replaced by \(f=\sum_{n}\langle f,g_{n}\rangle f_{n}\), because validity of \(f=\sum_{n}\langle f,f_{n}\rangle g_{n}\) with unconditional convergence for every \(f\in\mathcal{H}\) is equivalent to validity of \(f=\sum_{n}\langle f,g_{n}\rangle f_{n}\) with unconditional convergence for every \(f\in\mathcal{H}\) (see [38, Lemmas 3.1 and 2.3]).
As it was noticed in Prop. 4.1, the "only if" direction of Conjecture 4.10 holds true, so only the "if" direction is still open. The "if" direction can be related to conjectures and results on invertible unconditionally convergent multipliers in [38]. Let us first recall that a _multiplier_ of two sequences \(\Phi=(\varphi_{n})\) and \(\Psi=(\psi_{n})\) in \(\mathcal{H}\) and a symbol (complex sequence) \(m=(m_{n})\) is the operator \(M_{m,\Phi,\Psi}\) given by
\[M_{m,\Phi,\Psi}f=\sum_{n}m_{n}\langle f,\psi_{n}\rangle\varphi_{n} \tag{7}\]
for those \(f\) from \(\mathcal{H}\) for which the series in (7) converges. Multipliers were extensively studied in [5, 10, 20, 21, 23, 36, 37, 38, 39, 40, 41, 42] and generalized in different ways [7, 19]. Note that a multiplier which is defined on the entire space \(\mathcal{H}\) is automatically bounded on \(\mathcal{H}\)[38]. A multiplier \(M_{m,\Phi,\Psi}\) is said to be _unconditionally convergent_ if the series in (7) is unconditionally convergent for every \(f\in\mathcal{H}\). The multiplier of two Bessel sequences and bounded symbol belongs to \(\mathcal{B}(\mathcal{H})\) and it is unconditionally convergent [5]. In [38], a main topic was a converse question, namely, the question whether an unconditionally convergent multiplier can be written as a multiplier of two Bessel sequences and symbol (1) just by shifting of weights. This question was affirmatively answered for some classes of multipliers [25, 38] and the general case was posed as a conjecture:
**Conjecture (M1)** ([38]).: _Let \(M_{m,\Phi,\Psi}\) be an unconditionally convergent multiplier. Then there exist \((\alpha_{n})\subset\mathbb{C}\) and \((\beta_{n})\subset\mathbb{C}\) such that \((\alpha_{n}\varphi_{n})\) and \((\beta_{n}\psi_{n})\) are Bessel sequences and \(\alpha_{n}\overline{\beta_{n}}=m_{n}\) for all \(n\)._
A question in a similar spirit, related to representation of an invertible multiplier as a multiplier of frames by shifting of weights, was also investigated in [38] and it can be stated as a conjecture too:
**Conjecture (M2)** ([38]).: _Let \(M_{m,\Phi,\Psi}\) be an unconditionally convergent multiplier which is invertible on \(\mathcal{H}\). Then there exist \((\alpha_{n})\subset\mathbb{C}\) and \((\beta_{n})\subset\mathbb{C}\) such that \((\alpha_{n}\varphi_{n})\) and \((\beta_{n}\psi_{n})\) are frames for \(\mathcal{H}\) and \(\alpha_{n}\overline{\beta_{n}}=m_{n}\) for all \(n\)._
The two conjectures (M1) and (M2) are actually related. That validity of (M1) implies validity of (M2) was observed already in [38]. Surprisingly, the converse implication turned out to also be true.
**Theorem 4.11**.: _Conjectures (M1) and (M2) are equivalent._
Proof.: (M1) \(\implies\) (M2) See [38, Section 4] for a proof.
(M2) \(\implies\) (M1) Let \(M_{m,\Phi,\Psi}\) be an unconditionally convergent multiplier and thus a bounded operator on \(\mathcal{H}\) as well. Write \(I-M_{m,\Phi,\Psi}=M_{(1),\mathcal{F},\mathcal{G}}\) for some multiplier \(M_{(1),\mathcal{F},\mathcal{G}}\) with symbol (1) and Bessel sequences \(\mathcal{F}=(f_{n})\) and \(\mathcal{G}=(g_{n})\) (for instance, take \(\mathcal{G}\) to be an orthonormal basis for \(\mathcal{H}\) and
\(f_{n}=(I-M_{m,\Phi,\Psi})g_{n}\) for every \(n\)). Define \(\Xi=(\xi_{n})=(\varphi_{1},f_{1},\varphi_{2},f_{2},\dots)\), \(\Theta=(\theta_{n})=(\psi_{1},g_{1},\psi_{2},g_{2},\dots)\), and \(m^{\prime}=(m^{\prime}_{n})=(m_{1},1,m_{2},1,\dots)\). It is easy to see that \(M_{m^{\prime},\Xi,\Theta}\) is unconditionally convergent and equals to \(I\). Then, if Conjecture (M2) is true, there exist \((\alpha_{n})\subset\mathbb{C}\) and \((\beta_{n})\subset\mathbb{C}\) such that \((\alpha_{n}\xi_{n})\) and \((\beta_{n}\theta_{n})\) are frames for \(\mathcal{H}\) and \(\alpha_{n}\overline{\beta_{n}}=m^{\prime}_{n}\) for every \(n\), so \((\alpha_{2n-1}\varphi_{n})\) and \((\beta_{2n-1}\psi_{n})\) are Bessel sequences and \(\alpha_{2n-1}\overline{\beta_{2n-1}}=m^{\prime}_{2n-1}=m_{n}\) for all \(n\).
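The interleaving used in this proof is constructive, and in finite dimensions it can be carried out verbatim. The following sketch is our own toy computation (random sequences and symbol, with equally many terms in the two multipliers so that the interleaving is straightforward): it builds \(M_{m,\Phi,\Psi}\), writes \(I-M_{m,\Phi,\Psi}\) as a multiplier with symbol \((1)\) over an orthonormal basis, and checks that the interleaved multiplier is the identity.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
n = d                       # same number of terms in both multipliers, to interleave easily
Phi = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
Psi = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
m = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def multiplier(m, Phi, Psi):
    # M_{m,Phi,Psi} f = sum_n m_n <f, psi_n> phi_n, i.e. the matrix Phi diag(m) Psi^*
    return (Phi * m) @ Psi.conj().T

M = multiplier(m, Phi, Psi)

# write I - M as a multiplier with symbol (1): take G an orthonormal basis, f_n = (I - M) g_n
G = np.eye(d, dtype=complex)
F = np.eye(d, dtype=complex) - M

# interleave (phi_1, f_1, phi_2, f_2, ...), (psi_1, g_1, ...), (m_1, 1, m_2, 1, ...)
Xi = np.empty((d, 2 * n), dtype=complex);    Xi[:, 0::2] = Phi;    Xi[:, 1::2] = F
Theta = np.empty((d, 2 * n), dtype=complex); Theta[:, 0::2] = Psi; Theta[:, 1::2] = G
mp = np.empty(2 * n, dtype=complex);         mp[0::2] = m;         mp[1::2] = 1.0

# the interleaved multiplier acts as the identity, as exploited in the proof above
print(np.allclose(multiplier(mp, Xi, Theta), np.eye(d)))   # True
```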
Let us now return to Conjecture 4.10 and its relation to the above conjectures. Using the equivalence of definitions of weighted frames discussed at the beginning of Section 4, it is clear that validity of Conjecture (M2) implies validity of the "if"-direction of Conjecture 4.10 and thus one can state the following:
**Proposition 4.12**.: _Conjecture (M2) implies Conjecture 4.10._
It is still an open question whether the converse holds, i.e., whether validity of Conjecture 4.10 implies validity of Conjecture (M2).
We end the section with some classes of sequences for which Conjecture 4.10 is true.
**Proposition 4.13**.: _Let \((f_{n})\) and \((g_{n})\) be sequences in \(\mathcal{H}\) such that \(\inf_{n}\|f_{n}\|\|g_{n}\|>0\) or \((g_{n})\) is minimal or \((f_{n})\) is minimal. The following statements are equivalent._
_1. For every \(f\in\mathcal{H}\), \(f=\sum\limits_{n}\langle f,f_{n}\rangle g_{n}\) with unconditional convergence._
_2. \((\|g_{n}\|f_{n})\) and \((\|g_{n}\|^{-1}g_{n})\) are dual frames for \(\mathcal{H}\)._
_3. \((\|f_{n}\|g_{n})\) and \((\|f_{n}\|^{-1}f_{n})\) are dual frames for \(\mathcal{H}\)._
Proof.: The implication \(2.\Rightarrow 1.\) (resp. \(3.\Rightarrow 1.\)) follows from well known results in frame theory for any two sequences \((f_{n})\) and \((g_{n})\) with \(g_{n}\neq 0\) for every \(n\) (resp. \((g_{n})\) and \((f_{n})\) with \(f_{n}\neq 0\) for every \(n\)).
For the converse implications, assume that \(1\). holds. If \(\inf_{n}\|f_{n}\|\|g_{n}\|>0\), then [38, Corollary 3.6] implies that \((\|g_{n}\|f_{n})\) and \((\|g_{n}\|^{-1}g_{n})\) are Bessel sequences and then [17, Lemma 6.3.2] completes the proof that \(2\). holds. If \((g_{n})\) is minimal, then the proof of [38, Proposition 4.8] implies that \(\inf_{n}\|f_{n}\|\|g_{n}\|>0\) and thus \(2\). holds by what is already proved. As mentioned at the beginning of Section 4.4, statement \(1\). is equivalent to validity of \(f=\sum_{n}\langle f,g_{n}\rangle f_{n}\) with unconditional convergence for every \(f\in\mathcal{H}\). Thus, \(3\). holds again from what is already proved. Finally, if \((f_{n})\) is minimal, then \(2\). and \(3\). hold with similar arguments.
## Acknowledgments
The first author thanks M. Speckbacher and D. Freeman for related discussions in the course of preparing "Quantitative bounds for unconditional pairs of frames", P. Balazs, D. Freeman, R. Popescu, M. Speckbacher, arXiv:2212.00947, and acknowledges the support by the project P 34624 "Localized, Fusion and Tensors of Frames" (LoFT) of the Austrian Science Fund (FWF). The second author thanks P. Balazs and D. Stoeva for the hospitality at the Acoustics Research Institute in 2019 and acknowledges partial supports by the Universita degli Studi di Palermo (through FFR2023 "Corso") and by the "Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni" (INdAM). The third author acknowledges support from the Austrian Science Fund (FWF) through Project P 35846-N "Challenges in Frame Multiplier Theory". |
2310.09682 | Bound states in the continuum induced via local symmetries in complex
structures | Bound states in the continuum (BICs) defy conventional wisdom that assumes a
spectral separation between propagating waves, that carry energy away, and
spatially localized waves corresponding to discrete frequencies. They can be
described as resonance states with infinite lifetime, i.e., leaky modes with
zero leakage. The advent of metamaterials and nanophotonics allowed the
creation of BICs in a variety of systems. Mainly, BICs have been realized by
destructive interference between outgoing resonant modes or exploiting
engineered global symmetries that enforce the decoupling of a
symmetry-incompatible bound mode from the surrounding radiation modes. Here, we
introduce theoretically BICs relying on a different mechanism, namely local
symmetries that enforce a field concentration on a part of a complex system
without implying any global symmetry. We experimentally implement such BICs
using microwaves in a compact one-dimensional photonic network and show that
they emerge from the annihilation of two topological singularities, a zero and
a pole, of the measured scattering matrix. Our alternative for achieving BICs
in complex wave systems may be useful for applications like sensing, lasing,
and enhancement of nonlinear interactions that require high-$Q$ modes. | Cheng-Zhen Wang, Ulrich Kuhl, Adin Dowling, Holger Schanz, Tsampikos Kottos | 2023-10-14T23:41:06Z | http://arxiv.org/abs/2310.09682v2 | # Bound states in the continuum induced via local symmetries in complex structures
###### Abstract
Bound states in the continuum (BICs) defy conventional wisdom that assumes a spectral separation between propagating waves, that carry energy away, and spatially localized waves corresponding to discrete frequencies. They can be described as resonance states with infinite lifetime, i.e., leaky modes with zero leakage. The advent of metamaterials and nanophotonics allowed the creation of BICs in a variety of systems. Mainly, BICs have been realized by destructive interference between outgoing resonant modes or exploiting engineered global symmetries that enforce the decoupling of a symmetry-incompatible bound mode from the surrounding radiation modes. Here, we introduce theoretically BICs relying on a different mechanism, namely _local_ symmetries that enforce a field concentration on a part of a complex system without implying any global symmetry. We experimentally implement such BICs using microwaves in a _compact_ one-dimensional photonic network and show that they emerge from the annihilation of two topological singularities, a zero and a pole, of the measured scattering matrix. Our alternative for achieving BICs in complex wave systems may be useful for applications like sensing, lasing, and enhancement of nonlinear interactions that require high-\(Q\) modes.
## I Introduction
Quantum mechanics books, typically, distinguish between two type of states: bound states whose discrete energy lies below the continuum threshold (identified by the asymptotic value of the potential at infinity) and unbounded scattering states with an energy inside the continuum. Examples where these two categories of states appear include electrons in the presence of finite potential wells, quantum dots, or atomic potentials. An exception to the quantum _communis intellectus_ are bound states in the continuum (BICs) [1; 2; 3; 4]. These are spatially bounded solutions of the Schrodinger equation, with discrete eigenvalues lying inside the continuum of states that propagate to infinity. They were originally introduced a century ago by von Neumann and Wigner, using an inverse potential engineering approach [5]. The method assumed a square-integrable BIC wavefunction with a spatially decaying envelope and, using this as a starting point, they tailored a suitable 3D potential where this wavefunction is an eigenmode. Such "custom-made" potentials are unrealistic as they are oscillatory in space while decaying to infinity as a power law and they have never been implemented experimentally.
BICs are not a quantum phenomenon but rather pertain to all wave systems. This observation extended the search for BICs to a variety of other platforms including electromagnetic, acoustic, water or elastic waves (for a review see [1; 4]). Among the various areas, optics and photonics have undoubtedly been the tip of the spear as far as novel realizations of BICs are concerned [1; 2; 3; 6; 7; 8; 9; 10; 11]. For example, an inverse design scheme based on supersymmetric transformations has been implemented in coupled optical waveguide arrays to engineer the hopping rate between nearest resonators in order to support BICs [12].
Other, more efficient, BIC schemes have been also developed and experimentally implemented thanks to the advent of metamaterials and nanotechnology [1; 2; 3]. Perhaps the most prominent approach that also highlights the wave nature of the phenomenon, is associated with "accidental" BICs [13; 14]. In this case, the parameters of the target system are fine-tuned to achieve cancellation of outgoing waves to the continuum. A special case of accidental BIC is associated with Fabry-Perot (FP) configurations [15; 8; 16] of two identical resonances which interact strongly through the same radiation channel (e.g. a waveguide). A third mechanism that leads to BICs is invoking structures with geometric symmetries [1; 4; 17]. In this case, a trapped mode with a given symmetry can be embedded into a continuum of states with a distinct symmetry, ensuring the decoupling of the trapped mode and thus the suppression of its leakage.
Most of the above BIC proposals have been realized in extended structures. In fact, there is a non-existence theorem for BICs in compact structures [1] (see, however, Refs [18; 19] for exceptions). Finding a genuine BIC in compact systems is a challenging fundamental problem, and its solution would allow the implementation of high-\(Q\) resonators having a broad range of potential applications including lasers, sensors, filters and low-loss fibers [1; 4].
Here we provide an alternative approach for the implementation of BICs which is based on _local_ symmetries (see Fig. 1a). As opposed to the symmetry-protected BICs, we study structures lacking geometric symmetries. The proposed mechanism relies on the existence of an embedded symmetric subdomain and reveals another backdoor in the non-existence theorem for BICs in compact structures. In contrast to accidental BICs [1; 4], our construction relies on a stringent blueprint and allows us to predict analytically and control the position of BICs in the spectrum. Moreover, it leads to an infinite ladder of BIC states which occur periodically in \(k-\)space. We experimentally demonstrate the new scheme using a complex network of coaxial microwave cables coupled together via T-junctions, see Figs. 1b,c. The analysis of the experimental scattering matrix demonstrates that the formation of these BICs originates from the coalescence of two topological defects with opposite charges: a zero (with charge \(+1\)) and a pole (with charge \(-1\)) of the scattering matrix. Experimental implementations of such networks have been reported in a variety of frameworks including acoustics [20], microwaves [21; 22; 23; 24; 25; 26], photonic crystal waveguides [27] and optics [28; 29].
## Physical mechanism for local-symmetry protection of BICs
Let us explain the _local-symmetry_ protection mechanism using complex photonic networks as an example (Fig. 1a): the subdomain (subnet) that possesses a local symmetry is formed by a closed loop (ring) of \(N_{l}\) equilateral edges such that mirror symmetry (with respect to the vertices defining the subnet) and discrete rotation invariance is guaranteed (see, for example, the violet pentagon in Fig. 1a with \(N_{l}=5\) or the violet triangle in Fig. 1b with \(N_{l}=3\) having a discrete rotational symmetry \(C_{5v},C_{3v}\), respectively). Note that in the case of a network that is formed by photonic waveguides, the geometric shape of the edges and the angles between them are typically irrelevant, see Fig. 1c. In this case the discrete rotation must not be interpreted in physical space. For example, in case of Fig. 1b, an angle is defined as \(2\pi x/3\ell\) where \(x\) is a continuous path length along the cycle and \(x=0\) coincides with a vertex. Then the subnet that supports a BIC is invariant under the transformations \(x\to x+\ell\) and \(x\to-x\), generating the symmetry group \(C_{3v}\).
Consider the ring first without any connections to the remaining network. Then, choosing the appropriate symmetry class, there are rotation invariant eigenfunctions which are antisymmetric with respect to the vertices. That is, we have eigenfunctions vanishing at all vertices of the subnet. Thus, when the remaining network is coupled to some or all vertices of the subnet via coupling constants (not necessarily equal at all vertices), these eigenstates remain unaltered. In particular, if the remaining network is open and has a continuous spectrum, the constructed eigenfunctions will be BICs. This conclusion applies only to the subset of eigenstates pertaining to the appropriate symmetry class. All other states of the subnet will be strongly mixed with the states of the remaining network and in the large network limit the overwhelming majority of states will be ergodically distributed over the whole network, as expected for typical systems with wave chaos [30].
Note that the term _local_ symmetry does not imply a subnet with a small total bond-length; rather it refers to the fact that it involves a subset of connected edges. In particular, the BIC might be supported by a subnet that connects distant parts of the total network either by involving a few long edges or many short ones.
While pure BICs are completely decoupled and do not contribute to transport across the network, any small perturbation of the subnet will create quasi-BICs which act as channels across the network and thus strongly affect its transport properties. The above BIC-mechanism is independent of specific properties of the network like the precise boundary conditions at the vertices or their valency (= the number of connected edges). Most importantly, it does not require any geometric symmetry of the network as a whole.

Figure 1: **Experimental setup of a microwave graph detecting bound states in the continuum (BICs) based on local symmetries.** (a) Schematic representation of a complex network supporting a BIC due to the presence of a subnetwork (cycle) with a local symmetry (red-shaded equilateral pentagon). (b) Schematic representation of the network used in our experiment. A ”tetrahedron” contains an equilateral triangle where one side length can be varied. The network is opened by two attached scattering channels. (c) The microwave tetrahedron network used in our experiments. Coaxial cables are connected by T-junctions at each of the vertices \(n=1,2,\ldots,6\). Between vertices 5 and 6, we replace the cable with a phase shifter allowing us to vary the effective length between these vertices. The reflection amplitudes \(r_{11}\), \(r_{22}\) and the transmission amplitudes \(t_{12}\), \(t_{21}\) are measured using a vector network analyzer (VNA) connected to the vertices 1 and 2, respectively.
## I Transport in complex networks and BIC formation
Scattering on Complex Networks --We study the scattering on a complex network of \(n=1,\cdots,N\) vertices, where two vertices \(n,m\) may be connected by an edge \(E=(n,m)\) with length \(l_{E}\). The position \(x_{E}=x\) on edge \(E\) is \(x=0(l_{E})\) on vertex \(n(m)\). The wave \(\psi_{E}(x)\) on the edge \(E\) satisfies the Helmholtz equation
\[\frac{d^{2}}{dx^{2}}\psi_{E}(x)+k^{2}\psi_{E}(x)=0\,, \tag{1}\]
where \(k=\omega n_{r}/c_{0}\) is the wave number, \(\omega\) is the angular frequency, \(c_{0}\) is the speed of light, and \(n_{r}\) is the complex-valued relative index of refraction that includes the losses of the coaxial cables. The solution of Eq. (1) is \(\psi_{E}(x)=\phi_{n}\frac{\sin k(l_{E}-x)}{\sin kl_{E}}+\phi_{m}\frac{\sin kx}{\sin kl_{E}}\), where \(\psi_{E}(0)=\phi_{n}\) and \(\psi_{E}(l_{E})=\phi_{m}\) are the values of the field at the vertices. We turn the compact network into a scattering set-up by attaching transmission lines (TLs) \(\alpha=1,\cdots,L\) to a subset of the vertices. The field on the \(\alpha\)th TL takes the form \(\psi_{\alpha}(x)=\mathcal{I}_{\alpha}e^{-ikx}+\mathcal{O}_{\alpha}e^{+ikx}\) for \(x\geq 0\) where \(x=0\) is the position of the vertex. The coefficients \(\mathcal{I}_{\alpha}(\mathcal{O}_{\alpha})\) indicate the amplitude of the incoming (outgoing) wave on the TLs. At each vertex \(n\), continuity of the wave and current conservation are satisfied. These conditions can be expressed in a compact form as [31]
\[(M+iW^{T}W)\Phi=2iW^{T}\mathcal{I}\,, \tag{2}\]
where \(\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{N})^{T}\). The \(L\)-dimensional vector \(\mathcal{I}\) contains the amplitudes \(\mathcal{I}_{\alpha}\) of the incident field, while \(W\) is an \(L\times N\) matrix describing the connection between the TLs and the vertices. A matrix element \(W_{\alpha,n}\) is \(1\) if the \(\alpha\)th TL is attached to vertex \(n\) and zero otherwise. The \(N\times N\) matrix \(M\)
\[M_{nm}=\left\{\begin{array}{ll}-\sum_{l\neq n}\mathcal{A}_{nl}\cot kl_{nl},&\mbox{if $n=m$}\\ \mathcal{A}_{nm}\csc kl_{nm},&\mbox{if $n\neq m$}\end{array}\right. \tag{3}\]
incorporates information about the metric (length of edges) and the connectivity of the network, where \(\mathcal{A}\) is the adjacency matrix having elements \(\mathcal{A}_{nm}=1\) whenever two vertices \(m,n\) are connected and \(\mathcal{A}_{nm}=0\) otherwise.
The scattering field \(\Phi\) on the compact part of the graph can be evaluated by solving Eq. (2) for \(\Phi\). The same expression, together with the continuity condition at the vertices, where the TLs are attached, can be used for deriving the scattering matrix [31]
\[S(k)=-\hat{I}+2iW\frac{1}{M(k)+iW^{T}W}W^{T}\,. \tag{4}\]
For wavenumbers which are integer multiples of \(\pi/l_{nl}\) the terms \(M_{nm}\) diverge; this can be rectified by appropriate manipulation of the divergent terms, see [32].
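For concreteness, the construction in Eqs. (2)-(4) can be evaluated numerically. The following minimal sketch (Python with NumPy) assumes an idealized lossless network with real bond lengths; the small triangle example, its bond lengths, and the lead placement are illustrative placeholders rather than the experimental network, and wavenumbers at the divergences of \(M\) mentioned above should be avoided.

```python
import numpy as np

def build_M(k, lengths, N):
    """Eq. (3): M_nm = A_nm * csc(k l_nm) off-diagonal,
    M_nn = -sum over neighbors l of cot(k l_nl)."""
    M = np.zeros((N, N), dtype=complex)
    for (n, m), l in lengths.items():
        M[n, m] += 1.0 / np.sin(k * l)   # csc(k l_nm)
        M[m, n] += 1.0 / np.sin(k * l)
        M[n, n] -= 1.0 / np.tan(k * l)   # -cot(k l_nl)
        M[m, m] -= 1.0 / np.tan(k * l)
    return M

def build_W(leads, N):
    """W_{alpha,n} = 1 if transmission line alpha is attached to vertex n."""
    W = np.zeros((len(leads), N))
    for alpha, n in enumerate(leads):
        W[alpha, n] = 1.0
    return W

def scattering_matrix(k, lengths, leads, N):
    """Eq. (4): S(k) = -I + 2i W (M + i W^T W)^(-1) W^T."""
    M, W = build_M(k, lengths, N), build_W(leads, N)
    inner = np.linalg.inv(M + 1j * (W.T @ W))
    return -np.eye(len(leads)) + 2j * W @ inner @ W.T

# Illustrative placeholder network: a triangle of unit-length bonds,
# with transmission lines attached at vertices 0 and 1.
lengths = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
S = scattering_matrix(k=2.3, lengths=lengths, leads=[0, 1], N=3)
print(np.abs(S[1, 0]) ** 2)   # transmittance between the two attached leads
print(S.conj().T @ S)         # ~ identity at real k for a lossless network
```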
The poles of the scattering matrix in the complex \(k\)-plane are related to resonances, i.e., purely outgoing solutions of the wave equation, and they are found from the condition \(\det(M(k_{p})+iW^{T}W)=0\). Of interest are also the zeroes of the scattering matrix defined via the secular equation \(\det S(k_{z})=0\). For lossless structures, causality implies \(k_{z}=k_{p}^{*}\). The zeroes \(S(k)=0\) correspond to a special type of wavefronts with time-modulated amplitude, known as coherent virtual absorption, which are temporarily trapped inside the structure without any leakage [33; 34]. When Ohmic losses are also included, one can find parameters of the structure for which the complex zeroes cross the real axis. In this case, there are stationary, perfectly impedance-matched input wavefronts which are completely absorbed by the (weakly) lossy elements in the structure, which acts as an interferometric trap, known as a coherent perfect absorber [35]. The exceptional case where the poles are equal to the zeroes of the \(S\)-matrix corresponds to BIC states which contain neither an incoming nor an outgoing radiation component and exist at a real frequency of passive structures. Thus a BIC is invariant under time reversal and it does not affect the on-shell scattering matrix \(S(k)\) since it is decoupled from the far-field radiation. In the topological-defect picture in the complex \(k\)-plane, a BIC implies that a charge \(+1\) (\(S\)-matrix zero) annihilates with a charge \(-1\) (resonance) on the real axis. We have confirmed this topological feature experimentally (see below).
## II BIC and quasi-BIC states

We now identify the conditions that are required for the realization of locally symmetric BIC states on a complex network. To this end, we consider a subgraph consisting of \(C\) edges that form a closed loop within the network (for example, in Fig. 1a, \(C=5\) for the violet pentagon). We assume that they have equal lengths \(\ell\) and construct a state \(\psi_{c}(x)=\sqrt{2/(C\ell)}\sin(k_{M}x)\) which is restricted to this cycle. This requires \(k_{M}=M\pi/\ell\) and \(k_{M}=2M\pi/\ell\) (\(M=1,2\dots\)) for even and odd \(C\), respectively. The above wave function is zero at the vertices along the subgraph. Therefore the state is not affected by further edges attached to these vertices which couple the subgraph to the surrounding network and to the continuum. This argument is completely independent of the topology of the rest of the network, the number of extra edges that are attached to the vertices of the cycle, and the number of attached TLs. Further considerations reveal that it is possible to relax the assumption of equal edge lengths \(\ell\) on the subgraph to rationally related lengths. Importantly, the above construction is just a special case of the symmetry argument given in the introduction which may apply also in different situations.
In an experiment, the BICs are manifested as long-lived quasi-bound (resonant) states that disappear and reappear in a characteristic manner when the edge lengths along the subgraph are changed by a small
amount \(\delta\). The values of the wave field at the connections to the remaining network and the TLs are now \(\psi_{c}(0)\sim k\delta\). Due to this coupling, a BIC gives rise to a Feshbach resonance with width \(\gamma\sim|\delta|^{2}\). The latter reveals itself as a sharp feature in the transmittance \(T(f)\) and left (right) reflectance \(R_{1}(f)\) (\(R_{2}(f)\)), where \(R_{1}=R_{2}\) if there are no losses inside the network. This feature is absent when \(\delta=0\) and this is an indirect, but experimentally observable, signature of a BIC in scattering data.
## III Experimental implementation
We consider the tetrahedron network shown in Figs. 1b,c. This network is relatively simple and it does not have any geometrical symmetries. At the same time, the dynamics is rich enough to show typical features of wave chaos [30; 31; 36; 37]. Its microwave implementation is done using coaxial cables (Huber+Suhner S 04272) connected by 6 T-junctions (vertices). The electrical permittivity of the cables was found to be \(\epsilon\approx 1.56(\pm 0.07)+i0.0015(\pm 0.0005)\) indicating the presence of uniform losses. Two TLs have been attached at the vertices labeled 1 and 2, see Figs. 1b,c.
We choose the triangle consisting of the vertices 4, 5, and 6 as the closed loop that supports BICs. The electrical lengths \(l_{45}=l_{46}=\ell=346.6\,\mathrm{mm}\) are fixed while the length \(l_{56}\) of the third edge varies such that \(l_{56}=\ell+\delta\) with \(\delta\in[-14,30]\,\mathrm{mm}\) (see Supplementary Material). The length-variation is done using a phase shifter whose effective length is controlled electronically. Following the theoretical arguments of the previous section we expect that BICs will appear at \(\delta=0\), for wavenumbers \(k_{M}=2M\pi/\ell\), \(M\in\mathbb{N}^{+}\) corresponding to frequencies \(f_{M}=M\frac{c_{0}}{\ell}=M\times 0.86547\,\mathrm{GHz}\). The other lengths of the network are \(l_{13}=l_{25}=18.2\,\mathrm{mm}\), \(l_{23}=424.2\,\mathrm{mm}\), \(l_{16}=926.5\,\mathrm{mm}\), and \(l_{34}=831.4\,\mathrm{mm}\).
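As a quick arithmetic check of the expected BIC ladder \(f_{M}=Mc_{0}/\ell\), the quoted electrical length can be plugged in directly; small deviations from the stated \(0.86547\,\mathrm{GHz}\) step reflect rounding of the quoted length.

```python
c0 = 299_792_458.0            # speed of light in vacuum, m/s
ell = 346.6e-3                # quoted electrical length of the triangle bonds, m
for M in range(1, 6):
    print(f"M = {M}: f_M = M*c0/ell = {M * c0 / ell / 1e9:.3f} GHz")
# M = 2, 4, 5 give roughly 1.73, 3.46 and 4.33 GHz, the frequency windows
# examined in Figs. 2 and 3.
```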
In Figs. 2a1, b1, and c1, we report the measured transmittance \(T\) and reflectances \(R_{1}\) and \(R_{2}\) versus the input frequency and the tuning parameter \(\delta\) in the proximity of the 2nd BIC frequency (\(2\times 0.86547\,\mathrm{GHz}\approx 1.73\,\mathrm{GHz}\)) indicated by the arrow. The second (third) column reports
Figure 2: **The transmittance and reflectance spectrum versus the phase shifter length variation \(\delta\) and frequency \(f\).** (a1) The experimental measurements of the transmittance through the graph (input from lead 1, transmittance is the same as that from lead 2 due to reciprocity) versus the input frequency and phase-shifter length variation \(\delta\). (a2) and (a3) the same as in (a1) but for the theoretical modeling with adjusted loss and without loss, respectively. (b1)-(b3) The reflectance from lead 1 versus the input frequency and phase-shifter variation \(\delta\) for experimental measurement, theoretical modeling with loss, and theoretical modeling without loss, respectively. (c1)-(c3) The same plot as in (b1)-(b3) for reflectance from lead 2. (d)-(f) The cross-section plot of transmittance \(T\) for \(\delta=6\), 0, and \(-6\,\mathrm{mm}\) respectively. The blue solid, red dotted and black solid lines are for the experimental measurement, theoretical modeling with fitted loss and theoretical modeling without loss, respectively.
our calculations for the network of Fig. 1b,c in the presence (absence) of Ohmic losses at the coaxial cables. In the frequency range \(f\in[1.6,1.8]\,\)GHz, a BIC is predicted at \(f_{2}\approx 1.73\,\)GHz. In all cases we find a resonance moving linearly from around \(1.7\,\)GHz at \(\delta=15\,\)mm to \(1.76\,\)GHz at \(\delta=-10\,\)mm. At \(\delta=0\,\)mm a BIC is formed and the resonance feature disappears from all three spectra (\(T(f)\) and \(R_{1,2}(f)\)). This is the expected indirect signature of the BIC: since the state is completely decoupled from the rest of the network and the TLs, any incident radiation cannot excite it and therefore there are no signatures of its existence in the scattering matrix elements. Instead, for small \(\delta\neq 0\), the transmittance \(T(f\sim f_{M})\) and reflectance \(R(f\sim f_{M})\) show a narrow resonance structure in their frequency-dependence whose width is controlled by the parameter \(\delta\). All these features of the transmittance and reflectance spectra are present in our measurements and in both sets of calculations (with and without losses). The presence of losses, however, smooths out some of the sharp characteristics of the (quasi-)BIC resonance (first and second columns) which are much more pronounced in the calculations shown in the third column where we have considered the same network without any losses of the coaxial cables.
To further investigate the variations of the resonance features in \(T(f)\) as the tuning parameter changes around \(\delta=0\) we plot in the right column of Fig. 2, the transmittance spectrum for three different \(\delta\) values around the BIC value. For \(\delta=6\,\)mm (Fig. 2d) a narrow resonance dip (indicated by the red arrow) is evident in both, measurements (blue solid line) and calculations (red dotted line), where the Ohmic losses of the cables are taken into account. This dip becomes very sharp in the case of a lossless network modeling (black solid line). When \(\delta=0\,\)mm, (Fig. 2e), the resonance dip disappears in all cases and reappears again when \(\delta=-6\,\)mm, (Fig. 2f), after acquiring a small blue-shift.
We have also confirmed experimentally the appearance of a cascade of bound states at predetermined \(k\)-values which are multiples of a "fundamental" BIC frequency; this distinguishes our approach from accidental BICs. To demonstrate this feature, we report in Fig. 3 the transmittance versus frequency and \(\delta\) in the frequency regions where we expect BICs to occur, i.e., \(f=3.35-3.55\,\)GHz and \(f=4.24-4.44\,\)GHz. The same behavior as the one found for the BIC presented in Fig. 2 is observed here as well. Namely, we observe the appearance and disappearance of a quasi-BIC mode as \(\delta\) varies, signifying the formation of BICs at \(\delta=0\).
Figure 3: **Transmittance spectra at frequencies that are multiples of the fundamental BIC frequency.** (a1)-(a3) The transmittance as a function of input frequency and bond variation \(\delta\) for the experimental measurement (a1), the theoretical modeling with loss (a2) and without loss (a3) for the 4th BIC state at \(4\times 0.86547\,\)GHz \(\approx 3.46\,\)GHz. (b1)-(b3) The same as in (a1)-(a3) but for the 5th BIC state at \(4.33\,\)GHz.
## IV Poles, zeros, and the topological structure of BICs
Finally, we analyze the formation of BICs from a topological perspective [38] by following the parametric evolution of the poles and zeros in the complex frequency plane as \(\delta\) is changing. In Fig. 4a the symbols represent the poles (red crosses) and zeros (blue-filled circles) of the \(S\)-matrix that have been extracted from the measured \(S\)-matrix using the harmonic inversion method [39]. The method was applied to the off-diagonal elements of the \(S\)-matrix for the poles and to the matrix \(S^{-1}\) for the zeroes. Our data indicate their coalescence at \(\delta=0\) and at the BIC-frequency \(f_{BIC}\) as discussed above. The solid lines correspond to the simulations. The formation of the BIC in the complex \(\vec{k}=(\text{Re}(k);\text{Im}(k))\) plane is a consequence of the annihilation of two topological defects of the secular function \(\det S(\vec{k})\)[38]: a zero \(\vec{k}_{z}\) and a pole \(\vec{k}_{p}\) of the scattering matrix which collide as the perturbation parameter \(\delta\) vanishes (see Fig. 4a). The zero (pole) is characterized by the topological charge \(q\equiv\frac{1}{2\pi}\oint d\vec{k}\cdot\nabla_{k}\phi(\vec{k})=+1\) (\(q=-1\)) which describes how many times the total phase of the scattering matrix \(\phi(k)=\arg(\det S(k))\) winds by \(2\pi\) along a counterclockwise simple closed path that encloses the topological defect (see Fig. 4b). A non-zero charge \(q\neq 0\) indicates that a zero or a pole cannot suddenly disappear when the parameter \(\delta\) slightly varies, although they can move in the complex \(\vec{k}\)-plane. Only a collision of a \(+1\) charge with a \(-1\) charge can result in their mutual annihilation, which signifies that at this parameter value \(\delta\) a resonance mode and a zero mode coexist.
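The winding-number definition of the topological charge can also be evaluated directly from a modeled scattering matrix. The sketch below reuses the illustrative scattering_matrix function given earlier and is only schematic: it assumes the contour avoids the divergences of \(M\), and the network parameters are again placeholders.

```python
import numpy as np

def winding_number(k_center, half_width, lengths, leads, N, n_pts=400):
    """Topological charge q: net winding of arg(det S(k)) / 2*pi along a
    counterclockwise rectangular contour around k_center in the complex k-plane."""
    t = np.linspace(0.0, 1.0, n_pts, endpoint=False)
    corners = k_center + half_width * np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
    contour = np.concatenate(
        [corners[i] + (corners[(i + 1) % 4] - corners[i]) * t for i in range(4)]
    )
    phases = np.array([
        np.angle(np.linalg.det(scattering_matrix(k, lengths, leads, N)))
        for k in contour
    ])
    dphi = np.angle(np.exp(1j * np.diff(phases, append=phases[0])))  # wrapped phase increments
    return np.sum(dphi) / (2 * np.pi)

# q ~ +1 around an isolated zero of det S, q ~ -1 around a pole (resonance);
# as the perturbation vanishes the two defects merge and the enclosed charge cancels.
```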
## V Conclusion
We have revealed a physical mechanism based on local symmetries that leads to the creation of BICs in compact photonic networks without any geometrical symmetry. The resulting BICs are formed as a consequence of the collision of two topological defects (a pole and a zero of the scattering matrix) which leads to their annihilation. The proposed BICs differ from scenarios where BICs are formed due to the presence of a global symmetry or where they are the "accidental" result of a parameter variation. In particular, they do not require the degeneracy of two (or more) resonant modes and they are based on a precise rule that allows one to construct and control a ladder of BIC states at multiples of a fundamental frequency. Our microwave demonstration can be extended to other wave platforms ranging from optics to acoustics and water waves and provides a new way of achieving and utilizing BICs in complex systems.
|
2308.13490 | TpuGraphs: A Performance Prediction Dataset on Large Tensor
Computational Graphs | Precise hardware performance models play a crucial role in code
optimizations. They can assist compilers in making heuristic decisions or aid
autotuners in identifying the optimal configuration for a given program. For
example, the autotuner for XLA, a machine learning compiler, discovered 10-20%
speedup on state-of-the-art models serving substantial production traffic at
Google. Although there exist a few datasets for program performance prediction,
they target small sub-programs such as basic blocks or kernels. This paper
introduces TpuGraphs, a performance prediction dataset on full tensor programs,
represented as computational graphs, running on Tensor Processing Units (TPUs).
Each graph in the dataset represents the main computation of a machine learning
workload, e.g., a training epoch or an inference step. Each data sample
contains a computational graph, a compilation configuration, and the execution
time of the graph when compiled with the configuration. The graphs in the
dataset are collected from open-source machine learning programs, featuring
popular model architectures, e.g., ResNet, EfficientNet, Mask R-CNN, and
Transformer. TpuGraphs provides 25x more graphs than the largest graph property
prediction dataset (with comparable graph sizes), and 770x larger graphs on
average compared to existing performance prediction datasets on machine
learning programs. This graph-level prediction task on large graphs introduces
new challenges in learning, ranging from scalability, training efficiency, to
model quality. | Phitchaya Mangpo Phothilimthana, Sami Abu-El-Haija, Kaidi Cao, Bahare Fatemi, Mike Burrows, Charith Mendis, Bryan Perozzi | 2023-08-25T17:04:35Z | http://arxiv.org/abs/2308.13490v3 | # TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs
###### Abstract
Precise hardware performance models play a crucial role in code optimizations. They can assist compilers in making heuristic decisions or aid autotuners in identifying the optimal configuration for a given program. For example, the autotuner for XLA, a machine learning compiler, discovered 10-20% speedup on state-of-the-art models serving substantial production traffic at Google. Although there exist a few datasets for program performance prediction, they target small sub-programs such as basic blocks or kernels. This paper introduces _TpuGraphs_, a performance prediction dataset on full tensor programs, represented as computational graphs, running on Tensor Processing Units (TPUs). Each graph in the dataset represents the main computation of a machine learning workload, _e.g._, a training epoch or an inference step. Each data sample contains a computational graph, a compilation configuration, and the execution time of the graph when compiled with the configuration. The graphs in the dataset are collected from open-source machine learning programs, featuring popular model architectures, _e.g._, ResNet, EfficientNet, Mask R-CNN, and Transformer. _TpuGraphs_ provides 25x more graphs than the largest graph property prediction dataset (with comparable graph sizes), and 770x larger graphs on average compared to existing performance prediction datasets on machine learning programs. This graph-level prediction task on large graphs introduces new challenges in learning, ranging from scalability, training efficiency, to model quality.
## 1 Introduction
Compilers often use performance models to solve optimization problems [28; 48], as collecting performance measurements from real hardware can be expensive, limited, or infeasible. A performance model can also be used by a compiler autotuner to evaluate candidate configurations in a search space [2; 14; 37; 53; 55]. However, developing an accurate analytical model of program performance on a modern processor is challenging and time-consuming because the underlying processor architecture, the compiler, and their interactions are complex and difficult to model analytically.
Many recent methods [2; 14; 41; 51; 66; 65; 5; 79; 45; 3; 24; 37] apply machine learning (ML) to learn performance prediction models. However, there exist only a few datasets for program performance prediction, and they all target small sub-programs. BHive [15] targets small basic blocks of assembly instructions. TenSet [81] targets ML kernels consisting of a small number of tensor operations. The database-query dataset [31] contains larger query programs, but they are still relatively small, most with fewer than 100 nodes.
Unlike prior datasets, TpuGraphs is a performance prediction dataset on full tensor programs, represented as computational graphs. Each graph represents the main computation of an ML program, which is usually one or many training steps or one inference step. The graphs in the dataset are collected from open-source ML programs, featuring popular models (_e.g._, ResNet, EfficientNet, Mask R-CNN, and a large variety of Transformers) for a wide range of tasks, _e.g._, vision, NLP, speech, audio, recommendation, and generative AI. Each data sample contains a computational graph, a compilation configuration, and the execution time of the graph when compiled with the given configuration on a Tensor Processing Unit (TPU) v3 [39], an accelerator for ML workloads. A compilation configuration controls how the XLA compiler [70] transforms the graph for a specific optimization pass. In particular, the TpuGraphs dataset consists of two collections: (i) _layout_ and (ii) _tile_. _Layout_ configurations control how tensors are laid out in the physical memory, by specifying the dimension order of each input and output of an operation node. A _tile_ configuration controls the tile size of each fused subgraph. We primarily focus on layout and tile configurations because tuning them offers the highest performance gain on average, compared to tuning other compiler optimizations.
The layout collection contains 31 million pairs of graphs and configurations, averaging over 7,700 nodes per graph. The tile collection contains 13 million pairs of kernels and configurations, averaging 40 nodes per kernel subgraph. The layout collection is unique among existing graph datasets, in that it provides data for graph-level predictions on very large graphs. In contrast, most of the existing graph datasets fall into two categories: graph-level prediction on small graphs [11; 72; 32; 4; 32; 14; 81; 67; 83], and node-level or edge-level prediction on large graphs [29; 12; 34; 77; 86; 9; 49]. TpuGraphs provides 25x more graphs than MalNet [27] -- the largest graph property prediction dataset with comparable graph sizes -- and 770x larger graphs on average compared to TenSet [80] -- the only existing large-scale ML program performance dataset -- as depicted in Figure 1. The scale of TpuGraphs poses several new research challenges:
* How to train a neural network model that can perform graph-level predictions when the memory required to train the model on a single graph may not fit on a single device?
* How to make a model generalize well to unseen graphs when they are diverse, and the training data may be imbalanced?
* How to improve the efficiency of a training pipeline when multiple data points contain a large amount of redundant data (same core graph but different graph configurations)?
We provide baseline models based on a Graph Neural Network (GNN) [11], following the techniques from the most recent works on TPU learned cost models [41; 10]. We verify that the previously proposed techniques -- such as training on graph segments and using pairwise ranking loss -- are also effective on our dataset. The dataset and accompanying code can be found at [https://github.com/google-research-datasets/tpu_graphs](https://github.com/google-research-datasets/tpu_graphs).
## 2 Background & Challenges
ML compilers solve multiple optimization problems to translate an ML program, typically represented as a tensor computation graph, to an efficient executable for a hardware target. Recent works have demonstrated that search-based _autotuning_ techniques can be used to generate code with close to optimal performance [13; 82; 2; 3; 71; 36; 65; 55]. However, autotuning requires a relatively large amount of resources to find quality candidates compared to traditional heuristics-based compilers. Therefore, many methods develop a learned cost model to accelerate autotuning [14; 41; 65; 79; 45; 3].
Figure 1: Scale of TpuGraphs compared to other graph property prediction datasets.
### XLA and Autotuner
XLA [70] is a production-grade heuristics-based compiler for ML programs, capable of generating code for various hardware targets, including CPUs, GPUs, and notably TPUs [38; 39]. Figure 2 depicts important optimizations that are featured in XLA and most ML compilers. Graph-level optimizations require the context of the entire program graph to make good decisions, while kernel-level optimizations can be done independently within each kernel. A tensor computation graph is represented as High Level Operations (HLO) in XLA. Each optimization pass transforms an HLO graph into a functionally-equivalent one. The output of the graph-level optimizations is a collection of kernels (represented as fused subgraphs). XLA has an accompanying autotuner [55] that can tune both graph-level and kernel-level configurations for TPUs, unlike most search-based compilers [13; 14; 79; 82; 2; 3; 71; 45; 3], which focus on kernel-level optimizations.
Kernel-Level Optimizations. Each node in a tensor computation graph represents a tensor operation, such as matrix multiplication, convolution, element-wise addition, _etc_. A kernel, represented as a fused subgraph, is then a fusion of multiple tensor operations. For example, Convolution-BatchNorm is a common fused kernel that appears in Convolutional Neural Networks. The most important optimization at the kernel level is tile size selection: selecting the shape of a tile of the output tensor to maximize compute efficiency of the hardware, while the required regions of input, output, and intermediate data fit in the local cache or scratchpad memory. The XLA tile size autotuner has been deployed in production to optimize the most heavily executed kernels on the Google TPU fleet on a daily basis, saving approximately 2% of the total TPU compute time overall [54]. The learned cost model based on a Graph Neural Network (GNN) is used to select the top K most promising tile sizes to execute on real hardware [41], reducing the autotuning search time by approximately 20x.
Graph-Level Optimizations. At the graph level, the XLA autotuner supports tuning layout assignment, fusion, and memory space assignment passes, as well as compiler flags that control multiple optimization passes. The XLA graph-level autotuner has delivered 10-20% speedup on state-of-the-art models serving substantial production traffic at Google. However, it often takes at least a few hours for the autotuner to converge when tuning one optimization pass of a single graph, and much longer for larger computation graphs. Therefore, a learned cost model would significantly reduce the search time. This motivates us to release the dataset collected from the autotuning process to advance research in developing learned performance prediction models, by addressing challenges outlined in Section 2.2, and ultimately accelerate the autotuning process for production ML workloads.
We focus on layout tuning because it offers the most speedup in general. The layout assignment pass chooses the physical layouts of the input and output tensors of each node that satisfy constraints, while minimizing the program's execution time. A layout determines the order of (minor-to-major) tensor dimensions. Figure 3 displays valid input layouts in blue and the chosen layout in red. If an edge connects an output to an input with a different layout, the compiler inserts a copy (transpose) operator to convert the layout. In Figure 3 (left), a layout of \(\{1,0,2\}\) is assigned to the output of add but \(\{0,1,2\}\) to the first input of conv, causing a layout mismatch, unless a copy operator is inserted. The compiler must trade off between selecting the best layouts for each specific operator and the overhead of copy operators. The autotuner tunes the input-output layouts of the most layout-performance-critical nodes
Figure 2: Important optimizations in ML compilers include graph-level and kernel-level optimizations. A graph-level optimization requires the context of the entire graph to make optimal decisions and transforms the entire graph accordingly. A kernel-level optimization transforms each kernel (a fused subgraph) at a time, independently of other kernels.
(_i.e._, convolution and reshape because they are common operations and have the most constrained implementations for TPUs), and propagates layouts from these nodes to others. The autotuner picks one input-output layout combination from the valid options for each configurable node.
### Learning Challenges
TpuGraphs is non-trivial because training a neural network model to make an accurate prediction on a large graph comes with multiple challenges, as follows.
Scalability. Existing efforts to scale GNNs have mostly focused on node-level and edge-level prediction using sampled subgraphs [29; 12; 34; 77; 86; 49; 26; 1] or graph transformations followed by local models [8; 9]. However, there is a lack of research on how to train scalable models for _property prediction of large graphs_. Training on sampled subgraphs alone is insufficient as they may not contain all the necessary information for accurate predictions. Aggregating information from the entire graph is essential for graph property prediction, but it poses challenges due to memory limits on a training device, as the memory required scales at least linearly with the size of the graph [78]. Our layout collection contains graphs of up to 44,000 nodes, so training a GNN model on an entire graph (or a batch thereof) using a single device may run out of memory.
Diversity and Imbalance of Graphs. We want to learn a model that generalizes well to unseen graphs. However, this is non-trivial because the model must be trained on diverse types of graphs with enough samples for each type. TpuGraphs consists of graphs for all kinds of important ML workloads, including both inference and training, from past to present. While the dataset may be imbalanced -- containing graphs from some types of architectures more than others -- each graph has at least 10,000 samples of data from different configurations on average.
Redundancy. Another unique property of our dataset is that many samples share the same graph, which represents a large amount of redundant data. An efficient training pipeline should leverage this knowledge and reduce the redundant computation when possible. Additionally, there is another aspect of redundancy within each graph. A tensor computation graph, representing an ML workload, often consists of repeated blocks of neural network layers. The repeated blocks appear as repeated subgraphs. One may leverage this knowledge to improve the learning algorithm.
The baselines accompanying this dataset attempt to address some of these challenges, but are not close to fully solving them.
## 3 The TpuGraphs Dataset
The TpuGraphs dataset contains execution time data points, where each data point contains an HLO graph, its configuration, and its execution time on a single core of TPU v3. The HLO graph in each data point is a partially optimized graph before being fed into the corresponding optimization pass. For example, in the _layout_ collection, an HLO graph is the input graph to the layout assignment pass. The layout configuration of a graph is a collection of per-node layout decisions on configurable nodes (i.e., convolution and reshape). For the _tile_ collection, an HLO graph in each data point is a
Figure 3: A node represents a tensor operator, annotated with its output tensor shape \([n_{0},n_{1},...]\), where \(n_{i}\) is the size of dimension \(i\). Layout \(\{d_{0},d_{1},...\}\) represents minor-to-major ordering in memory. Applied configurations are highlighted in red, and other valid configurations are highlighted in blue. A layout configuration specifies the layouts of inputs and outputs of influential operators (_i.e._, convolution and reshape). A copy operator is inserted when there is a layout mismatch.
fused subgraph representing a kernel. The tile configuration of a subgraph is a configuration for the entire subgraph, not specific to any particular node.
### Data Generation
Within our dataset, there are multiple collections of data, differing in terms of (1) the compiler optimization (i.e., layout and tile), (2) the source of graphs, and (3) the search strategy.
Graphs Collection. We collect HLO graphs from two sources. The first source, called _XLA_, is the combination of the XLA regression benchmark -- from which we collect all open-source models -- and the MLPerf benchmark [50, 35]. The _XLA_ graphs span diverse types of popular ML training and inference models, such as vision, NLP, speech, audio, and recommendation. The second source, called _NLP_, contains a variety of BERT models for training and inference, with varying numbers of layers, attention heads, and hidden sizes. For each model, we run the program -- written in TensorFlow, PyTorch, or JAX -- and collect the largest HLO graph compiled by XLA, which represents the model's main computation. Note that a typical way that XLA handles a graph with dynamic shapes is to bucketize the graph into multiple static-shape graphs. During execution, the runtime will pad the input to match the static-shape graph with the closest larger shape. Our dataset includes graphs -- for varying sequence length, batch size, model size, etc -- some of which are used for dynamic shape workloads. The TpuGraphs dataset is similar to the internal datasets used for prior TPU learned cost models [41, 10], but it exclusively contains graphs from open-source programs, while the internal datasets also include production models that we cannot release publicly.
Configurations Generation. Once we have the graphs, we use the XLA autotuner to generate data samples. The set of configurations being generated depends on how the autotuner explores the search space. For the layout collections, we ran the autotuner in two modes. The first mode explores the search space using a genetic algorithm starting from the default configuration, chosen by the compiler's heuristic. Data collected from this mode is labeled _default_. The second mode explores the search space by picking random candidates. Data collected from this mode is labeled _random_. We keep data collected in different modes in separate collections; the default collection tends to contain configurations that are not too different from the default, and have similar execution times, while the random collection includes very different configurations with very different execution times.
For the tile size tuning, the autotuner first invokes the compiler to run the graph-level optimizations and obtain fused subgraphs (kernels). For each subgraph, the autotuner enumerates all possible tile sizes for the kernel in a random order, limited by a timeout. Note that the tile size search space is much smaller than the layout search space, so we can enumerate all possible tile sizes. Therefore, there is one data collection for tile sizes.
Appendix A.2 describes how we measure the execution time of a given graph and configuration.
### Dataset Statistics and Related Datasets
Table 1 summarizes the details of the different data collections, where the collection name follows the pattern _optimization:source:search_. Table 3 in Appendix A.1 compares properties of the TpuGraphs dataset (all collections) against existing graph property prediction datasets.
| **Collection** | **Core (Sub)Graphs** | **Avg. Nodes per Graph (min–max)** | **Avg. Configs per Graph (min–max)** | **Unique Graph+Config Pairs** | **Samples** |
| --- | --- | --- | --- | --- | --- |
| Layout:XLA:Default | 78 | 14,105 (372–43,615) | 10,147 (681–71,574) | 771,496 | 1,272,538 |
| Layout:XLA:Random | 78 | 14,105 (372–43,615) | 11,648 (109–99,783) | 908,561 | 1,115,709 |
| Layout:NLP:Default | 244 | 5,659 (876–21,919) | 56,534 (9,032–90,985) | 13,285,415 | 15,479,038 |
| Layout:NLP:Random | 244 | 5,659 (876–21,919) | 66,089 (8,843–100,001) | 16,125,781 | 16,135,731 |
| Tile:XLA | 6,988 | 40 | 1,842 | 12,870,077 | 12,870,077 |

Table 1: Statistics of TpuGraphs collections. The collection name follows the pattern _optimization:source:search_. The search may explore the same configuration multiple times, so the same pair of graph and configuration may appear multiple times with slightly different execution time from multiple measurements. The total number of samples is thus higher than the number of unique pairs.
ML Program Performance. The TpuGraphs layout collections provide more than 770x larger graphs on average compared to TenSet [80], the only existing large-scale dataset on ML program performance. Our tile size collection is similar to TenSet as the configuration controls the optimization at the kernel (fused subgraph) level. However, it complements TenSet nicely as it provides data points on different hardware. Halide Auto-scheduler [2] releases its evaluation dataset of Halide programs mainly consisting of image processing benchmarks with a few ML benchmarks.
Other Program Performance. Beyond ML programs, the performance prediction dataset with the largest graphs is on database queries [31], whose graphs are still more than a few orders of magnitude smaller than ours. Another popular performance prediction dataset is BHive [15], consisting of x86 basic blocks sourced from multiple open source programs, with runtime measurements on different Intel hardware platforms. However, the basic blocks are quite small, including four instructions on average. CompilerGym [18] releases a collection of LLVM IR code datasets that can be evaluated in their environment. The largest datasets in their collection include AnghaBench [19] and CSmith [75]. AnghaBench provides a large number of relatively small real-world programs. CSmith programs are large (comparable to ours), but they are randomly generated programs. Additionally, CompilerGym's datasets do not come with performance measurements, so one would have to execute the programs and configurations in the CompilerGym environment themselves to obtain program execution time.
Program Analysis. Other closely related datasets are on programming tasks. CodeNet [56] is a large dataset to teach AI to code, in which each code sample is a solution to one of the coding problems. OGBG-CODE2 [33] is for code summarization, containing Abstract Syntax Trees obtained from Python functions. TBCNN [52] releases its dataset on program classification from a pedagogical programming open judge system. CuBERT [40] uses Python files extracted from the ETH Py150 dataset [59] for fine-tuning and uses the github_repos dataset under BigQuery's public-data project for pre-training. CodeBERT [25] releases its multi-programming-lingual dataset used for pre-training. Works such as inst2vec [7] and ProGraML [17] use datasets of code in LLVM compiler intermediate representation to learn generic code representations for various program analyses and optimizations.
Other. Apart from code datasets, there are many other graph datasets. The Open Graph Benchmark [33] suite presents graphs that are used for machine learning tasks such as GNN inference and training. GAP [6] and Graph Based Benchmark Suite (GBBS) [21] provide large-scale curated sets of graphs, primarily for evaluating traditional graph problems. SuiteSparse [43] consists of a wide variety of sparse matrices, which can be viewed as graphs. Most of these datasets are for node-level or edge-level prediction tasks. TpuGraphs is by far one of the largest graph property prediction datasets. TpuGraphs' average graph size is comparable to that of MalNet [27] -- the largest-scale graph property prediction dataset to date -- while offering 25x more combinations of graphs and configurations. Other popular graph property prediction datasets include small molecule [58; 61], bioinformatic [22; 33], and social network datasets [62; 74].
### Dataset Split
We split the data using an 80-10-10 ratio by graphs in each collection. Splitting data by graphs ensures that graphs in the validation and test sets do not appear in the training set to evaluate the generalization of the model on unseen graphs. The validation and test graphs stay the same across different XLA collections; the same applies to NLP collections. We deliberately hold out the target labels of samples in the test set for competition purposes.
## 4 Learning a Performance Prediction Model
The goal of a learned cost model is to rank the performance of different configurations of a given graph. This section explains the baseline models we provide and how we train them, primarily based on the TPU learned cost model papers [41; 10].
### Feature Extraction
TpuGraphs provides data in two formats: raw protobuf format and numpy arrays similar to the OGBG format [33]. The autotuner produces output results in protobuf format. A data pre-processing
script converts data from the protobuf format to the numpy format. The main function of the data pre-processor is feature extraction. Node features describe the node's properties, such as output tensor shape, tensor layout, striding, padding, and operation-specific parameters. Our feature extraction is minimal. To extract a node feature vector, we either copy values from various fields in an HLO instruction (a node in an HLO graph) as they are, or convert categorical values using one-hot encoding. To convert an unbounded list of numbers (_e.g._, tensor shape) to a fixed-size vector, we truncate the list to six elements and include the summation and/or product of all elements in the list (_e.g._, the product of dimension sizes represents the volume of the tensor) because the tensors appearing in our dataset do not contain more than six dimensions. A per-node layout configuration and tile size can be represented as a nested list with some unbounded dimensions. Similarly, we truncate these unbounded dimensions to six elements.
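As an illustration of the list-to-fixed-length-vector step described above, a minimal sketch might look as follows; the function names and the padding value are illustrative, not the exact pre-processing code released with the dataset.

```python
import numpy as np

def encode_unbounded_list(values, max_len=6, pad=0.0):
    """Truncate/pad an unbounded list of numbers (e.g. a tensor shape) to a fixed
    length and append its sum and product (the product of dimension sizes is the
    tensor volume) as summary features."""
    vals = list(values)[:max_len]
    padded = vals + [pad] * (max_len - len(vals))
    return np.array(padded + [float(np.sum(values)), float(np.prod(values))],
                    dtype=np.float32)

def one_hot(category_id, num_categories):
    """One-hot encode a categorical field such as padding or striding mode."""
    v = np.zeros(num_categories, dtype=np.float32)
    v[category_id] = 1.0
    return v

# e.g. a 4-D output tensor shape -> 8-dim vector (6 entries + sum + product)
print(encode_unbounded_list([64, 128, 28, 28]))
```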
We provide code for training a variety of models over the numpy format. Nonetheless, the raw format can allow researchers to experiment with different feature extractions and measure impacts on the quality of a learned model.
### Model Architecture
Figure 4 shows the model architecture we use for our baseline models. Our baseline models are based on a GNN since the input program is represented as a graph. Node features consist of two parts as shown in Figure 4. The first part is an opcode id, _i.e._, type of tensor operation (such as matrix multiplication). Our baseline models map an opcode id to an opcode embedding via an embedding lookup table. The opcode embedding is then concatenated with the rest of the node features as inputs to a GNN. We combine the node embeddings produced by the GNN to create the embedding of the graph using a simple pooling reduction. The resulting graph embedding is then linearly transformed into the final scalar output by a feedforward layer. Prior work [41] has studied alternative models, including LSTM and Transformer, and shown that GNNs offer the best performance. We provide baseline models with GCN [42] and GraphSAGE [30].
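A minimal PyTorch sketch of the architecture in Figure 4 is shown below; the layer sizes, the plain mean-aggregation GraphSAGE layer, and the mean pooling are illustrative simplifications rather than the exact GraphGPS or TF-GNN implementations used for the baselines.

```python
import torch
import torch.nn as nn

class SageLayer(nn.Module):
    """GraphSAGE-style update: combine each node's state with the mean of its neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h, edge_index):
        src, dst = edge_index                                  # (2, num_edges), long tensors
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])   # sum of incoming neighbor states
        deg = torch.zeros(h.size(0), 1, device=h.device).index_add_(
            0, dst, torch.ones(src.size(0), 1, device=h.device))
        return torch.relu(self.lin(torch.cat([h, agg / deg.clamp(min=1.0)], dim=-1)))

class BaselineCostModel(nn.Module):
    def __init__(self, num_opcodes, feat_dim, hidden=64, num_layers=3):
        super().__init__()
        self.op_embed = nn.Embedding(num_opcodes, 16)          # opcode id -> opcode embedding
        self.encode = nn.Linear(16 + feat_dim, hidden)
        self.gnn = nn.ModuleList([SageLayer(hidden) for _ in range(num_layers)])
        self.readout = nn.Linear(hidden, 1)

    def forward(self, opcode_ids, node_feats, edge_index):
        # concatenate the opcode embedding with the remaining node (and config) features
        h = torch.relu(self.encode(
            torch.cat([self.op_embed(opcode_ids), node_feats], dim=-1)))
        for layer in self.gnn:
            h = layer(h, edge_index)
        graph_emb = h.mean(dim=0)                     # simple pooling over nodes
        return self.readout(graph_emb).squeeze(-1)    # final scalar output
```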
### Loss Functions and Evaluation Metrics
The primary use case of the model is to rank configurations within a given graph and select top candidates to evaluate on real hardware. Thus, we can train the model using regression losses (_e.g._, Mean Square Error (MSE)) or ranking losses (_e.g._, pairwise hinge loss and ListMLE [73]). A ranking loss is computed among sample pairs within the same graph in the same batch, and the losses from different graphs in the batch are reduced to get the total loss. We use Ordered Pair Accuracy (OPA) as a validation metric to select the best model checkpoint, and top-K error as an evaluation metric, as they evaluate the quality of ranking. We define the top-K error to reflect how much slower the top-K configurations predicted by the model are compared with the actual fastest configuration, as follows:
\[\frac{\text{The best runtime of the top-k predictions}}{\text{The best runtime of all configurations}}-1=\frac{\min_{i\in K}y_{i}}{\min_{i\in A}y_{i}}-1 \tag{1}\]
where \(K\) is the top-K predictions, \(A\) is all configurations of the given graph from the dataset collection, and \(y\) is the measured execution time.
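The following sketch shows one way to implement a pairwise ranking loss within a graph and the top-K error of Eq. (1); it assumes the model output is interpreted as a predicted cost (lower means faster), which is an illustrative convention rather than the exact one used in the baseline code.

```python
import numpy as np
import torch

def pairwise_hinge_loss(pred, runtime, margin=1.0):
    """Ranking loss over configuration pairs of one graph: if config i is measured
    slower than config j, its predicted cost should exceed j's by at least `margin`."""
    dp = pred.unsqueeze(1) - pred.unsqueeze(0)                    # dp[i, j] = pred_i - pred_j
    slower = (runtime.unsqueeze(1) - runtime.unsqueeze(0)) > 0    # runtime_i > runtime_j
    loss = torch.relu(margin - dp)[slower]
    return loss.mean() if loss.numel() > 0 else pred.sum() * 0.0

def top_k_error(pred, runtime, k):
    """Eq. (1): relative slowdown of the best configuration among the model's
    top-k picks compared with the true fastest configuration (numpy arrays)."""
    top_k = np.argsort(pred)[:k]    # k smallest predicted costs = predicted fastest
    return runtime[top_k].min() / runtime.min() - 1.0
```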
### Implementation
Layout model. Our default baseline model is a 3-layer GraphSAGE. We concatenate node features and per-node configuration features as inputs to the GNN. If a node is non-configurable (having no
Figure 4: Model architecture.
layout configuration), we use a zero vector as configuration features. To address the memory issue when training on the full program graphs in the layout dataset, we apply the Graph Segment Training (GST) method [10]. GST divides large graphs into smaller segments. During training, a random segment is chosen for model updates at each step. This approach allows us to store intermediate activations for only one segment during backpropagation. The embeddings of all segments are merged to generate an embedding for the original large graph, which is used for prediction. As a result, the memory usage during training is bounded for each large graph, irrespective of its size. By default, we use the maximum segment size of 1,000 nodes and the keep probability \(p=0.5\) for the stale embedding dropout. We consider MSE and pairwise hinge loss as a loss function. Model training takes approximately 2-3 days on a single NVIDIA A100 GPU with 80GB HBM2e Memory. The baseline models are implemented using the GraphGPS framework [57] on top of PyTorch 1.10.
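A schematic sketch of the GST forward pass described above is given below; it is deliberately simplified (how stale embeddings are initialized, refreshed, and merged is only indicative), so it should be read as an illustration of the idea rather than the exact GST implementation.

```python
import torch

def gst_graph_embedding(model, segments, stale_embs, keep_prob=0.5):
    """Graph Segment Training, schematically: gradients flow through one randomly
    chosen segment; the other segments contribute stale (detached) embeddings that
    are randomly dropped with probability 1 - keep_prob."""
    train_idx = int(torch.randint(len(segments), (1,)))
    seg_embs = []
    for i, segment in enumerate(segments):
        if i == train_idx:
            emb = model(segment)                    # only this segment is backpropagated
            stale_embs[i] = emb.detach()            # refresh its stored stale embedding
        else:
            emb = stale_embs[i]                     # historical embedding, no gradient
            if float(torch.rand(())) > keep_prob:
                emb = torch.zeros_like(emb)         # stale-embedding dropout
        seg_embs.append(emb)
    return torch.stack(seg_embs).sum(dim=0)         # merge (here: sum) into a full-graph embedding
```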
Tile size model. For the tile collection, we implement three baselines using TensorFlow-2 and TF-GNN: an MLP model and two GNNs (GraphSAGE and GCN with residual connections). The MLP model embeds all opcodes, concatenates with node features, sums across all nodes, then concatenates with kernel configuration features, feeding into a 3-layer MLP. The GNN variants follow Figure 4. We experiment with two options to combine the graph-level features with the node-level information (orange in the figure): either _late-join_ or _early-join_. The first runs the GNN only on node features, reduces the node embeddings, and then concatenates with the graph (configuration) features. As such, multiple configurations over the same graph share the forward and backward pass. The second replicates the graph features onto every node. Here, we group configurations per graph and therefore execute sparse-ops only once per graph (on cube-tensors rather than matrices). The early-join GraphSAGE model closely resembles the original TPU learned cost model [41]. We consider MSE and ListMLE as a loss function. These models take from a few minutes (MLP), to less than an hour (late-join GNN), to a couple of hours (early-join GNN) to train on Intel Xeon CPUs.
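Schematically, the two ways of combining configuration features with the graph can be contrasted as follows; the function signatures are illustrative and not the TF-GNN models used in the experiments.

```python
import torch

def late_join(gnn, head, node_feats, edge_index, config_feats):
    """Late-join: GNN over node features only, reduce to a graph embedding, then
    concatenate the kernel configuration features; the GNN pass can be shared
    across all configurations of the same kernel."""
    g = gnn(node_feats, edge_index).mean(dim=0)
    return head(torch.cat([g, config_feats], dim=-1))

def early_join(gnn, head, node_feats, edge_index, config_feats):
    """Early-join: replicate the configuration features onto every node before the
    GNN, so message passing can condition on the configuration."""
    rep = config_feats.unsqueeze(0).expand(node_feats.size(0), -1)
    g = gnn(torch.cat([node_feats, rep], dim=-1), edge_index).mean(dim=0)
    return head(g)
```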
The details of hyperparameters can be found in Appendix B.
## 5 Evaluation Results
Table 2 reports the average top-K error of the best model across programs on all the dataset collections. The Layout:XLA:Random and Layout:NLP:Random are by far the most difficult collections. If we use the learned cost model to select the top configuration, we will be on average 24-25% slower than the known optimal configuration. Even if we consider the top 10 candidates, we will still be on average 8-10% off. This is likely because the configurations from the random search are very different from each other. Even when the NLP collection contains only graphs with the transformer architecture, the best learned model still has relatively low accuracy on the random search collections.
In contrast, Layout:XLA:Default and Layout:NLP:Default are much easier than the random collections. This is expected because configurations generated from a genetic algorithm starting from the default configuration should have relatively similar performance to the default configuration's. However, the learned model is not always accurate on all graphs. If we look at the model's accuracy for each graph (program), the top-1 error ranges between 0-9%. For more details, please refer to Table 6 in Appendix C.
| **Collection** | **Top-\(K_{1}\) Error % (Val)** | **Top-\(K_{1}\) Error % (Test)** | **Top-\(K_{2}\) Error % (Val)** | **Top-\(K_{2}\) Error % (Test)** | **Top-\(K_{3}\) Error % (Val)** | **Top-\(K_{3}\) Error % (Test)** |
| --- | --- | --- | --- | --- | --- | --- |
| Layout:XLA:Random | 24.3 | 25.3 | 6.4 | 10.4 | 0.4 | 1.2 |
| Layout:XLA:Default | 1.7 | 1.6 | 0.5 | 0.5 | 0.1 | 0.1 |
| Layout:NLP:Random | 23.7 | 24.5 | 7.4 | 8.0 | 1.8 | 2.7 |
| Layout:NLP:Default | 2.3 | 3.0 | 0.2 | 0.2 | 0.1 | 0.1 |
| Tile:XLA | 10.5 | 10.8 | 3.9 | 3.4 | 2.7 | 2.1 |

Table 2: Prediction errors (Eq. 1) of our best baseline model on different dataset collections. The values of (\(K_{1}\), \(K_{2}\), \(K_{3}\)) are (1, 10, 100) for the layout collections, and (1, 5, 10) for the tile collection.
On the Tile:XLA collection, the average top-1 error of the best model is very similar to the original TPU learned cost model paper's [41]. Note that our dataset is not exactly the same as the internal dataset used in the original paper, but they share a large number of overlapping graphs.
### Ablation Study on Layout Collection
Figure 5 compares alternative choices in terms of the model architecture and the training method on the Layout:XLA:Random collection. Our results agree with the results from the prior paper [10] that the quality of the model improves significantly with the Graph Segment Training method (Best) over a typical full graph training (Full Graph), as GST potentially introduces a better hierarchical graph pooling mechanism that leads to better generalization. Note that since an A100 GPU has plenty of memory, training on full graphs does not run out of memory, but one would encounter the problem when running on a smaller device. Similar to the prior paper, we also show that the quality of the model drops when the graph segment size is too small. Here, we compare segment sizes of 100 (Smaller Segment) and 1,000 (Best). Using fewer GNN layers (2 layers), however, does not affect the accuracy significantly compared to the best model (3 layers). The choice of a partition algorithm also does not matter so much, where we compare METIS (Best) and random cutting from topological sort (Topo Partition) because our graphs are sparsely connected. Another crucial factor is the loss function, where using pairwise hinge loss (Best) is significantly better than using Mean Squared Error (MSE), similar to the finding in the original TPU learned cost model paper [41]. Additionally, we compare our baseline models with a random model that selects K candidates at random, and show that all of the learned cost models are better than random.
### Ablation Study on Tile Size Collection
Figure 6 compares alternative choices on the tile size collection. Similar to the prior work [41], our results show that combining configuration features with node features early (_early-join_) is superior to combining configuration features with a reduced graph embedding later (_late-join_). Similar to the layout collection, using a ranking loss (ListMLE) is much more effective than using MSE. Additionally, we compare the choice of a GNN between GraphSAGE and GCN, and find that GraphSAGE is slightly better than GCN on this dataset collection. We also provide an MLP baseline without a GNN, and confirm that a GNN is essential to achieve good accuracy.
Appendix C reports additional top-K errors including per-graph errors.
## 6 Discussion and Future Directions
There are many potential improvements to be made on top of the baseline models we provide, and which have not been done by prior work. First, we observe that a tensor computation graph typically contains repeated subgraphs, representing repeated blocks of neural network layers. One direction is to leverage this repeated structure to devise a more compact representation that is easier to learn. Second, as mentioned earlier, the dataset may contain some types of graphs, _e.g._, ResNet, significantly more than others. This skew may make the learned model perform well on common types of graphs, but poorly on uncommon types. One may investigate how to address this data imbalance problem to improve the quality of the learned model. Finally, while we know that developing a purely analytical
cost model is extremely difficult, training an accurate learned cost model is not easy either, especially when graphs are large. One idea is to combine the best of both worlds, using analytical modeling when easy to do and letting the learned model make corrections to the analytical estimates.
We plan to continue improving our dataset in multiple aspects. First, we would like to include more diverse graphs. Prior approaches generate random programs for training data [2; 14; 5]. We deliberately avoid randomly generated programs in our dataset because we would like a model trained on the dataset to achieve high performance on realistic programs, used in production, instead of achieving moderate performance on both real-world and randomly generated programs. However, we acknowledge that the diversity of graphs in the dataset is extremely important for the generalizability of the model. One way to generate more realistic tensor programs is to leverage Neural Architecture Search [84; 68; 69; 85; 47; 46; 60; 64; 76; 23]. We leave this as future work, potentially the next version of the dataset. Second, we would like to include data measured on other hardware platforms beyond TPUs, such as CPUs and GPUs. Nonetheless, we believe that the general techniques of training an accurate learned performance model (_e.g._, improvements on GNNs, Graph Segment Training method, etc) are applicable to other hardware targets, so the improvements coming out from experimenting with the current version of the dataset should also benefit other hardware platforms as well. Many compiler optimizations are also common across multiple hardware backends. For example, the tile size selection has been shown to be one of the most important optimizations across all widely used hardware (_i.e._, CPUs, GPUs, and TPUs) and even custom accelerators. Layout optimizations are also applicable on CPUs and GPUs, but the layout options on CPUs and GPUs may be limited if the compiler depends on pre-optimized library kernels.
We hope that TpuGraphs will propel advances in compilers. In particular, researchers may be able to extract insights on how to improve code generation for tensor programs. For example, which information in a tensor computation graph is important to make various optimization decisions? How can we build an accurate analytical cost model for an important class of hardware architectures? Can a learned representation for tensor programs guide various tensor compiler optimizations?
## Acknowledgement
We would like to thank Sai Ganesh for a pointer to generate open-source XLA HLO graphs, Mark Daoust for help on the dataset's hosting location, Mike Burrows for feedback on writing, and Amit Sabne and David Majnemer for reviewing the dataset's security information.
|
2307.03006 | Predicting a noisy signal: the costs and benefits of time averaging as a
noise mitigation strategy | One major challenge for living cells is the measurement and prediction of
signals corrupted by noise. In general, cells need to make decisions based on
their compressed representation of noisy, time-varying signals. Strategies for
signal noise mitigation are often tackled using Wiener filtering theory, but
this theory cannot account for systems that have limited resources and hence
must compress the signal. To study how accurately linear systems can predict
noisy, time-varying signals in the presence of a compression constraint, we
extend the information bottleneck method. We show that the optimal integration
kernel reduces to the Wiener filter in the absence of a compression constraint.
This kernel combines a delta function at short times and an exponential
function that decays on a timescale that sets the integration time. Moreover,
there exists an optimal integration time, which arises from a trade-off between
time averaging signal noise and dynamical error. As the level of compression is
increased, time averaging becomes harder, and as a result the optimal
integration time decreases and the delta peak increases. We compare the
behaviour of the optimal system with that of a canonical motif in cell
signalling, the push-pull network, finding that the system reacts to signal
noise and compression in a similar way. | Jenny Poulton, Age Tjalma, Lotte Slim, Pieter Rein ten Wolde | 2023-07-06T14:10:05Z | http://arxiv.org/abs/2307.03006v1 | # Predicting a noisy signal: the costs and benefits of time averaging as a noise mitigation strategy
###### Abstract
One major challenge for living cells is the measurement and prediction of signals corrupted by noise. In general, cells need to make decisions based on their compressed representation of noisy, time-varying signals. Strategies for signal noise mitigation are often tackled using Wiener filtering theory, but this theory cannot account for systems that have limited resources and hence must compress the signal. To study how accurately linear systems can predict noisy, time-varying signals in the presence of a compression constraint, we extend the information bottleneck method. We show that the optimal integration kernel reduces to the Wiener filter in the absence of a compression constraint. This kernel combines a delta function at short times and an exponential function that decays on a timescale that sets the integration time. Moreover, there exists an optimal integration time, which arises from a trade-off between time averaging signal noise and dynamical error. As the level of compression is increased, time averaging becomes harder, and as a result the optimal integration time decreases and the delta peak increases. We compare the behaviour of the optimal system with that of a canonical motif in cell signalling, the push-pull network, finding that the system reacts to signal noise and compression in a similar way.
## 1 Introduction
Autonomous or self-perpetuating systems such as cells typically exist in dynamic environments. A general requirement for self-perpetuating systems to thrive in such environments is the ability to respond to changing conditions. Ideally, a system would make an instantaneous change to respond to an environmental change. In reality, mounting a response takes time. Given this, an optimal response requires systems to predict an environmental change [1, 2]. Intriguingly, experiments have revealed that even single-celled organisms can predict environmental change [3, 4]. For example, cells can use the arrival of one sugar to predict that the next one will arrive [3]. In this work, we consider the optimal prediction of time-varying signals. We consider biological sensing systems, but our ideas can be applied to any system predicting a time-varying signal.
Living cells live in rich sensory environments and can sense and react to many different external signals. These include light, motion and chemical concentrations. In this work, we consider the trajectory of the changing concentration of ligand molecules in the environment as a function of time. These concentrations are measured via receptors, which are typically located on the surface of the cell. The ligand molecules bind to these receptors, which transmit information to a downstream system within the cell. Receptor-ligand binding, like all processes at the cellular scale, is noisy. As a result, the signal that is propagated to the downstream system is corrupted by signal noise, also called input noise. Living cells, like any signal detection system, are thus inevitably affected by signal noise. This work is interested in understanding how systems can mitigate the effect of this signal noise.
How cells can maximize their sensing precision by minimizing the propagation of signal noise has been studied extensively. In their pioneering paper, Berg and Purcell [5] pointed out that cells can reduce the sensing error via the mechanism of time integration. In this mechanism, the cell does not infer the ligand concentration from the current concentration but rather from its average over some given integration time. Following the work of Berg and Purcell, many studies have addressed the question of how accurately living cells can measure ligand concentrations via the mechanism of time integration [6, 7, 8, 9, 10, 11, 12, 13, 14]. Importantly, these studies assume that the signal is constant on the timescale of the response and that the different signal values are averaged uniformly in time. However, when the integration time is comparable to the correlation time of receptor-ligand binding, the optimal weighting becomes non-uniform [9]. Moreover, ligand concentrations often fluctuate on a timescale that is comparable to the response time of the system, as, for example, in chemotaxis [15, 16]. Predicting these signals optimally requires a non-uniform time average [17, 18]. Another sensing strategy, which can reach a higher sensing precision, is that of maximum-likelihood sensing [19, 20] or Bayesian filtering [21].
Since systems cannot generally respond instantaneously to changes in their environment, it becomes beneficial to anticipate the change and mount a response ahead of time. How accurately this can be done is determined
not by how accurately they can predict the current signal but rather the future signal. For a system to predict the future, it must extract characteristics of the past signal that are informative about the future signal. The amount of predictive information stored in the past signal trajectory about the future signal is the mutual information \(I(\vec{\textbf{s}},s(\tau))\) between the past signal trajectory \(\vec{\textbf{s}}\) and the signal value \(s(\tau)\) at a future timepoint \(\tau\). This predictive information puts a fundamental lower bound on the prediction error. However, signal noise means that this bound can, in general, not be reached. Wiener filtering theory [22, 23, 24] makes it possible to derive, for linear systems, the optimal integration function that minimizes the prediction error for time-varying signals in the presence of signal noise, and it has been applied to cellular systems [17, 18].
Wiener filtering theory, however, does not recognize that systems are built with finite resources. In general, and as assumed in Wiener filtering theory, systems do not predict the future signal \(s(\tau)\) from the input signal trajectory \(\vec{\textbf{s}}\) directly, but rather indirectly, from the output of the signalling system, \(x\). This output depends on the past input signal trajectory \(\vec{\textbf{s}}\). Wiener filtering theory assumes that the input trajectory can be reliably mapped onto the output \(x\). In general, however, the output trajectory \(x(t)\) is a noisy and compressed representation of the input trajectory \(s(t)\) because resources such as protein copy numbers and energy are finite. The data processing inequality implies that the mutual information between the compressed output \(x\) and the future of the signal \(s(\tau)\) is less than that between the uncompressed signal and the future: \(I(x;s(\tau))\leq I(\vec{\textbf{s}},s(\tau))\). In this work, we go beyond Wiener filtering theory to study systems which have limited resources.
Here, we study the optimal compression of the input signal into the output for prediction under resource constraints. We define the optimal compression as that which maximises the predictive information in the compressed output \(I_{\text{pred}}=I(x;s(\tau))\) subject to the constraint that the information the output has about the past, \(I_{\text{past}}=I(x;\vec{\textbf{s}})\), is limited. We will confine ourselves to linear systems, and derive the optimal compression via the information bottleneck method [25, 26].
The information bottleneck method has been applied to a wide range of biological systems. The method has led to a greater understanding of optimal gene expression patterns for fly development and has identified the optimal sensors associated with this process [27, 28]. It has been used to analyse retinal ganglion cells [1, 29], finding that the retina provides a nearly optimal representation of the predictive information contained in the visual input [29]. A related work calculates whether position or velocity information is more useful to the retina for predicting a moving image [1]. Yet, none of these studies has directly considered signals that are corrupted by signal noise. In this work, we will extend the Gaussian information bottleneck presented by Tishby et al. [26] to systems with signal noise. Using this approach, we will derive the optimal integration function, which captures the characteristics of the past signal that are most informative about the future signal in the presence of signal noise and a compression constraint.
In section 2, we will outline the discrete information bottleneck for systems with Gaussian signal noise. This method combines the information bottleneck method and the Wiener filter, considering both signal noise and compression. Previous attempts to link the information bottleneck method and the Wiener filter have not included signal noise on which the kernel acts [30], without which the Wiener filter does not straightforwardly apply.
In section 3, we introduce a discrete Markovian signal modelled with an autoregressive model of order 1. A Markovian signal is the simplest signal in which the past is predictive of the future. We will then add correlated Gaussian noise to that signal, also modelled with an autoregressive model of order 1.
In section 4, we address the optimal prediction of this signal in the presence of signal noise and resource constraints. We derive optimal kernels for compressing the past signal and calculate the amount of predictive information these compressed representations contain. We find that the optimal kernel combines a \(\delta\) peak at zero with a decaying exponential, which allows for time averaging over the signal noise. The relative importance of these two contributions, as well as the integration time (the timescale on which the exponential contribution decays), depends on the compression level. When the resources are limited, and the compression level is high, the \(\delta\) peak is relatively large, and the integration time is short because the system cannot time average. In the other limit, the system time averages over an optimal integration time, which arises from the interplay between time averaging and the dynamical error or signal distortion [17, 18, 2]. Additionally, the relative contribution of the \(\delta\) peak reduces. Finally, we examine the effect of changing the variance and the correlation time of the noise. When the noise variance is larger, more priority is given to the exponential part of the kernel, and its range, the integration time, also increases because this allows for more time averaging. When the correlation time of the noise is larger, the exponential part of the kernel widens to enable effective time averaging, while the importance of the \(\delta\) peak increases because time averaging becomes less successful.
In section 5, we will compare our optimal kernels with the kernel of a well-known biological signalling motif, the push-pull network [31]. Push-pull systems are omnipresent in prokaryotic and eukaryotic cells [32]. Examples are phosphorylation cycles, as in MAPK cascades; GTPase cycles, such as the Ras system; and two-component systems, including the chemotaxis system of Escherichia coli. Push-pull networks constitute a simple exponential filter [17, 18, 2], and hence do not contain a contribution with a \(\delta\) peak implying that the push-pull motif is not optimally compressing the signal.
This work develops a very general method which can be used to study optimal compression for prediction in
noisy systems with resource constraints. While we use it to study biological systems, the effect of compression on systems predicting any number of noisy signals, from financial data to robotic sensing data, can be studied using this method.
## 2 Deriving the information bottleneck for a system with signal noise
This work seeks to find the optimal scheme for compressing a signal to predict the future given constrained resources. To start, we must define a general process that captures the essence of the problem, and that can be optimised. We consider signals that obey Gaussian statistics, which are corrupted by noise that also obeys Gaussian statistics. It has been shown that the optimal response systems for these signals are linear[26]. We, therefore, consider systems that respond linearly over the range of input fluctuations:
\[x=\mathbf{\vec{A}}(\mathbf{\vec{s}}+\vec{\eta})+\xi. \tag{1}\]
Here \(\mathbf{\vec{s}}\) is a vector representing a discretised signal trajectory, \(\vec{\eta}\) is a vector representing a noise trajectory. The linear kernel \(\mathbf{\vec{A}}\) is a vector and \(\mathbf{\vec{A}}(\mathbf{\vec{s}}+\vec{\eta})\) is a scalar representing a weighted average over all timepoints of the signal corrupted by signal noise. The compression noise \(\xi\) is a scalar. Thus our compressed output \(x\) is a scalar. This output is correlated with the value of the signal in the future, and we are interested in its correlation with the value at one particular timepoint \(\tau\) into the future, the scalar \(s(\tau)\).
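To fix the dimensions of the objects in equation 1, the following minimal sketch (Python/NumPy; all names are ours and the numbers are arbitrary) draws one realisation of the compression step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                        # number of retained past time points
A = rng.normal(size=N)        # a (here arbitrary) linear kernel over the past
s = rng.normal(size=N)        # discretised past signal trajectory
eta = rng.normal(size=N)      # signal noise at each past time point
xi = rng.normal()             # scalar compression noise
x = A @ (s + eta) + xi        # scalar compressed output, as in equation 1
```

The vectors are drawn independently here only for illustration; in the analysis below they carry the correlated Gaussian statistics introduced in section 3.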
The information bottleneck finds the optimal kernel \(\mathbf{\vec{A}}\) over the signal trajectory to maximise predictive information while also compressing the signal. This optimal compression is found by maximising the information bottleneck Lagrangian with respect to \(\mathbf{\vec{A}}\):
\[\max_{\mathbf{\vec{A}}}\mathcal{L}=\max_{\mathbf{\vec{A}}}\left(I_{\text{pred }}-\gamma I_{\text{past}}\right). \tag{2}\]
Here \(\gamma\) is a Lagrange multiplier that dictates the compression level. When this Lagrangian is maximised, the mutual information between the compressed output and the signal value in the future is maximised subject to the constraint that the mutual information between the compressed output and the signal trajectory in the past is limited. The compression level \(\gamma\) runs from zero to one. Recall that \(x\) is a compression of \(\vec{s}\), so \(I_{\text{pred}}\leq I_{\text{past}}\). Given this, at \(\gamma=1\) optimal \(I_{\text{pred}}=I_{\text{past}}=0\). At lower \(\gamma\), the system is allowed to increase \(I_{\text{past}}\) via \(\mathbf{\vec{A}}\) to make a better prediction of the future. We can rewrite the information bottleneck Lagrangian in terms of entropy using \(I(a;b)=H(a)-H(a|b)\). The entropy for a Gaussian system in one dimension is \(H(\Sigma_{x})=\frac{1}{2}\log|\Sigma_{x}|\) where \(\Sigma_{x}\) is the covariance of \(x\) (where \(x\) is a vector, \(\Sigma_{x}\) is a covariance matrix). The conditional entropy \(H(\Sigma_{y|z})=\frac{1}{2}\log|\Sigma_{y|z}|\) where \(\Sigma_{y|z}\) is the conditional covariance of variable \(y\) given variable \(z\) where one or both of \(y\) and \(z\) can be vectors. Combining these, we rewrite the information bottleneck Lagrangian as
\[\max_{\mathbf{\vec{A}}}\mathcal{L} =H(x)-H(x|s(\tau))-\gamma H(x)+\gamma H(x|\mathbf{\vec{s}}) \tag{3}\] \[\max_{\mathbf{\vec{A}}}\mathcal{L} =(1-\gamma)\frac{1}{2}\log|\Sigma_{x}|-\frac{1}{2}\log|\Sigma_{x|s (\tau)}|+\gamma\frac{1}{2}\log|\Sigma_{x|\vec{s}}|. \tag{4}\]
The information bottleneck absent signal noise eliminates \(x\) from \(\Sigma_{x}\), \(\Sigma_{x|\vec{s}}\) and \(\Sigma_{x|s(\tau)}\), then differentiates with respect to \(\mathbf{\vec{A}}\), resulting in an eigenvalue equation [26].
We follow the same method, but because of the addition of signal noise, the definition of the covariances has changed. Given \(x=\mathbf{\vec{A}}(\mathbf{\vec{s}}+\vec{\eta})+\xi\) and noting that here there are no correlations between the signal \(\mathbf{\vec{s}}\), the signal noise \(\vec{\eta}\), and the compression noise \(\xi\), respectively, we find that:
\[\Sigma_{x} =\langle\delta x\delta x\rangle \tag{5}\] \[=\langle\delta(\mathbf{\vec{A}}(\mathbf{\vec{s}}+\vec{\eta})+\xi)\delta((\mathbf{\vec{s}}^{T}+\vec{\eta}^{T})\mathbf{\vec{A}}^{T}+\xi)\rangle \tag{6}\] \[=\mathbf{\vec{A}}\langle\delta\mathbf{\vec{s}}\delta\mathbf{\vec{s}}^{T}\rangle\mathbf{\vec{A}}^{T}+\mathbf{\vec{A}}\langle\delta\vec{\eta}\delta\vec{\eta}^{T}\rangle\mathbf{\vec{A}}^{T}+\langle\delta\xi\delta\xi\rangle \tag{7}\] \[=\mathbf{\vec{A}}\Sigma_{\vec{s}}\mathbf{\vec{A}}^{T}+\mathbf{\vec{A}}\Sigma_{\vec{\eta}}\mathbf{\vec{A}}^{T}+\Sigma_{\xi} \tag{8}\]
If \(\mathbf{\vec{s}}\) is known, the remaining uncertainty in \(x\) is \(\mathbf{\vec{A}}\vec{\eta}+\xi\). Hence, \(\Sigma_{x|\vec{s}}=\mathbf{\vec{A}}\Sigma_{\vec{\eta}}\mathbf{\vec{A}}^{T}+\Sigma_{\xi}\). Finally, to find \(\Sigma_{x|s(\tau)}\) we use the Schur complement formula: \(\Sigma_{x|s(\tau)}=\Sigma_{x}-\Sigma_{xs(\tau)}\Sigma_{s(\tau)}^{-1}\Sigma_{s(\tau)x}\). Now \(\Sigma_{xs(\tau)}=\langle\delta x\delta s(\tau)\rangle=\mathbf{\vec{A}}\langle\delta\mathbf{\vec{s}}\delta s(\tau)\rangle=\mathbf{\vec{A}}\Sigma_{\vec{s}s(\tau)}\) and similarly \(\Sigma_{s(\tau)x}=\Sigma_{s(\tau)\vec{s}}\mathbf{\vec{A}}^{T}\). Thus
\[\Sigma_{x|s(\tau)} =\mathbf{\vec{A}}\Sigma_{\vec{s}}\mathbf{\vec{A}}^{T}+\mathbf{\vec{A}}\Sigma_{\vec{\eta}}\mathbf{\vec{A}}^{T}+\Sigma_{\xi}-\mathbf{\vec{A}}\Sigma_{\vec{s}s(\tau)}\Sigma_{s(\tau)}^{-1}\Sigma_{s(\tau)\vec{s}}\mathbf{\vec{A}}^{T} \tag{9}\] \[=\mathbf{\vec{A}}\left(\Sigma_{\vec{s}}-\Sigma_{\vec{s}s(\tau)}\Sigma_{s(\tau)}^{-1}\Sigma_{s(\tau)\vec{s}}\right)\mathbf{\vec{A}}^{T}+\mathbf{\vec{A}}\Sigma_{\vec{\eta}}\mathbf{\vec{A}}^{T}+\Sigma_{\xi} \tag{10}\] \[=\mathbf{\vec{A}}\Sigma_{\vec{s}|s(\tau)}\mathbf{\vec{A}}^{T}+\mathbf{\vec{A}}\Sigma_{\vec{\eta}}\mathbf{\vec{A}}^{T}+\Sigma_{\xi}. \tag{11}\]
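These covariances translate directly into the two mutual informations that enter the Lagrangian. The sketch below is a minimal numerical illustration of that bookkeeping, not the code used for the figures; the function name `information_quantities` and the argument layout are our own choices.

```python
import numpy as np

def information_quantities(A, Sigma_s, Sigma_eta, cross_s_stau, var_stau, var_xi):
    """Return (I_pred, I_past), in nats, for the linear readout x = A.(s + eta) + xi.

    A            : (N,) kernel weights over the past trajectory
    Sigma_s      : (N, N) covariance matrix of the past signal
    Sigma_eta    : (N, N) covariance matrix of the signal noise
    cross_s_stau : (N,) covariance between the past signal and the future value s(tau)
    var_stau     : variance of s(tau)
    var_xi       : variance of the compression noise xi
    """
    # Output variance, equation 8
    var_x = A @ Sigma_s @ A + A @ Sigma_eta @ A + var_xi
    # Conditioned on the full past trajectory, only the noise terms remain
    var_x_given_past = A @ Sigma_eta @ A + var_xi
    # Conditioned on the future value, via the Schur complement (equation 11)
    Sigma_s_given_future = Sigma_s - np.outer(cross_s_stau, cross_s_stau) / var_stau
    var_x_given_future = A @ Sigma_s_given_future @ A + A @ Sigma_eta @ A + var_xi
    # Gaussian mutual informations: I = (1/2) log(variance / conditional variance)
    I_past = 0.5 * np.log(var_x / var_x_given_past)
    I_pred = 0.5 * np.log(var_x / var_x_given_future)
    return I_pred, I_past
```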
The information bottleneck Lagrangian (equation 4) can be rewritten as
\[\max_{\vec{\mathbf{A}}}\mathcal{L} =(1-\gamma)\frac{1}{2}\log\left(\vec{\mathbf{A}}\Sigma_{\vec{s}}\vec{\mathbf{A}}^{T}+\vec{\mathbf{A}}\Sigma_{\vec{\eta}}\vec{\mathbf{A}}^{T}+\Sigma_{\xi}\right)+\gamma\frac{1}{2}\log\left(\vec{\mathbf{A}}\Sigma_{\vec{\eta}}\vec{\mathbf{A}}^{T}+\Sigma_{\xi}\right) \tag{12}\] \[-\frac{1}{2}\log\left(\vec{\mathbf{A}}\Sigma_{\vec{s}|s(\tau)}\vec{\mathbf{A}}^{T}+\vec{\mathbf{A}}\Sigma_{\vec{\eta}}\vec{\mathbf{A}}^{T}+\Sigma_{\xi}\right).\]
Differentiating and setting equal to zero gives
\[\frac{d\mathcal{L}}{d\vec{\mathbf{A}}} =(1-\gamma)\frac{\vec{\mathbf{A}}(\Sigma_{\vec{s}}+\Sigma_{\vec{\eta}})}{\vec{\mathbf{A}}(\Sigma_{\vec{s}}+\Sigma_{\vec{\eta}})\vec{\mathbf{A}}^{T}+\Sigma_{\xi}}+\gamma\frac{\vec{\mathbf{A}}\Sigma_{\vec{\eta}}}{\vec{\mathbf{A}}\Sigma_{\vec{\eta}}\vec{\mathbf{A}}^{T}+\Sigma_{\xi}} \tag{13}\] \[-\frac{\vec{\mathbf{A}}(\Sigma_{\vec{s}|s(\tau)}+\Sigma_{\vec{\eta}})}{\vec{\mathbf{A}}\Sigma_{\vec{s}|s(\tau)}\vec{\mathbf{A}}^{T}+\vec{\mathbf{A}}\Sigma_{\vec{\eta}}\vec{\mathbf{A}}^{T}+\Sigma_{\xi}}=0.\]
Unlike the system without signal noise [26], this equation no longer reduces to an eigenvalue equation and must be solved numerically.
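In practice we therefore maximise the Lagrangian of equation 2 numerically. The sketch below shows one straightforward way to do this with a generic optimiser (it is not necessarily the procedure used for the figures); it reuses `information_quantities` from the sketch above.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_kernel(Sigma_s, Sigma_eta, cross_s_stau, var_stau, var_xi, gamma, A0=None):
    """Numerically maximise L = I_pred - gamma * I_past over the kernel A."""
    N = Sigma_s.shape[0]
    if A0 is None:
        A0 = np.ones(N)          # arbitrary starting kernel

    def neg_lagrangian(A):
        I_pred, I_past = information_quantities(
            A, Sigma_s, Sigma_eta, cross_s_stau, var_stau, var_xi)
        return -(I_pred - gamma * I_past)

    res = minimize(neg_lagrangian, A0, method="L-BFGS-B",
                   options={"maxiter": 5000})
    return res.x
```

Note that for small \(\gamma\) the amplitude of the kernel is only weakly constrained (once the amplitude is large enough, the compression noise is negligible; see fig. 4a), so the returned amplitude should be read as a lower bound. Setting \(\gamma=0\) in this routine recovers a discrete Wiener filter for this problem, as discussed next.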
Can this method be compared to the Wiener filter? The Wiener filter minimises the mean squared error between the filter output (here \(x\)) and the signal at a present or future time (here, the signal at a future point \(s(\tau)\)). For a Gaussian system, minimising the mean squared error, \(\langle\left(x-s(\tau)\right)^{2}\rangle\), is equivalent to maximising the mutual information, \(I_{\text{pred}}\). Maximising this mutual information is equivalent to maximising the information bottleneck Lagrangian, \(\mathcal{L}=I_{\text{pred}}-\gamma I_{\text{past}}\), for \(\gamma=0\). Thus, as \(\gamma\to 0\), the optimal kernels found by the information bottleneck method converge to the kernel that optimally filters out signal noise, given by the Wiener filter. This convergence to the Wiener filter holds generally for kernels found using this method. The explicit link has only been made possible by including signal noise on which the kernel acts, a vital component of the Wiener filter problem. An earlier attempt to link the information bottleneck method and the Wiener filter exists [30], but that work does not include a noise source acted on by the kernel; since the Wiener filter traditionally mitigates noise via the kernel, the comparison between the information bottleneck method and the Wiener filter remained somewhat opaque.
## 3 A discrete signal modelled by an autoregressive model
Since our method of calculating the information bottleneck is discrete, we need a discrete signal. We consider a discrete Markovian input signal given by an autoregressive model. We choose a Markovian process as the simplest example of a signal in which the future depends on the past and can therefore be predicted. The autoregressive model is a time-series model, where each value is regressed upon previous values in sequence [33]. This work will focus on an order 1 autoregressive model with a zero mean, which models a Markovian process:
\[S_{t}=\phi_{1}S_{t-1}+\sigma_{AR}\eta. \tag{14}\]
Here \(\phi_{1}\) is the weighting of how element 1 in the past affects the current value, \(\eta\) is a white noise process of variance one and mean zero and \(\sigma_{AR}^{2}\) sets the full variance of the white noise term. The covariance function of an order 1 autoregressive model with zero mean is
\[\langle\delta_{S}(0)\delta_{S}(t)\rangle=\frac{\sigma_{AR}^{2}\phi_{1}^{|t|/dt}}{1-\phi_{1}^{2}}. \tag{15}\]
Here the current time \(t\) must be an integer multiple of the timestep \(dt\). At non-integer multiples of \(dt\), the function is not defined. Where the function is defined, we want this discrete covariance function to take the same form as that for continuous Markovian signal with covariance function \(\langle\delta_{S}(t_{1})\delta_{S}(t_{2})\rangle=\sigma_{s}^{2}e^{-\frac{1}{ \tau_{s}}\left|t_{1}-t_{2}\right|}\). Here \(\sigma_{s}^{2}\) is the variance of the signal, and \(\tau_{s}\) is the correlation time of the signal. To give the autoregressive function the same correlation function, we take \(\phi_{1}=e^{-\frac{dt}{\tau_{s}}}\) and \(\sigma_{AR}^{2}=\sigma_{s}^{2}\left(1-e^{-\frac{2dt}{\tau_{s}}}\right)\).
Our signal is corrupted by correlated signaling noise with covariance \(\langle\delta_{\eta}(t_{1})\delta_{\eta}(t_{2})\rangle=\sigma_{\eta}^{2}e^{-\frac{1}{\tau_{\eta}}\left|t_{1}-t_{2}\right|}\), where \(\sigma_{\eta}^{2}\) is the variance of the noise and \(\tau_{\eta}\) is the correlation time of the noise. This is once again modelled by an autoregressive function with \(\phi_{1}=e^{-\frac{dt}{\tau_{\eta}}}\) and \(\sigma_{AR}^{2}=\sigma_{\eta}^{2}\left(1-e^{-\frac{2dt}{\tau_{\eta}}}\right)\).
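In the discrete calculation the signal and noise enter only through their covariance matrices, which follow directly from the exponential correlation functions above. A minimal construction (our own naming) for a past window of duration \(T\) sampled at intervals \(dt\), with prediction lag \(\tau\):

```python
import numpy as np

def build_covariances(sigma_s2, tau_s, sigma_eta2, tau_eta, dt, T, tau):
    """Covariance matrices for a discretised past window and a future point s(tau).

    Index 0 is the most recent time point; index k lies k*dt in the past.
    """
    N = int(round(T / dt)) + 1
    t = np.arange(N) * dt                             # time into the past
    lags = np.abs(t[:, None] - t[None, :])            # |t_i - t_j|
    Sigma_s = sigma_s2 * np.exp(-lags / tau_s)        # signal covariance
    Sigma_eta = sigma_eta2 * np.exp(-lags / tau_eta)  # signal-noise covariance
    cross_s_stau = sigma_s2 * np.exp(-(tau + t) / tau_s)   # past signal vs s(tau)
    var_stau = sigma_s2                               # stationary variance of s(tau)
    return Sigma_s, Sigma_eta, cross_s_stau, var_stau
```

Equivalently, trajectories with these covariances can be generated with the autoregressive recursion of equation 14, since \(\phi_{1}\) and \(\sigma_{AR}\) were chosen to reproduce the same exponential correlation functions.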
## 4 Optimal kernels and the information bottleneck limits
How does the optimal kernel compress the trajectory of the signal and the noise into a prediction of the future? How much information about the past and the future are retained in the compressed representation \(x\)? In this
Figure 1: In both graphs, \(\sigma_{s}^{2}=1\) and \(\tau_{s}=2\)s. **As the noise variance \(\sigma_{\eta}^{2}\) (a) or noise correlation time \(\tau_{\eta}\) (b) increases, the maximum amount of \(I_{\rm pred}\) and \(I_{\rm past}\) available to the system decreases.** We calculate optimal kernels at fixed \(\tau_{\eta}\) and \(\sigma_{\eta}^{2}\) for varying \(\gamma\). These kernels result in a given \(I_{\rm pred}\) and \(I_{\rm past}\), which we plot parametrically. \(\gamma\) is zero for the top right part of each curve and increases towards one at the bottom left part of the curve. The curve for \(\sigma_{\eta}^{2}=0\) is identical to that found using the information bottleneck without signal noise developed in ref. [26]. This limit is the maximum amount of predictive information per bit of past information that can be extracted from a given signal trajectory in the absence of signal noise. When signal noise is present, the system can extract less predictive information from the trajectory, and this effect increases as the variance of the noise increases (a). We also note that the maximum amount of past information about the trajectory decreases as \(\sigma_{\eta}^{2}\) increases. The amount of information about the past and the present also decreases as the correlation time of the noise increases (b). This is because a kernel requires a longer integration time to mitigate correlated noise, which increases dynamical error. In a) \(\tau_{\eta}=0.02\)s, in b) \(\sigma_{\eta}^{2}=2\), in both, \(\sigma_{s}^{2}=1\), \(\tau_{s}=2\), \(\sigma_{\xi}^{2}=1\), \(\tau=1\)s. \(dt=0.01\)s and \(T=0.5\)s. Dots mark optimal kernels from fig. 3.
Figure 2: We plot optimal kernels for various \(\sigma_{\eta}^{2}\), \(\tau_{\eta}\) and \(\gamma\). \(\gamma\) defines the level of compression. Low \(\gamma\) corresponds to the high \(I_{\rm pred}\) and \(I_{\rm past}\) region and high \(\gamma\) corresponds to the low \(I_{\rm pred}\) and \(I_{\rm past}\) region. The kernels are discrete, but we plot a continuous line joining the discrete points. a), b) and c) **As compression decreases, the variance of the signal noise increases or the correlation time of the noise decreases, the width and relative height of the exponential part of the kernel increases.** The optimal kernels consist of a \(\delta\) and an exponentially decaying curve. When \(\gamma\) is high (low \(I_{\rm pred}\) and \(I_{\rm past}\) regime) or \(\sigma_{\eta}^{2}=0\), the exponential part disappears and the optimal kernel is a \(\delta\) function. Equally, when \(\tau_{\eta}\) is sufficiently low, the \(\delta\) function part diminishes, and the kernel becomes a decaying exponential. a) \(\sigma_{\eta}^{2}=2\) and \(\gamma=0.15\), in b) \(\tau_{\eta}=0.02\) and \(\gamma=0.15\) and in c) \(\tau_{\eta}=0.02\)s and \(\sigma_{\eta}^{2}=2\). In all \(\sigma_{s}^{2}=1\), \(\tau_{s}=2\)s, \(\sigma_{\xi}^{2}=1\), \(\tau=1\)s, \(dt=0.01\)s and \(T=1\)s. For all kernels except \(\sigma_{\eta}^{2}=0\), \(A(0)>10\) and compression noise is negligible. For \(\sigma_{\eta}^{2}=0\), \(A(0)\approx 0.23\).
Figure 3: **Dependence of predictive and past information on the integration kernel.** The form of the optimal kernel is: \(A_{\rm opt}(-t)=a_{\rm opt}\left(b_{\rm opt}\delta(t)+\frac{(1-b_{\rm opt})}{ \tau_{A}^{\rm opt}}e^{-t/\tau_{A}^{\rm opt}}\right)\). Here, we calculate \(I_{\rm pred}\) and \(I_{\rm past}\) for non-optimal kernels \(A(-t)=a\left(b\delta(t)+\frac{(1-b)}{\tau_{A}}e^{-t/\tau_{A}}\right)\) by varying one parameter at the time, the amplitude \(a\), the integration time of the kernel \(\tau_{A}\), or the relative height of the exponential part of the kernel \(b\), fixing the other parameters at their optimal values \(a_{\rm opt}=41.23\), \(b_{\rm opt}=0.984\) and \(\tau_{A}^{\rm opt}=0.049\)s. The optimal integration kernel corresponds to the black dots on the blue lines and to the coloured dots in figs. 1a and b. In all graphs \(\gamma=0.2\). a) **As the amplitude \(a\) increases, both \(I_{\rm pred}\) and \(I_{\rm past}\) increase sharply to a plateau.** Recall the form of the compression process, \(x=\vec{\bf A}(\vec{\bf s}+\vec{\eta})+\xi\). When the amplitude of the kernel \(a\) is small, the compression process is dominated by compression noise \(\xi\). As the amplitude increases, the effect of \(\xi\) is reduced and \(I_{\rm pred}\) and \(I_{\rm past}\) increase. After the effect of compression noise \(\xi\) is diminished, the signal noise dominates. Because increasing \(a\) amplifies both the signal and the signal noise, it cannot diminish the effect of signal noise and increasing \(a\) further cannot increase \(I_{\rm pred}\) and \(I_{\rm past}\). In this regime, the optimal amplitude is sufficiently high that compression noise \(\xi\) is negligible. The optimisation of the objective function \(\mathcal{L}\) therefore returns an arbitrary value greater than the point where \(I_{\rm pred}\) and \(I_{\rm past}\) plateau, \(a\gtrsim 1\). b) **As the integration time \(\tau_{A}\) increases, \(I_{\rm past}\) and \(I_{\rm pred}\) increase to a maximum.** To mitigate signal noise, a system must time average, taking a weighted average over previous points on the trajectory. The integration time of the kernel \(\tau_{A}\) defines how much a system time averages. When \(\tau_{A}\) is zero, the system takes an instantaneous reading of the current value of the signal corrupted by signal noise. Here both \(I_{\rm pred}\) and \(I_{\rm past}\) are low. As \(\tau_{A}\) increases, the effect of signal noise is reduced and \(I_{\rm pred}\) and \(I_{\rm past}\) both increase. As \(\tau_{A}\) increases further, the dynamical error increases and reduces the ability of the system to predict. \(I_{\rm pred}\) and \(I_{\rm past}\) therefore peak. c) **Both \(I_{\rm pred}\) and \(I_{\rm past}\) decrease with \(b\)** Both \(I_{\rm pred}\) and \(I_{\rm past}\) decrease monotonically with the relative weighting of the \(\delta\) function \(b\), initially very slowly and then sharply as \(b\to 1\). Decreasing \(b\) decreases \(I_{\rm past}\) marginally slower than \(I_{\rm pred}\), so the system chooses a high \(b_{\rm opt}\), prioritising the \(\delta\) peak over the exponential part of the kernel. In all panels, \(\sigma_{\eta}^{2}=2\), \(\sigma_{s}^{2}=1\), \(\sigma_{\xi}^{2}=1\), \(\tau_{s}=2\)s, \(\tau_{\eta}=0.02\)s, \(\tau=1\)s, \(dt=0.02\)s, \(T=2\)s.
Figure 4: **The parameters of the optimal integration kernel as the system moves along the information bound.** The optimal kernel is \(A_{\text{opt}}(-t)=a_{\text{opt}}\left(b_{\text{opt}}\delta(t)+\frac{(1-b_{\text{opt}})}{\tau_{A}^{\text{opt}}}e^{-t/\tau_{A}^{\text{opt}}}\right)\). In these graphs, we plot the optimal amplitude \(a\), integration time \(\tau_{A}\) and relative height of the exponential part of the kernel \(b\) against \(\gamma\). \(\gamma\) defines the level of compression. Low \(\gamma\) corresponds to the high \(I_{\text{pred}}\) and \(I_{\text{past}}\) regime and high \(\gamma\) corresponds to the low \(I_{\text{pred}}\) and \(I_{\text{past}}\) region. a) **Amplitude quickly increases as \(\gamma\) decreases up to a plateau at \(\gamma\approx 0.32\)**. At high compression \(\gamma\), the optimal amplitude is small, and \(I_{\text{past}}\) and \(I_{\text{pred}}\) are low because the signal is dominated by the compression noise \(\xi\). As the compression level \(\gamma\) decreases, the system increases \(I_{\text{past}}\) and \(I_{\text{pred}}\) by increasing the amplitude. However, beyond compression level \(\gamma\approx 0.32\), reducing the amplitude has little effect on \(I_{\text{pred}}\) and \(I_{\text{past}}\) (see fig. 3a), and the optimal amplitude increases so slowly that the numerical method can no longer find the optimum, instead giving a value which is bounded from below. Concomitantly, the amplitude is only well defined for compression level \(\gamma\gtrsim 0.32\). b) **The integration time of the kernel increases as the compression level \(\gamma\) decreases.** At high compression level \(\gamma\), \(\tau_{A}^{\text{opt}}\) is low, and the system does not time integrate, leading to low \(I_{\text{past}}\) and \(I_{\text{pred}}\). As the compression level \(\gamma\) decreases, \(\tau_{A}^{\text{opt}}\) increases, the system time averages more with greater resources. c) **The relative height of the \(\delta\) peak increases as \(\gamma\) decreases.** Inspecting fig. 3c, we observe that as \(b\) increases, \(I_{\text{past}}\) decreases marginally faster than \(I_{\text{pred}}\) at all \(\gamma\). This means that as \(\gamma\) increases and the compressive effect of \(I_{\text{past}}\) increases, the optimal \(b\) decreases towards zero. In all three graphs, the values of \(\tau_{A}\), \(a\) and \(b\) at zero compression \(\gamma=0\) correspond to their values in the Wiener filter. The amplitude of the Wiener kernel is therefore bounded from below rather than exactly defined. At sufficiently high compression \(\gamma\gtrsim 0.26\), the value of \(b\) becomes sufficiently close to one that the kernel becomes a \(\delta\) function and \(\tau_{A}\) becomes unstable. Similarly in the high \(\gamma\) region, the value of \(\tau_{A}\) becomes sufficiently small that \(e^{-dt/\tau_{A}}\) becomes smaller than machine precision. In this case, the kernel is identical to a \(\delta\) function, and since amplitude is unbounded from above, \(b\) becomes unstable. There is, therefore, a region \(0.26\leq\gamma\leq 0.32\) where all three quantities are poorly defined. In a), b) and c) \(\sigma_{s}^{2}=1\), \(\sigma_{\xi}^{2}=1\), \(\tau_{s}=2\)s, \(\tau_{\eta}=0.02\)s, \(\tau=1\)s, \(dt=0.02\)s, \(T=2\)s.
Figure 5: In these graphs, we compare the optimal kernel to an identical kernel omitting the \(\delta\) peak but leaving the rest of the kernel unchanged. a) **Compared to the equivalent kernel with no \(\delta\) peak, the optimal kernel acquires less information about the past.** Recall that as the relative height of the \(\delta\) peak, \(b\), increases, the information collected about the past increases monotonically (fig. 3d). Therefore, removing the \(\delta\) function always increases the information acquired about the past. b) **Compared to the equivalent kernel with no \(\delta\) peak, the optimal kernel acquires less information about the future when the system is compressed (\(\gamma\) is large).** Only in the near uncompressed limit does adding a \(\delta\) peak increase \(I_{\text{pred}}\). It is in this limit that the kernel is time averaging the most, so the \(\delta\) function is required to reduce dynamical error. As compression increases, the kernel time averages less, so the dynamical error goes down. This means adding the \(\delta\) no longer increases \(I_{\text{pred}}\). c) **The ratio \(\frac{I_{\text{pred}}}{I_{\text{past}}}\) is always higher for the system with a \(\delta\) peak**. Regardless of whether \(I_{\text{pred}}\) increases or decreases with the addition of a \(\delta\) peak, the ratio \(\frac{I_{\text{pred}}}{I_{\text{past}}}\) always increases. Thus the information bottleneck always selects a kernel with a \(\delta\) peak.
section, we examine the forms of optimal kernels \(\mathbf{\tilde{A}}\) for discrete input signals with Markovian statistics. Once we have found the optimal kernels, we calculate the corresponding predictive information \(I_{\mathrm{pred}}\) and past information \(I_{\mathrm{past}}\). Because these kernels are optimal, they will maximise the ratio of the predictive to the past information, \(I_{\mathrm{pred}}/I_{\mathrm{past}}\), for a given system. For illustrative purposes, we will also calculate the predictive information \(I_{\mathrm{pred}}\) and past information \(I_{\mathrm{past}}\) for an arbitrary kernel \(\mathbf{\tilde{A}}\), but \(I_{\mathrm{pred}}/I_{\mathrm{past}}\) will be lower for such non-optimal kernels.
For a given signal and signal noise, there is an absolute limit on the amount of predictive information that can be extracted from a given amount of past information. Figure 1a is a parametric plot of \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\) for varying values of our compression variable \(\gamma\) (see Eq. 2). Here, each curve is the fundamental bound on \(I_{\mathrm{pred}}\) for a given \(I_{\mathrm{past}}\), and for a given set of signals statistics: \(\sigma_{s}^{2}\), \(\tau_{s}\), \(\sigma_{\eta}^{2}\) and \(\tau_{\eta}\). At the top right of this information bound, the compression level \(\gamma\) is zero, and the system has maximum \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\). Moving down the information bound, the system is compressed, and the system has access to reduced \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\). Optimal kernels will result in values of \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\) on the information bound, while arbitrary non-optimal kernels acting on signals with the same statistics will result in values of \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\) below these bounds. The curves for \(\sigma_{\eta}^{2}=0\) have been calculated before [26], but the other limits are new. We see that increasing the variance of the noise \(\sigma_{\eta}^{2}\) reduces the amount of predictive information \(I_{\mathrm{pred}}\) and past information \(I_{\mathrm{past}}\) a system can extract at a given \(\gamma\) (fig. 1a). At high \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\), the predictive information extracted from a given amount of past information is indeed significantly lower when \(\sigma_{\eta}^{2}\) is higher. Moreover, the maximum \(I_{\mathrm{past}}\) also decreases as \(\sigma_{\eta}^{2}\) becomes larger. Perhaps surprisingly, for lower \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\), the predictive information that can be extracted for a given amount of past information is nearly independent of \(\sigma_{\eta}^{2}\). Similarly, for \(\sigma_{\eta}^{2}>0\), increasing the correlation time of the noise also reduces the amount of predictive information \(I_{\mathrm{pred}}\) and past information \(I_{\mathrm{past}}\) the system can extract from the signal trajectory. The ratio \(I_{\mathrm{pred}}/I_{\mathrm{past}}\) is once again reduced with increased correlation time for high \(I_{\mathrm{past}}\) while the ratio is constant at low \(I_{\mathrm{past}}\) (fig. 1b).
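The information bounds of fig. 1 can be traced numerically by sweeping the compression level \(\gamma\) and recording \((I_{\rm past},I_{\rm pred})\) for each optimal kernel. A sketch of that loop, reusing the helper functions introduced in sections 2 and 3 (parameter values as quoted in the caption of fig. 1a; the coarse \(\gamma\) grid is purely illustrative):

```python
import numpy as np

# Signal and noise statistics from the caption of fig. 1a
Sigma_s, Sigma_eta, cross, var_stau = build_covariances(
    sigma_s2=1.0, tau_s=2.0, sigma_eta2=2.0, tau_eta=0.02,
    dt=0.01, T=0.5, tau=1.0)

bound = []
for gamma in np.linspace(0.0, 0.9, 10):
    A = optimal_kernel(Sigma_s, Sigma_eta, cross, var_stau, var_xi=1.0, gamma=gamma)
    I_pred, I_past = information_quantities(A, Sigma_s, Sigma_eta, cross, var_stau, 1.0)
    bound.append((I_past, I_pred))    # one point on the information bound
```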
Examining the optimal kernel provides information about which characteristics of the signal are most important for predicting the future of the signal. As shown in fig. 2a, b and c, the optimal kernels take the form of a \(\delta\) function at \(t=0\) with a decaying exponential for \(t<0\):
\[A_{\mathrm{opt}}(-t)=a_{\mathrm{opt}}\left(b_{\mathrm{opt}}\delta(t)+\frac{(1 -b_{\mathrm{opt}})}{\tau_{A}^{\mathrm{opt}}}e^{-t/\tau_{A}^{\mathrm{opt}}} \right). \tag{16}\]
The \(\delta\) peak prioritises the signal's current value, but the exponential function averages over the trajectory, with lower weight given to time points further back into the past. Here \(a_{\mathrm{opt}}\) is the amplitude of the entire kernel, \(b_{\mathrm{opt}}\) is the weighting of the \(\delta\) peak relative to the exponential part of the kernel, and \(\tau_{A}^{\mathrm{opt}}\) is the decay rate of the kernel. We normalise the exponential part of the kernel with the decay rate. We note that while we write down a continuous form of the kernel, the kernel itself is discrete and only defined at integer multiples of the timestep \(dt\).
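For the analysis that follows it is useful to evaluate this parametric kernel on the same discrete grid. One possible discretisation (ours; the convention used for the figures may differ) lets the \(\delta\) function contribute its full weight to the most recent time point, while the exponential part is weighted by the timestep:

```python
import numpy as np

def parametric_kernel(a, b, tau_A, dt, N):
    """Discrete weights A_k (k*dt into the past) for a*(b*delta(t) + (1-b)/tau_A * exp(-t/tau_A))."""
    t = np.arange(N) * dt
    A = a * (1.0 - b) / tau_A * np.exp(-t / tau_A) * dt   # exponential part, integrated over dt
    A[0] += a * b                                         # delta peak: weight a*b at t = 0
    return A
```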
The shape of the kernel changes along the information bound. Consider the system where \(\sigma_{\eta}^{2}=2\) and \(\tau_{\eta}=0.02\)s (fig. 1a, middle blue line, fig. 1b, middle red line). At the top right of the information bound, the compression level \(\gamma=0\), the system is uncompressed and can access maximum \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\). In this limit, the kernel is a slowly decaying exponential supplemented by a \(\delta\) function (fig.2c, dashed line). Initially, as we move down the information bound, increasing \(\gamma\), the width and relative height of the exponential function decrease until the function becomes a \(\delta\) function (fig.2c, light yellow line). Only then does the amplitude of the whole kernel decrease to zero.
To understand the optimal shape of the kernel, we need to understand the origins of the fluctuations in the output because these fluctuations limit the accuracy of prediction. Two of these we have already discussed: signal noise and compression noise, modelled by \(\eta\) and \(\xi\) in Eq. 1, respectively. Signal noise causes errors in the signal at the point of detection. Compression noise corrupts the output of the compression process. The final source of fluctuations in the output is known as the "dynamical error" [2]. It arises from time integration. Due to time integration, the output depends on input values further back into the past, which are less correlated with the current input [2].
To understand how a system can mitigate these sources of error, we note that the compressed output is given by \(x=\mathbf{\tilde{A}}(\mathbf{\tilde{s}}+\vec{\eta})+\xi\). Increasing the amplitude of the kernel can mitigate the effect of the compression noise \(\xi\) by amplifying the signal over the compression noise. Changing the amplitude cannot reduce the effect of the signal noise \(\vec{\eta}\) because the signal and noise will be amplified together. Signal noise must be mitigated by time averaging. By using more independent time points further into the past, the system can better estimate the current value of the signal. The integration time \(\tau_{A}\) sets the width of the kernel and the window over which time averaging is performed. However, using time points further back into the past introduces dynamical error, which is mitigated by prioritising more recent values over values further into the past. Mitigating signal noise and dynamical error thus put opposing requirements on the integration time, leading to an optimum in \(\tau_{A}\)[18, 2].
We next ask how varying the key parameters of the kernel: \(a\), \(b\) and \(\tau_{A}\), affects these error sources. Answering this question will clarify how these parameters affect the past and predictive information \(I_{\mathrm{pred}}\) and \(I_{\mathrm{past}}\), which
in turn helps us understand how the optimal kernel's shape varies along the information bound shown in fig. 1a and b. We generate a set of non-optimal kernels for a given signal, \(A(-t)=a\left(b\delta(t)+\frac{(1-b)}{\tau_{A}}e^{-t/\tau_{A}}\right)\), by varying \(a\), \(b\) and \(\tau_{A}\) away from the optimum. To separate the effects of varying these three quantities, we fix two of the three quantities at \(a=a_{\rm opt}\), \(b=b_{\rm opt}\) and \(\tau_{A}=\tau_{A}^{\rm opt}\), and vary the other.
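A one-parameter scan of this kind is simple to reproduce with the helpers sketched earlier; below the integration time is varied while \(a\) and \(b\) are held at the optimal values quoted in the caption of fig. 3 (the other signal parameters are also taken from that caption).

```python
import numpy as np

# Signal statistics from the caption of fig. 3
Sigma_s, Sigma_eta, cross, var_stau = build_covariances(
    sigma_s2=1.0, tau_s=2.0, sigma_eta2=2.0, tau_eta=0.02,
    dt=0.02, T=2.0, tau=1.0)
N = Sigma_s.shape[0]
a_opt, b_opt = 41.23, 0.984          # quoted optimal amplitude and delta weighting

scan = []
for tau_A in np.linspace(0.005, 0.5, 50):          # vary only the integration time
    A = parametric_kernel(a_opt, b_opt, tau_A, dt=0.02, N=N)
    I_pred, I_past = information_quantities(A, Sigma_s, Sigma_eta, cross, var_stau, 1.0)
    scan.append((tau_A, I_pred, I_past))           # I_pred and I_past peak at an optimal tau_A
```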
How does varying the amplitude, \(a\), allow the system to mitigate our three error types? Recall the expression for the compressed output: \(x=\mathbf{\bar{A}}(\mathbf{\bar{s}}+\mathbf{\bar{\eta}})+\xi\). When the amplitude of \(\mathbf{\bar{A}}\), \(a\), is small, the compression noise \(\xi\) dominates the signal. In this case, both \(I_{\rm pred}\) and \(I_{\rm past}\) are small (fig. 3a). As \(a\) increases, both \(I_{\rm pred}\) and \(I_{\rm past}\) increase as the kernel amplifies the corrupted signal over the compression noise. Eventually, the compression noise becomes negligible compared to the propagated input noise, and \(I_{\rm pred}\) and \(I_{\rm past}\) plateau as a function of \(a\). Indeed, while changing \(a\) can lift the signal above the compression noise \(\xi\), it cannot mitigate the effect of signal noise \(\eta\) because the kernel amplifies the signal and input noise together. Similarly, as increasing the amplitude of the kernel does not affect how different points in the trajectory are weighted relatively in the kernel, it cannot decrease dynamical error.
Since varying the amplitude cannot mitigate signal noise, it can only be mitigated by varying the relative height and width of the exponential part of the kernel. The exponential part of the kernel takes a non-uniform time average over those time points in the past, mitigating signal noise. Consider first the integration time of the kernel, set by \(\tau_{A}\). As \(\tau_{A}\) increases and the kernel widens, both \(I_{\rm pred}\) and \(I_{\rm past}\) initially increase as the system averages out the signal noise (fig. 3c). They then peak at two different optimal integration times, which arise from the trade-off between minimizing the dynamical error and time averaging[2].
We next ask how \(I_{\rm pred}\) and \(I_{\rm past}\) change with the relative importance of the \(\delta\) peak: \(b\). Initially, \(I_{\rm pred}\) and \(I_{\rm past}\) decrease very slowly as the relative importance of the \(\delta\) peak increases. As \(b\) approaches one, both quantities drop sharply. For all values of the compression level \(\gamma\), having a \(\delta\) peak decreases the amount of past information the system obtains with the kernel (fig. 5a). For all but the lowest values of the compression level \(\gamma\), having a \(\delta\) peak also decreases the amount of predictive information the system obtains with the kernel (fig. 5b). Only in the zero compression limit does adding a \(\delta\) peak increase \(I_{\rm pred}\); in the SI, we prove that this is true even as \(dt\to 0\). In this limit, the system finds the optimal trade-off between minimising signal noise via a wide integration kernel and minimising dynamical error via a \(\delta\) peak (the compression noise is negligible). The \(\delta\) peak emphasises the most recent signal value, the signal value most correlated with the future point the system is trying to predict. In this limit, \(I_{\rm pred}\) peaks at \(b=b_{\rm wiener}\) (fig. 5d, dashed lines).
Since both \(I_{\rm past}\) and \(I_{\rm pred}\) (except for the uncompressed limit) decrease upon adding a \(\delta\) peak, a pertinent question arises: why does the optimal kernel of the system at the information bound feature a \(\delta\) peak at all? The answer is that \(I_{\rm past}\) decreases more than \(I_{\rm pred}\) upon adding a \(\delta\) peak, so that the ratio \(I_{\rm pred}/I_{\rm past}\) increases. This effect is strongest in the compressed regime (fig. 5c), which explains why the \(\delta\) peak is most pronounced in the high \(\gamma\) regime of strong compression.
We can now understand the shape of the kernel along the information bottleneck curves (fig. 4). We start in the highly compressed region where \(I_{\rm past}\) and \(I_{\rm pred}\) are low, because the amplitude of the kernel \(a\) is low (fig. 4a) and the compression noise is relatively large. Because of the latter, the effect of the signal noise is relatively small. This means that time averaging is not important. The optimal integration time will be short because that minimizes the dynamical error (fig. 4c). The \(\delta\) term will be relatively large (fig. 4b) because increasing the \(\delta\) peak maximises the objective function by decreasing \(I_{\rm past}\) more than \(I_{\rm pred}\).
To increase \(I_{\rm past}\) and \(I_{\rm pred}\) (corresponding to decreasing \(\gamma\)), the amplitude of the kernel must rise so that the signal is lifted above the compression noise (fig. 4). Because the kernel acts on both the signal and the signalling noise but not the compression noise, this inevitably makes the effect of the signal noise stronger than that of the compression noise. This means that time averaging becomes more important, which in turn necessitates a longer integration time (fig. 4c). Since increasing \(\tau_{A}\) also increases the magnitude of the kernel, amplifying the signal and signalling noise over the compression noise, the relative importance of the \(\delta\)-peak contribution falls.
In the regime of high \(I_{\rm pred}\) and \(I_{\rm past}\) (low \(\gamma\)), the compression noise has become negligible, and the output noise is caused by a combination of signal noise and dynamical error. The optimal integration time in the uncompressed limit arises from the trade-off between these two error types. Similarly, since adding a \(\delta\) peak reduces dynamical error, this trade-off also sets the optimal relative height of the exponential part of the kernel and the \(\delta\) peak. Because the compression noise is negligible in this regime, the numerical procedure no longer finds a unique solution for the amplitude; it only ensures that the amplitude is large enough. In the limit \(\gamma\to 0\), the kernel becomes identical to that given by the Wiener filter, as we show in Appendix 2. The Wiener filter has been used to analyse optimal kernels for Markovian signals [18], although that study did not address the effect of correlations in the noise.
Now that we understand the optimal shape of the integration kernel, we are in a position to understand the effects of varying the magnitude and the correlation time of the input noise, \(\sigma_{\eta}^{2}\) and \(\tau_{\eta}\), respectively. The correlation time of the exponential part of the kernel increases and the relative weight of the \(\delta\) peak decreases with \(\sigma_{\eta}^{2}\) because more signal noise requires more time averaging (fig. 6a and b). In the absence of noise, \(\sigma_{\eta}^{2}=0\), the kernel takes the form of a \(\delta\) function because, for a Markovian signal, all the predictive power is stored in the current signal value. Indeed, in all of our systems, time averaging is performed to better estimate the
current signal, which is then maximally predictive of the future signal.
The kernel also changes with the noise's correlation time. To mitigate the effects of correlated noise, the system must time average over periods longer than the correlation time of the noise, but shorter than the correlation time of the signal: \(\tau_{s}>\tau_{A}>\tau_{\eta}\). Initially, as \(\tau_{\eta}\) increases, the width of the exponential part of the kernel \(\tau_{A}\) increases (fig. 6c). As \(\tau_{\eta}\to\tau_{s}\), the width of the kernel decreases because the system can no longer average out the noise without averaging out the signal. This also explains why the relative importance of the exponential filter decreases and that of the \(\delta\) peak increases as \(\tau_{\eta}\to\tau_{s}\) (fig. 6d). Conversely, decreasing the input correlation time prioritises the exponential filter. Indeed, Becker _et al._ derived using Wiener filtering theory the optimal integration function for signals with \(\delta\) correlated input noise, corresponding to \(\tau_{\eta}\to 0\), and found that the optimal kernel is a simple exponential filter [18]. Lastly, we note that when the correlation time of the signal noise becomes comparable to the correlation time of the signal itself, \(\tau_{\eta}\sim\tau_{s}\), the system cannot time average out the signal noise without time averaging out the signal itself. The system cannot do better than taking an instantaneous kernel, and the relative height of the exponential part of the kernel goes to zero (fig. 6d). This behaviour reflects that observed for cellular signaling systems [2].
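The trends of fig. 6 can be probed in the same framework by re-optimising the kernel while sweeping the noise statistics at fixed \(\gamma\). A rough sketch (on a coarser grid than the figures, purely to keep the illustration fast):

```python
import numpy as np

kernels = {}
for sigma_eta2 in [0.5, 2.0, 8.0]:                 # increasing signal-noise variance
    Sigma_s, Sigma_eta, cross, var_stau = build_covariances(
        sigma_s2=1.0, tau_s=2.0, sigma_eta2=sigma_eta2, tau_eta=0.02,
        dt=0.05, T=1.0, tau=1.0)
    kernels[sigma_eta2] = optimal_kernel(
        Sigma_s, Sigma_eta, cross, var_stau, var_xi=1.0, gamma=0.0)
# Fig. 6a reports that the optimal kernel widens as sigma_eta2 grows;
# an analogous sweep over tau_eta probes the behaviour in figs. 6c and d.
```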
## 5 The push-pull network
Having calculated the properties of optimal kernels, we now wish to compare our results to a standard signal-processing motif in biology: the push-pull motif. The cell must detect and predict the concentration of ligand
Figure 6: a) **The system integrates over a longer time interval as the noise variance increases.** The width of the kernel at compression level \(\gamma=0\) (max \(I_{\text{pred}}\) and \(I_{\text{past}}\)) is a tradeoff between minimising the effect of signalling noise and minimising dynamical error. Widening the kernel increases dynamical error but decreases the effect of signalling noise. As \(\sigma_{\eta}^{2}\) increases, the effect of signalling noise increases relative to dynamical error, and so the optimal kernel will be wider. b) **The system prioritises time averaging over instantaneous measurement as the noise variance increases.** The tradeoff between minimising signalling noise and dynamical error also sets the optimal ratio between the exponential part of the kernel and the \(\delta\) peak. Decreasing the height of the \(\delta\) peak by decreasing \(b\) decreases the effect of signalling noise and so is prioritised as the noise variance increases. c) **The system integrates over a longer time interval as the correlation time of the noise increases.** Correlated noise requires a wider window of time averaging relative to uncorrelated noise because of the persistence of the correlations. Thus the integration time of the kernel increases as the correlation time of the noise increases. Yet, when the integration time becomes too long compared to the input correlation time, time integration not only averages out the noise in the signal, but also the signal itself. Beyond this point, the system gives up the mechanism of time integration and instead becomes an instantaneous responder: \(\tau_{A}^{\text{opt}}\) rapidly drops to zero. d) **The system prioritises an instantaneous measurement over time averaging as the correlation time of the noise increases.** Since the kernel widens as the correlation time of the noise increases, the dynamical error increases. Thus the system prioritises the \(\delta\) function by decreasing \(b\) as \(\tau_{\eta}\) increases. For all graphs \(\gamma=0\), \(\sigma_{\xi}^{2}=1\), \(\sigma_{s}^{2}=1\), \(\tau_{s}=2\)s, \(\tau=1\)s, \(dt=0.02\)s and \(T=2\)s.
Figure 8: a) **The push-pull network has a wider kernel for larger noise variance (smaller \(R_{T}\))**. We plot the correlation time of the kernels found for the push-pull network using a discretised version of the method from [34] (SI) for varying \(R_{T}\) (corresponding to varying \(\sigma_{\eta}^{2}\)). We see that as \(R_{T}\) increases and signal noise variance decreases, the width of the optimal kernel decreases. Thus the push-pull kernel uses time averaging to suppress signal noise, like the optimal kernel. b) **The push-pull network has a narrower kernel for greater compression (smaller \(X_{T}\)).** We repeat the method increasing \(X_{T}\) which is comparable to decreasing the compression level \(\gamma\). Here \(\tau_{\eta}=0.02\) and \(R_{T}=100\) corresponds to \(\sigma_{\eta}^{2}=0.032\). The correlation time of the optimal kernels decreases as resources decrease and compression increases. Thus noise suppression through time integration is resource intensive for the push-pull network. c) **The push-pull kernel initially widens and then narrows as noise correlation time increases (\(\tau_{\eta}\))**. We repeat the method increasing \(\tau_{\eta}\). We see that as \(\tau_{\eta}\) increases, the width of the optimal kernel initially increases and then decreases. In all graphs the average proportion of ligand-bound receptors and activated output molecules is \(\phi_{l}=\phi_{x}=0.5\) respectively, and the average concentration of ligands \(c=1\). In all, \(\sigma_{s}^{2}=2\times 10^{-4}\). In a) and b) \(\tau_{\eta}=0.02\)s. In b) and c) \(R_{T}=1\times 10^{4}\) and in a) and c) \(X_{T}=1\times 10^{12}\).
Figure 7: a) **A summary of the push-pull motif.** Receptors on the outside of a cell interact with ligand molecules in the environment. Ligand molecules bind to and unbind from the receptors. Inside the cell, output molecules diffuse in and out of contact with the receptors. Output molecules in contact with ligand-bound receptors become activated. Output molecules in the solution spontaneously deactivate. There is a total number \(X_{T}\) of output molecules. b) **The push-pull motif lies below the information curves for all receptor number values.** Using equation 48, we can calculate optimal information bounds (solid lines) at constant noise, which can be directly compared to the curves generated using the push-pull kernel (dashed lines). We see that the information curves traced using the push-pull kernels lie below the optimal information curves for all \(\sigma_{\eta}^{2}\propto\frac{1}{R_{T}}\) and \(\sigma_{s}^{2}=2\times 10^{-4}\).
molecules in the environment. The push-pull motif consists of receptors on the surface of a cell that detect the concentration of ligand molecules in the environment by binding to them (fig.7a). Inside the cell, output molecules diffuse in and out of contact with the receptors. Output molecules in contact with bound receptors are activated, using ATP to drive the reaction. These molecules then spontaneously deactivate over time. The number of activated output molecules reflects the number of bound receptors, allowing the cell to estimate the concentration of ligand molecules outside the cell. Intrinsic to the push-pull network is correlated signal noise caused by the binding of ligand molecules to receptors. We are interested in whether the push-pull kernel can mitigate this noise.
The push-pull kernel is an exponential function: \(A(-t)\propto e^{-t/\tau_{A}}\)[34]. Here, \(\tau_{A}\) is the integration time of the kernel. In the supplementary information, we extract the variance of the signal noise from the push-pull system (SI, equation 48), which we simplify to:
\[\sigma_{\eta}^{2}\propto\frac{1}{R_{T}} \tag{17}\]
to aid understanding. Here \(R_{T}\) is the total number of receptors. Additionally, for the push-pull motif, resources, and therefore compression level, are dictated by the number of receptors \(R_{T}\) and the number of output molecules \(X_{T}\).
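Within the discrete framework above, the push-pull kernel is simply the parametric kernel with \(b=0\): an exponential filter with no \(\delta\) peak. The comparison in fig. 7b can therefore be reproduced in spirit by evaluating both kernel shapes on identical signal statistics. The sketch below does this only qualitatively, treating \(\sigma_{\eta}^{2}\) as the controlled quantity rather than mapping it through \(R_{T}\); all parameter values are illustrative.

```python
import numpy as np

Sigma_s, Sigma_eta, cross, var_stau = build_covariances(
    sigma_s2=1.0, tau_s=2.0, sigma_eta2=2.0, tau_eta=0.02,
    dt=0.02, T=2.0, tau=1.0)
N = Sigma_s.shape[0]

A_pushpull = parametric_kernel(a=40.0, b=0.0, tau_A=0.05, dt=0.02, N=N)   # exponential only
A_with_delta = parametric_kernel(a=40.0, b=0.98, tau_A=0.05, dt=0.02, N=N)

for name, A in [("push-pull-like (b = 0)", A_pushpull), ("delta + exponential", A_with_delta)]:
    I_pred, I_past = information_quantities(A, Sigma_s, Sigma_eta, cross, var_stau, 1.0)
    # Fig. 5c reports a higher I_pred / I_past for the kernel that includes a delta peak.
    print(name, I_pred, I_past, I_pred / I_past)
```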
In what follows, we increase the compression level by reducing \(X_{T}\) while keeping \(R_{T}\) constant to keep the signal noise variance constant. Fig. 7b shows that the information curves traced by the push-pull kernel (dashed lines) fall below those of the optimal kernels (solid line). However, the difference is small, hinting that the push-pull network is nearly optimal. To analyze this further, we study the integration kernels.
We find that, just like the optimal kernels, the optimal kernels for the push-pull motif widen with the variance of the signal noise and narrow with compression. Fig. 8a shows that when \(R_{T}\) decreases, which increases the signal noise variance, the push-pull kernels widen. In contrast, when \(X_{T}\) decreases, which increases the compression level, the kernels narrow (fig. 8b). Thus, the push-pull kernel uses time averaging to mitigate signal noise, and the ability to time average is reduced by compression, like the optimal kernels (fig. 4).
In ref. [2], the authors observe that cells using the push-pull motif can reduce the sensing error by either increasing the number of receptors \(R_{T}\) or by taking more measurements per receptor via the mechanism of time integration (increasing \(\tau_{A}\)). These two statements can now be directly related to signal noise. Increasing the number of receptors reduces the signal noise \(\sigma_{\eta}^{2}\), while time integration corresponds to widening the kernel to average out signal noise. In the same study, the authors observe an optimal integration time which increases as the number of receptors decreases. Moreover, they found that the optimal integration time decreases for larger compression noise, i.e., smaller \(X_{T}\). Our results corroborate and explain these findings (see figs. 6a and 8a).
Additionally, in ref. [2], the authors observe that cells using the push-pull motif respond to an increase in the correlation time of the noise by initially increasing the integration time of the kernel (fig. 8c). Then as \(\tau_{\eta}\) approaches \(\tau_{s}\), time integration averages out the signal as well as the noise, lowering the utility of time integration as a strategy. As \(\tau_{\eta}\) increases further, the optimal integration time gradually decreases back to zero. Here, the push-pull network's best strategy is merely to capture the current value of the signal, despite noise corruption. In the limit that \(X_{T}\) is so large as to be effectively infinite, rendering the compression noise negligible, the integration time decreases slowly beyond the peak. For a smaller \(X_{T}\) where compression noise is finite, this drop is sharp (SI, fig. S4), mirroring the equivalent drop found for our optimal system (fig. 6d).
The push-pull kernel does not and cannot manifest a \(\delta\) peak. The system instead has an exponentially decaying kernel alone. As \(\tau_{\eta}\to 0\), the theoretical optimal kernels also tend towards an exponentially decaying kernel without a \(\delta\) peak (fig. 2a). We suggest, therefore, that the push-pull kernel is optimised for signal noise with a short correlation time. Nonetheless, while the system cannot replicate the \(\delta\) peak, the push-pull kernel still attempts to mitigate correlated signal noise in other ways. Specifically, the kernel widens as \(\tau_{\eta}\) increases (fig. 8c), as observed for the optimal kernels (fig. 6c).
What does the push-pull system lose by not being able to implement a \(\delta\) peak? For all but the lowest values of the compression level \(\gamma\), the kernel without a \(\delta\) peak collects more predictive and past information (fig. 5a and b) than a kernel with a \(\delta\) peak. The \(\delta\) peak emerges only when the predictive information is maximized under the specific constraint of limiting past information. As discussed in [35], maximizing predictive information while constraining past information will yield systems that differ from those that maximize predictive information under the constraint of resource cost in terms of protein copies and energy. It is conceivable that the latter would not yield a \(\delta\) peak.
A biological system could hypothetically create a network capable of implementing a kernel much closer in shape to our theoretically optimal kernels. Creating an additional \(\delta\) peak would require coupling two push-pull motifs in parallel, with different turnover rates of the readout [9]. The faster push-pull motif would provide a sharp spike in the kernel close to \(t=0\)s, which would approximate a \(\delta\) function, while the slower one would act as the exponentially decaying part of the kernel. A motif such as this is resource intensive, so the limited potential advantages may explain why two such parallel push-pull networks have not yet been observed in cellular systems.
## Conclusions
Time averaging is essential for accurately detecting and predicting the true values of signals corrupted by signal noise. An optimal system will vary the width of the kernel to compensate for different characteristics of this signal noise, widening it for a greater variance or longer correlation times and shortening it if the opposite is true. Where the noise characteristics demand it, most notably when the correlation time of the noise is long, kernels will widen to the extent that dynamical error becomes a concern for the system. In this case, an optimal system will add a \(\delta\) peak in the kernel at the current time. The push-pull kernel replicates the optimal kernel for systems where the noise correlation time is very short but otherwise fails as it cannot replicate a \(\delta\) peak.
If a system has finite resources for prediction, its ability to time average is reduced: both the theoretically optimal kernels and those of the push-pull motif narrow as their resources are restricted at fixed signal noise. With sufficient compression, the optimal kernels will only collect the most recent time point, omitting time averaging completely. In such cases, the system cannot mitigate the effect of signal noise at all.
We have combined the information bottleneck and the Wiener filter to study these systems. This technique can be applied to more complex signals, such as those described by the generalised Langevin equation [1]. Studying how noise corruption affects techniques for processing more complex signals is the subject of further work.
|
2305.02561 | Beneficence Signaling in AI Development Dynamics | This paper motivates and develops a framework for understanding how the
socio-technical systems surrounding AI development interact with social
welfare. It introduces the concept of ``signaling'' from evolutionary game
theory and demonstrates how it can enhance existing theory and practice
surrounding the evaluation and governance of AI systems. | Sarita Rosenstock | 2023-05-04T05:27:51Z | http://arxiv.org/abs/2305.02561v1 | # Beneficence Signaling in AI Development Dynamics
###### Abstract
This paper motivates and develops a framework for understanding how the socio-technical systems surrounding AI development interact with social welfare. It introduces the concept of "signaling" from evolutionary game theory and demonstrates how it can enhance existing theory and practice surrounding the evaluation and governance of AI systems.
## 1 Introduction
To conceive of something as a signal is to attend to the roles it plays in the social dynamics of information (Skyrms, 2010). That is, rather than merely asking what a token means denotatively, we are interested in what meaning emerges from how a community uses that token for communication. This paper explains and explores what it would look like to consider the socio-technical dynamics surrounding AI development through a signaling lens. In particular, I aim to show that this perspective can help us:
* identify functional roles that signals emitted in the course of AI development can/do play in shaping the trajectory of the sector towards more or less socially beneficial outcomes
* articulate more precisely what the goals are for "AI policy interventions" (which often have the form of demands or requests for particular sorts of signals), better assess how well a given strategy addresses that goal, and point towards some interesting novel strategic directions.
To start things off, I'll first explain what I mean by "beneficence signaling." I'll then discuss why I think this is a productive avenue of research for those interested in promoting beneficial AI. I'll proceed to sketch some of the core components of the theoretical framework I hope to build around this concept, before giving an outline of the rest of the paper.
### Beneficence signaling
By "benefcience signaling" I mean any of the myriad ways AI developers can express to one another, users, regulators, and the public that their technology and development process are conducive to socially beneficial outcomes.
Explicit signals of this sort include public statements and press releases. But a signal can be any observable output ("token") which carries information about an individual, company, or product that can be used to inform beliefs and behaviors with respect to that entity. This includes what they say and don't say, what they visibly do and don't do, and how they choose to say and do these things. Examples of relevant signals include relatively explicit declarations of ethical commitment such as "AI Ethics Principles", agreeing to independent audits, scores on various "fairness" metrics, and press releases detailing harm mitigation strategies. But companies do not get to choose which of the signals they emit communicate ethically salient information; every visible output and action they make is a candidate signal for others to evaluate and use to condition their own behavior.
Explicit statements of commitment to behave ethically do not and cannot on their own serve as signals of ethical intent. To do so, they would need to cohere robustly to actual ethical behavior by actors who send these signals in a dynamical social context. This means that to reliably send or receive a signal of something, one needs an operative model of this social context of communication, and possibly even to intervene on those dynamics.
### Motivation
Traditional approaches to promoting safe and beneficial AI development are largely "top-down", centering around governance. These approaches struggle to keep up with rapidly evolving technology, making resulting legal requirements either immediately outdated or else too technologically non-specific to constitute real guarantees. The "bottom-up" approaches that do exist tend to focus on ways companies can unilaterally make their tech more pro-social, but do not attend to the actual incentives that might motivate companies to employ their recommendations. This paper joins a small minority of AI ethics work taking a systems-level perspective that connects and encompasses both approaches.1
Footnote 1: Connections with existing work in this field are discussed in section 3.1.
The starting point of a signaling framework is the full socio-technical system-including companies, individual workers, governments, universities, think tanks, etc. Calls for transparency and accountability from governments and NGOs propagate as signals through this system and influence social and economic behavior surrounding AI, as do visible outputs from companies themselves. The fundamental insight of the signaling framework is that what a signal _means_ depends on the role it plays in this system in terms of sharing information and incentivizing behavior. This perspective can be useful to regulators to identify
signals that are tightly associated with robust assurances of prosocial behavior by companies.
A signaling lens also draws our attention to how these signals can serve to promote better outcomes more broadly through positive feedback loops, rather than merely assessing and incentivising individual companies. For example, one can conceive of beneficial AI development as a _collective action problem_ wherein even well-intentioned industry actors cannot afford the costs associated with better practices (Askell et al., 2019). One class of solutions to this problem can be found by analogizing work from economics and evolutionary biology regarding how signals of cooperative intent can stabilize risky cooperation and enable prosocial strategies to prevail in competitive environments (Rosenstock and O'Connor, 2018).
### Features of a signaling framework
A key feature of what makes a token an effective signal is that the token has community uptake in such a way as to actually adhere to its intended meaning. We can only verify that other tech companies are performing desired behaviors if they are sending the right kinds of signals. So, when we ask (explicitly by laws or social mandates, or implicitly by exemplary conduct) for particular behaviors, we want our ask to include or consist of sending particular sorts of signals. Exploring the features of signals we can develop and/or promote to best serve these aims is thus a promising research direction.
Before diving more deeply into the framework, I want to outline some key considerations to bear in mind throughout the report. I'm classing these under the headings Content, Uptake, Perspective, and Incentives.
#### 1.3.1 Content
A core component of the theoretical contribution I hope to make here is to develop the relevant notion of "content" for signals of interest. In section 2, I attempt to assemble existing literature on signaling games into a unified account of signal content that is readily applicable to signals generated in the course of AI development that can help facilitate broadly beneficial outcomes. Much of the remaining theoretical work can be understood as articulating and responding to various challenges associated with ensuring that a signal latches on to its intended target content.
Considerations include:
* Fakeability: How difficult is it to "successfully" send the signal without performing the associated action?
* Contextuality: How does context shape meaning? How can "the same" signal mean different things depending on context or perspective?
* Dynamics: How does meaning shift over time, especially in response to perverse incentives?
#### 1.3.2 Uptake
What features make signals (and associated behaviors) more likely to get community uptake in the right ways? One potential application of this framework is to consider how companies with prosocial intentions can use their positions as technological leaders to create a norm among their competitors of sending particular sorts of signals that latch onto beneficial action. For this to work, it is not sufficient for the identified or designed signals to correspond to the right material facts and behaviors; the correspondence must be robustly evolvable in the relevant fitness context (primarily capitalistic in this case). This points towards considerations of virality and information flow as not merely a second step after signal selection, but intertwined with and constitutive of what a signal does and can mean.
Considerations:
* Cost: "Cheap" signals can be sent without substantial behavioral changes, but "expensive" signals might be incompatible with competitive success.
* Naturalness: Should we be looking for "natural" byproducts of tech development to serve as signals, or should we develop novel signals designed for a particular purpose? The former is "easier" and requires less work to promote uptake, but greatly restricts our option space.
* Simplicity: In what ways does a "good" signal need to be "simple", and how is this a problem when our goal is to signal complex states of affairs?
#### 1.3.3 Perspective
A "signaling game" involves a _sender_ and a _receiver_, each of whom decide what and how to communicate based on their own beliefs and motivations. While this may seem obvious, a clear consideration of who and why a signal is sent, and who is receiving it and how, is largely absent from much of the current literature in ethical AI.
Considerations:
* Legibility: Signaling considerations might motivate reframing our objectives from beneficence to visible beneficence in some circumstances (Benn and Grastien, 2021).
* Multiplicity: Especially for public signals, how might the same token signal different things to different agents? And how can signals effectively serve multiple functions in such contexts?
* Receivers: Consider how signals are perceived from different standpoints, e.g. AI workers, competitors, consumers, and regulators.
* Senders of interest: Consider who the relevant "agents" are, if any, to understand as the "senders" of signals, e.g. organizations, individual developers, and AI systems themselves.
#### 1.3.4 Incentives
How does the inclusion of signaling in a social context influence the individual incentives of participants, and vice versa? As I will elaborate in section 3.1, a signal perspective enriches existing game-theoretic frameworks for understanding social and economic incentives from a policy perspective.
Considerations:
* Cooperation: How can signals help facilitate cooperation?
* Gamification: How does knowledge of the role of a signal in an incentive scheme undermine the association of signal with content, and how can we protect against this?
### Outline
In section 2, I go into more detail outlining the game theoretic underpinnings of the signaling framework. In section 3 I will draw some connections with other concepts and lines of inquiry. I conclude in section 4 by extracting some key lessons.
## 2 How signals acquire meaning
While it's easy enough to acknowledge that the content of signals is substantially determined by context, we're left with the difficult task of accounting for _how_ context determines meaning. While I doubt there is an adequate one-size-fits-all account or determination procedure we can defer to, there are plenty of formal and empirical tools we can bring to bear on particular signaling systems. I'm hopeful that these can be honed into an industry-specific theoretical toolkit that will help guide us in understanding how signals of beneficial AI acquire their meanings. In this section, I make a first attempt to assemble existing game theory literature on signals towards this end.
### The basic model
It's instructive to start with a simple, minimal formal model of how this can work in the form of a sender-receiver game, and slowly add complexity to show how the model can be adapted to suit different contexts.2 We start with two
agents: the _sender_ and the _receiver_, each with their own motivations and beliefs, which we'll examine more deeply in what follows. Keeping things simple, the sender might be in a position to observe that a coin lands heads or tails, while a receiver is not. Upon observing either \(H\) or \(T\), they will choose to send message \(m\) or \(m^{*}\) to the receiver. Upon receiving one or the other signal, the receiver selects an interpretation \(i\) or \(i^{*}\).
As shown in figure 1, the sender decides which signal to send by considering how they expect the receiver to respond, i.e. based on their model of how \(m\) and \(m^{*}\) correspond to \(i\) and \(i^{*}\) for the receiver, and whether the sender hopes to elicit \(i\) or \(i^{*}\). Which interpretation-\(i\) or \(i^{*}\)-is preferable to the receiver depends on whether the coin landed heads or tails, which they can only infer indirectly via their model of how the sender associates \(H\) or \(T\) with \(m\) or \(m^{*}\).
Let's start with the easy case where the sender just wants the receiver to know the truth about whether the coin landed heads or tails. If both agents share a language and trust in one another's honesty, it's easy to see what they should do. Upon observing either Heads or Tails, sender will send the message \(m=\) "The coin landed Heads" or \(m^{*}=\) "The coin landed Tails" respectively. The receiver will interpret the message as meaning \(i=\) The coin _actually_ landed Heads or \(i^{*}=\) The coin _actually_ landed Tails.
If the sender and receiver don't share a language, the sender and receiver will still have to decide on strategies for sending and interpreting messages. A strategy for the sender will be an association between stimuli and messages. In the absence of any clues as to how the receiver will interpret their messages, the sender will be indifferent between the association \(H\mapsto m\) and \(T\mapsto m^{*}\), and the association \(H\mapsto m^{*}\) and \(T\mapsto m\). The receiver will be similarly indifferent between the interpretations [\(m\mapsto H\) & \(m^{*}\mapsto T\)] and [\(m\mapsto T\) & \(m^{*}\mapsto H\)]. This gives us two senses of the "meanings" of the signals \(m\) and \(m^{*}\): one from the sender's encoding, and one from the receiver's interpretation.
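As a concrete companion to this description, here is a minimal Python sketch of the game (the strategy encoding and payoff convention are my own illustrative choices, not notation from the literature): it enumerates all pure sender and receiver strategies and scores each pairing by how often the receiver's interpretation matches the state.

```python
import itertools
import numpy as np

# Minimal sketch of the two-state sender-receiver game described above.
# States: H, T (equiprobable). Messages: m, m*. Interpretations: i, i*.
# A sender strategy maps state -> message; a receiver strategy maps message -> interpretation.
# Both players succeed when the interpretation matches the state.
states = ["H", "T"]
messages = ["m", "m*"]
interpretations = {"H": "i", "T": "i*"}   # interpretation i "means" H, i* "means" T

def expected_payoff(sender, receiver):
    """Average communication success over equiprobable states."""
    return np.mean([receiver[sender[s]] == interpretations[s] for s in states])

sender_strategies = [dict(zip(states, perm)) for perm in itertools.product(messages, repeat=2)]
receiver_strategies = [dict(zip(messages, perm))
                       for perm in itertools.product(interpretations.values(), repeat=2)]

for sender, receiver in itertools.product(sender_strategies, receiver_strategies):
    print(sender, receiver, expected_payoff(sender, receiver))
# The strategy pairs scoring 1.0 are exactly the two equally good "languages"
# [H <-> m, T <-> m*] and [H <-> m*, T <-> m]; pooling or misaligned pairs score 0.5 or 0.0.
```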
### From model to meaning
Figure 1: Illustration of possible solutions to sender-receiver game. Sender has two options to encode message, shown by solid and dotted lines. Receiver has options for decoding message, shown by solid and dotted wavy lines.

Both sender and receiver require a _strategy_ for forming this sort of association between states and messages. Since we're starting with the simplifying assumption that the sender and receiver are motivated by the shared goal of conveying accurate information, as long as one or the other is able to access some feedback regarding how well communication is working, they will eventually settle on a shared language [\(m\leftrightarrow H\) & \(m^{*}\leftrightarrow T\)] or [\(m\leftrightarrow T\) & \(m^{*}\leftrightarrow H\)]. Such feedback may take the form of deliberate strategic adaptation to new information, or selection effects associated with aligned communicators out-competing misaligned ones.3
Footnote 3: A lot of the language around this, e.g. “selection”, “evolution”, “fitness”, betrays the origins of evolutionary game theory in ecology and biology. For our purposes this is somewhat metaphorical, with the selection pressures being largely economic/cultural, though there are debates about the aptness of this analogy (Lewens, 2015).
In a corporate context like this one, it's tempting to speak only using the former, "strategic" terms. The relevant actors, after all, involve humans (or groups of them) with the rational faculties required to calculate the optimal course of action for themselves. Indeed, rational best response is a powerful lens, and there are particular decision-points that are best understood in these terms, and identifying these points should be a core goal of this endeavor. But this is a mistake for a number of reasons. As we stray further from the basic model, the notion of a "rational best response" loses sense as the calculations involve more interacting parameters with high variance, and the only sensible thing a truly rational actor can do is attend to more holistic features of the evolutionary dynamics in which that singular interaction takes place. Moreover, conceiving of signaling strategies as purely strategic can lead to inappropriately associating signal content with the intention of the sender. If signals are intentional, then "misleading" signals are adversarial, and thus to question meaning is to accuse a sender of bad intentions. But signals can deviate from sender intention for lots of reasons-a sender might not even _have_ an intention associated with a signal-and we should be wary of falling into this trap.
It therefore behooves us to consider both mechanisms-individual strategic behavior and evolutionary selection-in order to understand how an initial communication "problem" resolves into a "solution" in the form of a stable language associating signals with meanings. To see how these work, we begin by defining the "basic model" as a game in which both sender and receiver get a constant "reward" whenever the receiver is able to correctly ascertain whether the coin landed heads or tails.
If both actors have sufficient reasoning capabilities, they will almost certainly settle on a correct language more or less quickly. If the first message (\(m\)) by pure chance leads to the correct association (\(H\)), then both sender and receiver will be motivated by their mutual reward to continue with this association in future interactions. If the association fails at first, they will try again until something sticks. Note that there is a potential for a vicious cycle here, whereby both parties "swap" associations in light of failures and never find success. In this simple game, the cycle can be dislodged easily by any amount of experimentation in strategy by the players in signal selection or interpretation, unless they are
extraordinarily unlucky and all such attempts are hopelessly synchronized. But it is worth attending to nonetheless, as communication between rational agents with aligned intentions is often thwarted by just such accidental chance failures, though this chance may be minimized by agents' persistent strategic experimentation.
By contrast, an evolutionary perspective does not require any reasoning or strategy on the part of the players. Rather, we merely interpret the "reward" in terms of "fitness", whereby agents with higher rewards are somehow more likely to "persist" in the ecosystem. Game theory was first used in this way to explain biological evolution, but evolutionary game theory turns out to be useful to economists and social scientists as well, for understanding which companies "survive" on the market, or which cultural practices "survive" in a community. There are a number of ways we can formalize this idea mathematically in terms of _dynamical evolution_ over time.
A simple, standard way to do this is the _replicator dynamics_, by which successive generations of actors _replicate_ the strategies of previous generations in accordance with how well rewarded they were. We can imagine the first generation selecting an association between message and state purely at random. Some will succeed and some will fail, and this will be reflected in greater and lesser numbers of each strategy in the following generation. It is technically possible for both languages to evolve simultaneously in this manner-especially if the dynamics is _localized_ and agents are more likely to interact with their neighbors. But generally over time, the replicator dynamics will amplify slight advantages that might appear from one language being slightly more common, until the community settles on a single language.
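A minimal sketch of this amplification effect, under the simplifying assumption that agents are paired at random and communication pays off only when both partners use the same convention (all numbers are illustrative):

```python
# Discrete-time replicator dynamics for two competing signaling conventions, L1 and L2.
# Agents are paired at random; communication succeeds (payoff 1) only when both
# partners use the same convention. A slight initial majority for L1 is amplified
# until the population fixates on it.
x = 0.52                       # initial share of the population using L1 (illustrative)
for generation in range(60):
    f1, f2 = x, 1.0 - x        # expected payoff of each convention against a random partner
    mean_fitness = x * f1 + (1.0 - x) * f2
    x = x * f1 / mean_fitness  # replicator update: shares grow with relative fitness
    if generation % 10 == 0:
        print(f"generation {generation:3d}: share of L1 = {x:.3f}")
```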
Note that the presence of background evolutionary dynamics is perfectly compatible with a rational-strategic understanding of agent behavior. We might interpret the dynamics as at least partially deriving from intentional strategy selection, or we might imagine a background dynamical system within which one might gain advantage by behaving more strategically. These sorts of intermediate perspectives seem most appropriate for the case at hand.
### Correlation mechanisms
For the basic model, the resulting language will be purely _conventional_ in that neither association between messages and contents is more likely or apt to solve the problem of aligned meanings. The symmetry of the set-up that led to this pure conventionality is an idealization that will typically not hold-it's more accurate to view conventionality as variable (O'Connor, 2021). The symmetry might be broken in a number of ways. This is most obvious if we consider that the agents might have _some_ information about their interlocutor that will cause them to expect one association to be better aligned.
We can also expect asymmetries as a result of including two additional parameters in our model: the _costs_ associated with each message \(c(m)\) and \(c(m^{*})\), and the sender and receiver's respective _prior beliefs_ regarding the bias of the
coin, represented as probabilities \(P_{S}(H)\) and \(P_{R}(H)\). If there is any difference in cost between messages, and any difference in priors, we would expect utility maximizing agents to "prefer" (again, either in the strategically deliberate or passive evolutionary sense) the language that associates the costlier signal with the less likely outcome.
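To make the symmetry breaking concrete, a quick expected-cost comparison (a sketch of the reasoning, ignoring the payoff from successful communication, which is the same under either language once the receiver has adapted): under the language \(H\mapsto m\), \(T\mapsto m^{*}\), the sender's expected signaling cost is \(P_{S}(H)c(m)+(1-P_{S}(H))c(m^{*})\), while under \(H\mapsto m^{*}\), \(T\mapsto m\) it is \(P_{S}(H)c(m^{*})+(1-P_{S}(H))c(m)\). Subtracting, the first language is cheaper exactly when \((c(m^{*})-c(m))(P_{S}(H)-\tfrac{1}{2})>0\), i.e. when the costlier message is reserved for the state the sender regards as less likely.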
These sorts of symmetry-breaking features of a game-theoretic interaction can be understood as _correlation mechanisms_ (Hamilton, 1964), _focal points_ (Schelling, 1960), or invocations of _salience_ (Lewis, 1969). Though there may be multiple choices of actions or linguistic frameworks that my co-player can choose from, if we have sufficient understanding of one another's motivations, we can take advantage of shared observations to independently arrive at a compatible, optimal solution.4 The correlation mechanism discussed above refers to exogenous environmental features-the costs and frequencies in our shared environment. Shared social norms and laws can also function as correlation mechanisms. And by using _pre-play signaling_, we can arrive at more fine-grained mutual assurances about behavior, thereby making it easier to productively cooperate (Santos et al., 2011).
Footnote 4: See Aumann (1987) for a formalization and proof.
### Ambiguity and uncertainty
Consider also how incorporating _signal fidelity_ can impact our analysis. There might be some noise in the observation and/or communication channels, so that trust and shared language are not fully sufficient for the receiver to be confident in their interpretations. In such cases, it might be worth adding an additional "layer" to our model that separates _message interpretation_ (evaluation of the sender's intention behind sending the signal) from _belief formation_ on the basis of that interpretation and/or _reaction_ (action performed on the basis of interpretation, which may or may not be relevantly mediated by belief formation).
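As a small worked illustration of how channel noise weakens interpretation (a sketch, assuming the shared language \(H\mapsto m\), \(T\mapsto m^{*}\) and a channel that flips the message with probability \(\epsilon\)): on receiving \(m\), the receiver's posterior confidence in Heads is \(P(H\mid m)=\frac{(1-\epsilon)P_{R}(H)}{(1-\epsilon)P_{R}(H)+\epsilon(1-P_{R}(H))}\). With \(P_{R}(H)=\tfrac{1}{2}\) and \(\epsilon=0.2\), this is only \(0.8\) rather than certainty, which is part of why it can be worth separating interpretation of the message from the belief and action formed on its basis.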
Further, note that shared language will require a shared understanding of possibilities for states and messages, which is a non-trivial assumption. Since signal content is a holistic product of solutions to these sorts of signaling games, content has just as much to do with which messages _were not_ sent as the ones that were. Even cooperative communication partners might struggle to contend with the additional coordination problem associated with aligning expectations in this way. It is interesting to consider whether and how communication can occur with only partial model alignment here. Further, if there are more states than signals to choose from (e.g., because there are meaningfully different ways an AI system might satisfy some quantitative fairness criterion), any resulting language will have to associate multiple meanings with a single signal.
These considerations are relevant for us even in cases of pure communicative cooperation, but they loom larger when the goals of communication can diverge and conflict. Deceitful communication partners can weaponize ambiguity in many ways.
First, deceitful actors can leverage and overstate uncertainty about the nature and degree of harm from their actions in order to evade criticism and attempts at regulation. This sort of "manufactured doubt" has been well-documented in the history of tobacco and oil and gas industries (Oreskes and Conway, 2011). These cases can be interpreted as effective propaganda campaigns to re-interpret the "signals" generated by scientific studies as less indicative of the presence of problems and the nature of solutions. Even without antisocial actors or intentions, expressions of uncertainty from experts regarding the evidentiary basis for policy proposals can erode the potential for such evidence to serve as a strong signal upon which we coordinate effective public actions (Ojea Quintana et al., 2021).
Malicious "doubt-mongering" can be understood as a particular instance of a more general phenomenon of abusing norms of discourse. The concept of "conversational implicature" originates from Grice (1975), who argued that communication often relies on the _cooperative principle_: "make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged." The point is that the content of what we say often cannot be taken at face value from _what_ we say, but by considering _how_ we have chosen to say it. Doubt-mongering can be understood as abusing Grice's "maxim of quantity", whereby we generally presume the amount of information we are given is appropriate to a shared goal of discourse. When reasons for doubt are emphasized and expounded upon, while reasons for belief are barely mentioned, we assume it is because this ratio appropriately captures the speaker's understanding of available evidence, and is the appropriate balance for decision making. Doubt-mongers can thus _imply_ that there is more reason to doubt than there is without ever explicitly saying anything untrue.
Tech companies can similarly abuse implicature norms to generate signals that are misleading but not untrue. For example, they might emphasize their product's positive performance on some metric while conspicuously omitting information about its poor performance on relevant and related metrics. This is in fact so accepted a practice from corporations that it is not usually classified as deceptive, merely "business as usual". However, companies regularly weave back and forth between attempting to invoke a cooperative "conversational" context and retreating to a more adversarial "self-interested business" context when convenient. It might thus behoove us to consider information manipulation theory (McCornack, 1992) to analyze how companies can exploit presumed context to manipulate the perceived content of the signals they emit.
### Complicating motives
As is becoming clear, the story gets significantly more complicated once we allow that our agents might have other motivations beyond effective communication. Their motives need not be adversarial for this to be an issue. For example, once we add in the costs associated with sending signals, a sender's desire to send
accurate information might be overshadowed by their motivation to avoid the costs associated with sending particular signals. Incorporating other incentives beyond perfect communication, which are arguably always present, changes how signals can acquire meanings in the first place, and can also shift the pragmatic content of established languages.
The concept of a "misleading" or "deceptive" signal presents a _prima facie_ philosophical puzzle. Insofar as the "meaning" of a signal is entirely determined by the conditions that lead to the signal being sent, there is a sense in which signals are inherently honest (Skyrms, 2010, p. 74). For deception to occur, it must leverage a background context of expected usage, one that can be eroded by repeated exploitation. Evolutionary dynamics and rational learning thus afford us with a sort of immunity to sustained, large-scale deception. It is especially instructive for our purposes to consider some of the ways in which deceptive signaling can nonetheless persist.
First, a default expectation of honest signaling and cooperative behavior can be adaptive despite the presence of some antisocial behavior, so long as it is contained in a way that prevents it from spreading and taking over a population.5 Thus even though deceived agents may experience one-off punishments for "trusting" a signal, this need not fully undermine the signal's meaning in general. For example, if one unscrupulous vendor tells me they are selling me 5 kg of rice that turns out to in fact weigh 2 kg, I will not thereby conclude that the label "5 kg" really only _means_ less than half that amount, but confine the broken association of the term to the vendor who deceived me. So long as such experiences are sufficiently rare, I can continue to presume that grocery labels will have a particular factual relationship with their contents. But too many egregiously deceptive labels will make it unwise for me to expect any association between label and contents, thus undermining all signaling functions of labels.
Footnote 5: Formally, this can be characterized as a pseudo-separating equilibrium (Huttegger et al., 2015, p. 1003)
Unscrupulous vendors are thus more likely to get away with small perturbations of usage. This is easier to accomplish with vaguer terms than metric units, like "unit" or "bushel." When there is tolerance for these sorts of unobtrusive changes in usage, the meaning of terms _will_ dynamically evolve alongside (subtly) deceptive usage, as with the phenomenon of shrinkflation. This indicates how a kind of _dynamical deception_ can persist despite the aforementioned "immunity" that systems naturally have to the widespread deceptive use of signals with static meaning. In an arms-race fashion, by the time I have adjusted to your deceptive use of a signal, you might respond to my adjustment with further deception. So long as it is (perceived as) sufficiently useful for me to maintain some degree of association between signal and state, perpetual deceptive use can continue without fully breaking the connection between signal and meaning.
But what counts as a sufficiently "minor" deception to "pass" through the filter of deception-resistance can vary based on other factors. As mentioned, it can
be rational for me to leave myself vulnerable to major deceptions so long as I believe they are sufficiently unlikely that I still expect to benefit from presuming honesty. Deception can thus persist if my information environment is compromised, and my assessment of the likelihood of deception is miscalibrated.
This mechanism of deception-resulting from asymmetric information access, and the possibility of manipulating others' access to information-will be especially important for us to attend to here. AI developers will almost universally have more access to information than users or other stakeholders attempting to interpret their signals, and the particular nature of AI capability threatens to widen this gap in unprecedented ways.
A further explanation for the persistence of deceptive signaling is the fact that signals often appear to us in contexts involving multiple game theoretic interactions. A deceptive salesman mis-applying a weight label will not change my understanding of the use of metric units because in my broader social context, metric units are widely used in straight-forward contexts in which the "basic model" is reasonable to assume--both parties often _are_ motivated primarily to cooperatively share information. I can maintain a healthy skepticism and not take for granted that the basic model always applies, but nonetheless misidentify which circumstance a given interaction falls into. In this way, AI developers might cleverly (or even accidentally) "launder" signals through academic or government communication channels, thus attaching a veneer of cooperative epistemic motive to what is in fact a potentially adversarial profit motive. This is a particular problem for AI because the cutting edge of knowledge accumulates in corporate settings, creating a knowledge asymmetry that can foster epistemic dependence.
In sum, even _with_ robust self-correcting dynamics making deceptive signaling near oxymoronic, individual agents can persistently deceive by leveraging perpetual knowledge advantage of the contexts and consequences of certain signals, intentionally or otherwise.
### Cost, fakeability, and commitment devices
The foregoing discussion makes the phenomenon of "deceptive signaling" sound more nefarious and deliberate than it necessarily is. It is important to keep in mind that it is quite natural for agents to respond primarily to their fitness contexts, rather than always deferring to some higher calling to "communicate truthfully". This matters when considering deliberate beneficence signaling from AI developers, such as adopting ethical principles. These signals can be understood in the context of multiple different signaling games. If read as statements of factual intention on the part of the particular people asserting them, they may in fact be perfectly "honest" acts of communication. This is consistent with them being completely useless as signals from the _company_ about expected corporate behavior. If the right sorts of dynamical constraints are not in place within companies and the economic and social
contexts in which they act, the relationship between the signals intended by the authors and those in fact sent by the issuing of the document are likely not perfectly identical.
Is there a way to distinguish between two agents asserting the same ethical commitments, one who is honestly reporting their intended behaviors, and another one who is merely saying what needs to be said to be well-received by customers and regulators? The short answer is we can't; "intentions" are not the sorts of things that can be detected.
One thing we can do is look for, engineer, encourage, and rally around signals that are "unfakeable" where possible--signals that are difficult or impossible to (successfully, compellingly) send if the relevant state of affairs does not hold. Fakeability generally exists on a continuum, with some signals being more or less intrinsically tied to content. In a biological context, Thomson gazelles' "stotting," or jumping high when they see a predator, is often cited as a relatively unfakeable signal of being agile and difficult to chase. In contrast, poisonous frogs use bright coloration to signal undesirability to predators, a signal that can be and is "faked" by non-poisonous frogs who gain the signaling benefit (while also decreasing the effectiveness of the signal in terms of predator expectations).
Where signals are theoretically fakeable, having it "cost" agents to send a signal can fill the gap to help them be viable nonetheless, so long as the cost is not too burdensome to honest signalers. Huttegger et al. (2015) rightly point out that the distinction between "cost" and "fakeability" is not clear cut; gazelle stotting _is_ costly from a fitness perspective, it's just a cost that is more naturally understood as already "budgeted for" in predator evasion, which requires bursts of speed and agility regardless, whereas coloration feels like a more "artificial" cost imposition. Similarly in an economic context, we tend to think of taxes and subsidies as costs, as opposed to more diffuse impacts of reputation and relationship enhancing actions, despite these forces' ability to have the same ultimate impact in terms of costs/profits of doing business.
The moral here is that "costliness" and "unfakeability" are better understood as two different lenses which can be more or less intuitive to use, but serve the same function for behavioral incentives. Rosenstock and O'Connor (2018) demonstrate this with a mathematical model, showing that in an iterated prisoner's dilemma game in which players can send an "apology" signal upon defection to indicate intent to cooperate in the future, cost and fakeability can be used interchangeably to allow cooperative behavior to evolve despite the threat of deceptive apologizers. Importantly, those results also rely on sufficient repeat interactions with the same actors, so that signals can usefully influence strategy. They also rely on a kind of "cultural starting point" in which actors are willing to entertain apologies despite the risk of exploitation--cooperative behavior cannot evolve from nowhere without some people sticking their necks out a bit, so it can be difficult to bootstrap.
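A minimal simulation sketch in the spirit of that result (not a reproduction of the cited model; the payoff values, error rate, and apology costs below are illustrative assumptions): both an honest, occasionally erring cooperator and a perpetual defector apologize after every defection, and the partner keeps cooperating; raising the apology cost makes the deceptive strategy unprofitable while barely burdening the honest one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative payoffs for an iterated prisoner's dilemma in which every
# defection is followed by a costly apology that the partner accepts,
# so the partner keeps cooperating throughout.
REWARD, TEMPTATION = 3, 5   # mutual cooperation vs. defecting against a cooperator
ERROR = 0.05                # honest type's rate of accidental defection
ROUNDS = 100_000

def average_payoff(apology_cost, deceptive):
    """Per-round payoff of an apologizing player whose partner always cooperates."""
    total = 0.0
    for _ in range(ROUNDS):
        defect = deceptive or (rng.random() < ERROR)
        total += (TEMPTATION - apology_cost) if defect else REWARD
    return total / ROUNDS

for cost in (1.0, 3.0):
    honest = average_payoff(cost, deceptive=False)
    faker = average_payoff(cost, deceptive=True)
    print(f"apology cost {cost}: honest {honest:.2f} vs deceptive {faker:.2f}")
# With a cheap apology the deceptive type out-earns the honest one (~4.0 vs ~3.1);
# with a sufficiently costly apology, faking no longer pays (~2.0 vs ~2.95),
# while honest signalers, who apologize only rarely, bear almost none of the cost.
```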
### Evolving meaning
It's vital to keep in mind that the meaning of signals is never fixed. The system is always evolving, and the very act of signaling in a dynamical evolutionary context changes what a signal "means" by changing the associations other actors make with it. Applying a signaling framework to identify a signal's contextual meaning therefore requires humility and finesse. It is not enough to determine that a signal functionally associates with a particular "ground truth" in one context, but we must recognize the context sensitivity of that association, and take care to ensure the relevant conditions are met in other cases for the association to hold.
Meaning also "evolves" in predictable, characteristic ways which we should anticipate. One common pattern is that embodied in "Goodhart's Law," an adage often stated as "when a measure becomes a target, it ceases to be a good measure." The concept of a measurement here functions as a publicly recognized signal. The idea is that to initially be identified as a "measurement," it would have once been a strong signal. But as soon as the signal is rewarded in the system in any way that is theoretically separable from the content, the signal's meaning will be eroded by reward-seeking. A notorious example is the cobra effect (Siebert, 2001), in which a British Colonial policy of providing bounties for dead cobras, aimed at reducing the cobra population, perversely encouraged cobra breeding. While not always backfiring so dramatically, leaning too heavily on a particular observable indicator to inform policy choices is always subject to some vulnerability along these lines due to the shifting meaning of signals.
Over-reliance on single evaluation metrics is a recognised problem in AI (Raji et al., 2021; Liao et al., 2021). A signaling lens may explain how and why these failures occur, and help direct us towards solutions. While evidence from dynamics models can be sensitive to modeling choices and difficult to interpret (Rosenstock et al., 2017), one of the most robust findings in this domain is that communities are more effective at uncovering the truth if they employ a diversity of methods and entertain a variety of hypotheses, rather than rapidly agreeing on a uniform set of methods and beliefs (Zollman, 2010). Another direction suggested by this literature is to look for signals that emerge as best responses to general classes of reward games rather than specific instances, which can be more easily gamified or even non-maliciously over-fit (Skyrms and Zollman, 2010; Zollman, 2008; Mengel, 2012).
## 3 Relation to other work
### Game theoretic development dynamics
The most obvious academic cohort which can benefit from incorporating signaling considerations is the large body of work on game theory informed policy analysis (Hermans et al., 2014), especially for economic policy (Philips, 1995) and policy surrounding the development of dangerous technologies (Coe and Vaynman,
2020). There is some more recent work using game theory to analyze AI development policy in particular.
Askell et al. (2019) argue that if developing AI responsibly incurs any extra cost above doing so irresponsibly, AI developers who aim for beneficial outcomes are faced with a prisoner's dilemma. The article suggests high-level features of policy interventions that show promise for addressing game-theoretically similar situations. As incorporating signals can make iterated prisoner's dilemmas more tractable (Rosenstock and O'Connor, 2018; Santos et al., 2011), these would be a natural and advantageous addition to the base theoretical model.
In a series of ongoing papers, Robert Trager and colleagues develop a more precise game-theoretic framework for modeling the same phenomenon, which they call "dangerous technology races," which they use to illustrate how various features of the AI development context can contribute to the achievability of positive outcomes. One of the mechanisms explored is "knowledge sharing," which is modeled as a raw "amount" of information about the technology that is shared among AI developers (Stafford and Trager, 2022). One way to incorporate signaling here is to adapt this modeling feature from a categorical or one-dimensional variable to a set of more complex options about the nature and method of knowledge sharing. After all, knowledge is not a commodity to be straightforwardly "shared", but is constituted by a body of evidence along with a framework for incorporating that evidence into speculative inferences about how technology works. Different "slices" of that conglomeration will communicate different "facts," and it behooves us to recognize this in our models.
One can also understand signals as appearing here and in (Han, The Anh et al., 2022) in the form of "commitments" to cooperative behavior. So signals are not entirely absent, but as I discuss in the next section, this may not be the most fruitful way to incorporate them in our models.
### Credible commitments
Insofar as "signaling" features in existing literature on this topic, it is usually narrowly scoped to consider signals of a particular sort: stated commitments to perform a certain action or abide by a set of rules. To view these _as_ signals, we are forced to contend more explicitly with certain contextual features of the situation.
First, in drawing out the sender and receiver roles, we need to ask "Who are _we_ to demand particular sorts of signals, and who are _you_ to supply them?" What if "we" are not in an especially powerful position to make demands on AI organizations? Thinking about signals more broadly expands the scope to include things like "natural" signals that emanate from organizations just going about their business, as well as other sorts of "artificial" ones (e.g., agreeing to use model cards or allow auditing) that latch onto and encourage the sorts of behaviors we would otherwise ask for "commitments" to.
Even if we do manage to extract commitments, assessing "credibility" might be difficult in an AI context, where tasks are often quite complex and difficult to evaluate. Asking how to make the _commitments_ credible may not be the right question, and it could be useful to instead ask how to assess trustworthiness outside of the context of explicit commitment-making. The "credible commitment" framing puts us in a position where of course everyone wants to get the benefits associated with _making_ the commitment without incurring the costs of following through, and it frames the problem as one of how to set up the reward structure so that's difficult to pull off.
### Trustworthiness
In light of the above considerations, rather than looking to evaluate the credibility of particular claims, we might look for higher level indications of _trustworthiness_ of other actors more broadly, over and above mere adherence to some particular set of rules and guidelines. This is supported by work referenced earlier suggesting that cooperation is more stable when we use general strategies for classes of games rather than "fine-tuned" strategies. So what could a general and effective "trustworthiness" signal look like?
Hardin (2002) characterizes trustworthiness in terms of "encapsulated self-interest"--I can trust someone if it is in their self-interest to take my self-interest into account. Regardless of the adequacy of this account, it is not readily conducive to a signaling framing. To assess you as trustworthy, I need to determine what motivates you, and (as discussed below in the context of intent) this is not directly observable. Jones' 2012 account is more promising here.
"B is trustworthy with respect to A in domain of interaction D if and only if she is _competent_ with respect to that domain, and she would take the fact that A is _counting on her_, were A to do so in this domain, to be a _compelling reason_ for acting as counted on." (Jones, 2012, p. 61)
Jones' account points towards three features that could possibly be captured in an observable signal: (i) competence in a domain, (ii) knowledge of a person's reliance, and (iii) whether that knowledge plays an appropriately causal role in reasoning about how to act. (i) and (ii) point us towards looking first for indications that a system is competent to perform a particular function and recognizes our reliance.6 While "competence" and "recognition" broadly will take some work to operationalize, there will often be clear prerequisites to look for, such as the presence of certain functionalities and data structures, and past performance in similar situations. The key component to attend to here is (iii): figuring out how to define and detect appropriate inclusion of reliance into a _reasoning_ process for action. Here we might look to work in explainable AI
(Miller, 2018, 2019) for guidance.
As Jacovi et al. (2020) argue, our goal should be to achieve _calibrated trust_-trust in systems that is tightly connected to their actual trustworthiness. The paper helpfully distinguishes between mechanisms of _intrinsic trust_ that are closely tied to underlying trustworthiness, and _extrinsic trust_ that is generated more indirectly by features that are in principle separable from actual trustworthiness. As with "fakeability" of signals, these tend to exist on a continuum, but most examples of signals of "trustworthiness" unfortunately fall more towards the extrinsic end, including expert testimony and performance on specific test sets and data. Arguably, signals which elicit more _intrinsic_ trust should be the goal of transparency mechanisms, as I discuss further below.
### Transparency
Calls for transparency are a common theme in ethical AI policy recommendations, and can be thought of as requests or demands for signals of particular material facts about the nature of a particular product or its development context. This framework pushes us to ask which features of a company's product/process, if made transparent, can function as signals of what facts and towards what ends. As discussed above, one direction to look here is to grant the kind of transparency that would allow users to generate _calibrated trust_ in the system by accessing signals tightly tied to the _intrinsic trustworthiness_ of the system.
From an information-theoretic perspective, too much information--the naive aim of total transparency--can be less informative than the right sort of information. Companies can (and do) offer up a lot of true information that is nonetheless misleading-a counterintuitive fact which can be explained by signaling considerations (c.f. earlier discussion about implicature). On the other hand, it is worth exploring whether the raw quantity of data offered transparently could be a blunt but useful signal of beneficial intent (Santos et al., 2011; Skyrms, 2002).
It is also worth considering how the desiderata for beneficence-promoting signaling practices interact with companies' demands to protect intellectual property. There may be a useful analog in arms control (Coe and Vaynman, 2020). It is also worth exploring connections with _differential privacy_ (Dwork, 2006), with an eye towards if and how privacy and the right sort of transparency can co-exist. While this will no doubt be a difficult needle to thread, a signaling perspective might help us formulate transparency demands in a more focused manner than mere maximization, which could make the problem more tractable.
### Intentionality
The question of trustworthiness is related to broader questions about _intent_, both from corporate agencies and eventually AI systems themselves. When we
assess a person as trustworthy, or more generally, when we attempt to predict their behavior and develop a strategic response, a major component of our model will often be our assessment of their _intentions_. We understand this both literally in terms of what, mechanically, people intend to do with their bodies, but also more abstractly, in terms of what guiding values and principles we expect will underlie the particulars of their ultimate mechanical choices. The notion of intention is complicated when it comes to collective/corporate agents (List and Pettit, 2011), and more so when it comes to non-human entities. Characterizing and assessing intention in AI is a key conceptual goal for many researchers in the "AI Alignment" sphere, since much of the work in this area presupposes something intention-like is or will be the key target of alignment efforts.
I caution that overreliance on intention can be predictively and strategically problematic even when interacting with humans. Intention only ever manifests as behavior conditionally on context, to the extent that considerations of context often should dominate (or be considered entangled with) considerations of intent for strategic interaction. Once we acknowledge that even humans lack the kind of "robust" notion of intention that is often sought, we see that the need to identify and attribute something like intention to artificial agents is less conceptually relevant than we may have thought.
The signaling perspective, I argue, refocuses us away from the question of intention (or at least, an overly anthropomorphic concept of it) in ways that may prove instructive. When we look for "signals" of beneficent disposition rather than evidence of intention, we do not require an agent to be deliberately producing those signals in order to demonstrate some internal agentic state. This makes us less vulnerable to the previously discussed challenges associated with "credible commitments". It also paves the way for a clearer path from the current paradigm of prediction and strategy associated with models of corporate entities developing AI technologies, to future models for which the relevant considerations will increasingly be incorporated into the AI systems themselves. We need not concern ourselves with identifying a phase shift from one to the other upon which we are dealing with an AI "agent" with its own "intention", requiring a wholly new model to understand and respond.
## 4 Discussion
I'll end by drawing out what I think are the three core lessons to take from considering beneficial AI development dynamics through a signaling lens. I start more abstractly, and end more concretely.
### AI ethics goes beyond direct consequences.
Theory and practice around ethical AI generally focuses on the "first-order" question of which particular actions lead to better or worse outcomes. The
signaling lens shifts our attention to the "second-order" question of how our actions influence the ethical behavior of other agents.7
Footnote 7: For the ethicists among you: this distinction cross-cuts the consequentialism/deontology debate. While deontologists will likely be more amenable to a relational perspective, they can still be susceptible to a “first-order” myopia, focused too closely on fulfilling obligations to others and not on encouraging ethical behavior in others. Consequentialism is also compatible with a second-order perspective that takes a more “holistic” approach to consequences in terms of the global, dynamical effects of our actions.
Every visible action performed by an individual, team, company, or product, has the potential to influence the beliefs, expectations, and motivations of those who perceive it, and this influence can propagate and have profound social implications. When trying to act "ethically", these agents need to not only consider the direct effects of their actions, but the reverberating effects of the signals generated by these actions. This places a higher burden of ethical considerations, but also opens up new avenues of positive impacts the organization can aim towards.
One of the most promising targets for signaling considerations are the various metrics and benchmarks that are used to quantify and demonstrate adherence to ethical principles such as fairness, privacy, and transparency. These metrics are used in different ways by researchers, regulators, users, and the public to assess the desirability and acceptability of AI products. Depending on how they are understood by these actors and inform their behavior, this generates a straightforward "reward" for other companies in the space to "compete" with respect to performance on these metrics. It is therefore important to attend to which metrics are used and promoted, and how they are communicated.
### Signals are dynamic and adaptive
Signals are "living" in the sense that they are unconstrained by our hopes and expectations for them, and they change over time in response to their environment and how we use them. Recognizing this opens up new, innovative ways we might use signals to promote pro-social behavior in AI development contexts. Every output produced in the course of developing products and ensuring their benefcience generates lots of observable information that we can use to identify or manufacture signals that might be suitable for our purposes, which will evolve over time.
Conversely, if we fail to acknowledge the dynamism of signals we risk being overconfident in our interpretations of them, and unprepared for when they change out from under us. For example, any of the various "fairness" metrics for classifiers only indicates the presence of "fairness" in a narrow and contextual sense. Not only should we be more circumspect about which metrics we use and how we interpret them, we need to recognize that depending on how we use them to inform our behavior, i.e. by "rewarding" or "punishing" those who perform well or poorly on them, we inevitably shift their meanings to reflect that new reward structure.
### Content arises from contextual incentives
The central lesson of signaling theory is that the meaning of a signal is entirely captured by the beliefs and motivations of the sender and receiver. In other words, all there is to know about what someone is communicating is who is saying it and why. And to know what is heard, you need to know who is listening, and what they believe about the speaker's beliefs, intentions and motivations.
This is a lesson I see my university colleagues grappling with in light of ChatGPT. In the past (to an extent), when a student handed in an essay, professors could (or at least did) take the essay at "face value" as constituting an honest report of what the student had learned and demonstrating their best attempt to respond to the prompt. Knowing that "the same" content could have been generated merely by copying and pasting an essay prompt into a browser has completely undermined their faith in that signal.
The problem professors are contending with is that they _thought_ they were asking students to write an essay in response to their prompt, but what they _actually_ communicated to students was to generate some unique artifact that meets certain criteria indicated on the syllabus--a task they can accomplish without engaging critically with the course content. The effect of ChatGPT here was to widen the gap in the assumed association between "essay with certain characteristics" and "student who learned the material" enough to break the signaling association. "Solving" this problem thus requires mending the gap, either by (i) enforcing compliance through surveillance or auditing, (ii) designing the assignment so that doing well, despite the ability to use ChatGPT, still requires students to "incidentally" learn, or (iii) shifting students' incentives to be more intrinsically associated with learning than with doing well on assignments.
As an educator, I strive for (iii), acknowledge that (ii) is often more realistic though quite challenging, and view (i) as a dangerous stopgap that positions students as adversaries, which I hope to avoid whenever possible. Achieving (iii) requires being open and curious about what students' actual interests and goals are, and working flexibly with them to find learning objectives that resonate with them. The approach for corporate actors will of course be different but the principles are the same. If we rely solely on adversarial, punitive measures (i), we essentially ensure an eternal conflict in which companies inevitably exploit and enhance loopholes until our methods are rendered useless. Pure alignment of values (iii) is a worthy aspiration but basically impossible to work towards directly. Our best bet is to focus on carefully creating and adjusting demands and expectations so that the most "rational" response to incentives is to behave prosocially.
|
2307.13215 | Image Segmentation Keras : Implementation of Segnet, FCN, UNet, PSPNet
and other models in Keras | Semantic segmentation plays a vital role in computer vision tasks, enabling
precise pixel-level understanding of images. In this paper, we present a
comprehensive library for semantic segmentation, which contains implementations
of popular segmentation models like SegNet, FCN, UNet, and PSPNet. We also
evaluate and compare these models on several datasets, offering researchers and
practitioners a powerful toolset for tackling diverse segmentation challenges. | Divam Gupta | 2023-07-25T02:56:20Z | http://arxiv.org/abs/2307.13215v1 | # Image Segmentation Keras : Implementation of Segnet, FCN, UNet, PSPNet and other models in Keras
###### Abstract
Semantic segmentation plays a vital role in computer vision tasks, enabling precise pixel-level understanding of images. In this paper, we present a comprehensive library for semantic segmentation, which contains implementations of popular segmentation models like SegNet, FCN, UNet, and PSPNet. We also evaluate and compare these models on several datasets, offering researchers and practitioners a powerful toolset for tackling diverse segmentation challenges.
## 1 Introduction
Semantic segmentation, an essential task in the field of computer vision, aims to assign precise labels to every pixel in an image. Semantic segmentation has wide-ranging applications in autonomous driving, image and video analysis, medical imaging, scene understanding etc. Over the years, deep learning approaches have achieved remarkable success in semantic segmentation.
In this paper, we present a comprehensive library for semantic segmentation 1, aimed at the machine learning community. We also compare various semantic segmentation models on multiple datasets. The library provides an easy to use interface and is built using the TensorFlow and Keras framework. The library offers an extensive collection of easy-to-use models, including SegNet [1], FCN [6], UNet [8], and PSPNet [10], which are widely used networks for semantic segmentation.
Footnote 1: The code is available at [https://github.com/divamgupta/image-segmentation-keras](https://github.com/divamgupta/image-segmentation-keras)
The primary objective behind the development of this library is to provide a user-friendly and accessible platform for machine learning researchers and practitioners interested in semantic segmentation. Our library promotes modular and extensible design principles, allowing users to easily integrate, customize, and extend existing segmentation models to meet their specific needs.
To ensure ease of use, we have extensively documented the library, providing clear instructions and code examples. This documentation includes architectural details and training procedures. Additionally, we provide pre-trained weights, enabling users to quickly fine-tune the models on their own datasets.
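As an indication of the intended workflow, the following sketch follows the usage pattern documented in the repository's README; the dataset paths and class count are placeholders, and exact argument names may differ between versions of the library.

```python
# Minimal sketch of the keras_segmentation workflow (paths and n_classes are placeholders).
from keras_segmentation.models.unet import vgg_unet

# Build a VGG-backbone UNet for a dataset with 12 classes.
model = vgg_unet(n_classes=12, input_height=416, input_width=608)

# Train on folders of input images and per-pixel annotation maps.
model.train(
    train_images="dataset/images_train/",
    train_annotations="dataset/annotations_train/",
    checkpoints_path="checkpoints/vgg_unet_1",
    epochs=5,
)

# Predict a segmentation map for a single test image.
out = model.predict_segmentation(
    inp="dataset/images_test/0001.png",
    out_fname="out.png",
)
```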
## 2 Networks for Semantic Segmentation
As in most other computer vision applications, a CNN is the obvious choice for semantic segmentation. The difference is that when a CNN is used for semantic segmentation, the output is also an image (a per-pixel label map) rather than a fixed-length vector.
### Fully Convolutional Network
Usually, the architecture of the model contains several convolutional layers, non-linear activations, batch normalization, and pooling layers. The initial layers learn low-level concepts such as edges and colors, and the later layers learn higher-level concepts such as different objects.2
Footnote 2: Source: [https://divamqueta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html](https://divamqueta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html)
At a lower level, the neurons contain information for a small region of the image, whereas at a higher level the neurons contain information for a large region of the image. Thus, as we add more layers, the size of the image keeps on decreasing and the number of channels keeps on increasing. The downsampling is done by the pooling layers.
For the case of image classification, we need to map the spatial tensor from the convolution layers to a fixed length vector. To do that, fully connected layers are used, which destroy all the spatial information. For the task of semantic segmentation, we need to retain the spatial information, hence no fully connected layers are used. That's why they are called fully convolutional networks. The convolutional layers coupled with downsampling layers produce a low-resolution tensor containing the high-level information.
Taking the low-resolution spatial tensor, which contains high-level information, we have to produce high-resolution segmentation outputs. To do that we add more convolution layers coupled with upsampling layers which increase the size of the spatial tensor. As we increase the resolution, we decrease the number of channels as we are getting back to the low-level information.
This is called an encoder-decoder structure, where the layers which downsample the input are part of the encoder and the layers which upsample are part of the decoder. When the model is trained for the task of semantic segmentation, the encoder outputs a tensor containing information about the objects and their shape and size. The decoder takes this information and produces the segmentation maps.
### Skip Connections
If we simply stack the encoder and decoder layers, there could be loss of low-level information. Hence, the boundaries in segmentation maps produced by the decoder could be inaccurate.
To make up for the information lost, we let the decoder access the low-level features produced by the encoder layers. That is accomplished by skip connections. Intermediate outputs of the encoder are added/concatenated with the inputs to the intermediate layers of the decoder at appropriate positions. The skip connections from the earlier layers provide the necessary information to the decoder layers which is required for creating accurate boundaries.

Table 1: Quantitative results of semantic segmentation on the CamVid dataset.

| Method | mIoU | fIoU | Sky | Buildg | Pole | Road | Pav | Tree | Signl | Fence | Car | Ped | Bicycle | misc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MobileNet UNet | 64.51 | 86.08 | 94.3 | 84.03 | 12.9 | 96.59 | 84.98 | 88.28 | 32.69 | 52.25 | 85.25 | 42.43 | 72.68 | 27.77 |
| VGG UNet | 59.59 | 84.13 | 93.22 | 81.28 | 9.98 | 95.76 | 81.4 | 89.59 | 33.99 | 57.0 | 69.12 | 30.69 | 50.89 | 22.18 |
| ResNet50 UNet | 60.38 | 85.89 | 94.01 | 85.42 | 8.54 | 96.71 | 86.91 | 89.63 | 44.21 | 65.93 | 55.98 | 38.0 | 33.76 | 25.45 |
| SegNet | 40.89 | 73.4 | 81.1 | 74.31 | 3.14 | 92.13 | 68.46 | 75.05 | 9.19 | 8.61 | 32.06 | 7.32 | 25.79 | 13.53 |
| PSP Pretrained | 65.88 | 86.5 | 93.15 | 86.29 | 12.03 | 94.87 | 81.89 | 90.04 | 51.53 | 71.65 | 81.72 | 37.39 | 59.52 | 30.43 |
| ResNet50 PSPNet | 51.42 | 80.08 | 90.15 | 79.15 | 5.17 | 92.96 | 75.24 | 86.68 | 25.14 | 51.06 | 47.33 | 20.08 | 23.24 | 20.82 |

Figure 1: Qualitative results on the CamVid dataset
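To make the encoder-decoder and skip-connection ideas above concrete, here is a minimal tensorflow.keras sketch; it is only an illustration (the function name and layer sizes are arbitrary), not one of the library's actual model definitions.

```python
from tensorflow.keras import layers, Model

def tiny_unet(n_classes, h=96, w=96):
    # Encoder: convolutions extract features, pooling halves the spatial size.
    inp = layers.Input((h, w, 3))
    e1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(e1)                      # h/2 x w/2
    e2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(e2)                      # h/4 x w/4, high-level features

    # Decoder: upsampling restores resolution; channels shrink back.
    d2 = layers.UpSampling2D()(p2)                      # back to h/2 x w/2
    d2 = layers.concatenate([d2, e2])                   # skip connection from the encoder
    d2 = layers.Conv2D(64, 3, padding="same", activation="relu")(d2)
    d1 = layers.UpSampling2D()(d2)                      # back to h x w
    d1 = layers.concatenate([d1, e1])                   # skip connection restores boundary detail
    d1 = layers.Conv2D(32, 3, padding="same", activation="relu")(d1)

    # Per-pixel class probabilities: the output is an image, not a fixed-length vector.
    out = layers.Conv2D(n_classes, 1, activation="softmax")(d1)
    return Model(inp, out)

model = tiny_unet(n_classes=12)  # e.g. 11 CamVid classes + misc
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```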
## 3 Experiments
In this section we compare the various implementations of segmentation models on several datasets.
**CamVid dataset :** CamVid [2] is a unique dataset that provides pixel-level semantic labels for driving scenario videos, with annotations for 11 semantic classes. It offers over 10 minutes of high-quality footage, along with corresponding semantically labeled images and calibration sequences.
**Sitting people dataset :** The Human Part Segmentation dataset [7] by the University of Freiburg is specifically designed for semantic segmentation of sitting people. It comprises various human parts, such as hands, legs, and arms, and contains approximately 15 distinct classes.
**SUIM dataset :** The SUIM (Segmentation of Underwater IMagery) [5] dataset is a comprehensive collection of underwater imagery. It consists of more than 1500 images with pixel-level annotations for eight object categories, including fish (vertebrates), reefs (invertebrates), aquatic plants, wrecks/ruins, human divers, robots, and sea-floor. These images are gathered during oceanic explorations and human-robot collaborative experiments, and annotated by human participants.
We benchmark the following models:
**SegNet:** Standard encoder-decoder network, where the encoder produces a low-resolution feature map from the input, and the decoder predicts the segmentation classes at the input resolution. This variant has no skip connections and no pretraining.
**MobileNet UNet :** Efficient and accurate semantic segmentation, combining MobileNet's [4] lightweight feature extraction with U-Net's precise pixel-wise predictions. Ideal for real-time applications and resource-constrained environments.
Table 2: Quantitative results of semantic segmentation on the sitting people dataset.

| Method | mIoU | fIoU | BG | Head | Torso | R low arm | R up arm | R hand | L low arm | L up arm | L hand | R low leg | R up leg | R foot | L low leg | L up leg | L foot |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VGG UNet | 48.2 | 91.59 | 96.03 | 77.7 | 62.08 | 31.41 | 28.63 | 34.22 | 31.63 | 16.59 | 35.42 | 66.36 | 65.38 | 25.88 | 64.31 | 65.2 | 22.23 |
| PSP Pretrained | 62.19 | 93.95 | 97.02 | 83.18 | 76.42 | 43.5 | 63.2 | 49.05 | 47.1 | 57.39 | 51.15 | 79.15 | 73.54 | 27.82 | 77.05 | 80.94 | 26.31 |
| ResNet50 UNet | 62.16 | 94.2 | 97.25 | 84.28 | 77.04 | 37.36 | 55.47 | 40.14 | 45.42 | 50.7 | 51.5 | 79.36 | 68.84 | 40.6 | 81.58 | 76.37 | 46.46 |
| ResNet50 PSPNet | 44.83 | 91.15 | 95.51 | 77.11 | 74.71 | 12.91 | 43.1 | 18.38 | 18.66 | 42.64 | 22.04 | 52.06 | 64.59 | 24.04 | 45.72 | 64.6 | 16.39 |
| SegNet | 49.26 | 92.48 | 96.62 | 78.24 | 68.49 | 27.44 | 43.07 | 24.39 | 24.62 | 44.17 | 18.93 | 72.71 | 63.22 | 24.93 | 72.21 | 64.11 | 15.81 |
| MobileNet UNet | 58.6 | 93.8 | 97.06 | 78.91 | 80.23 | 34.41 | 43.31 | 44.2 | 38.39 | 52.19 | 49.76 | 72.7 | 68.8 | 30.8 | 77.81 | 76.59 | 33.78 |

Figure 2: Qualitative results on the sitting people dataset
**VGG UNet :** A powerful semantic segmentation network, leveraging VGG's [9] deep and expressive features for robust segmentation. Offers high-quality segmentation results at the expense of increased computational complexity.
**ResNet50 UNet :** ResNet50's [3] deep residual blocks for highly accurate and detailed segmentation results. Balances between computational efficiency and superior performance, making it suitable for various segmentation tasks.
**PSP Pretrained :** Here we use a PSPNet model pre-trained on the ADE20K dataset, so the model is pre-trained specifically on the semantic segmentation task.
## 4 Results
The results for the CamVid, sitting people and SUIM datasets are in Tables 1, 2 and 3 respectively. For the CamVid and sitting people datasets, the pretrained PSPNet yields the best mIoU scores. For the SUIM dataset, MobileNet UNet gives the best results. Qualitative visualizations of the best models are shown in Figures 1, 2 and 3.
## 5 Conclusion
Our paper introduces a comprehensive library for semantic segmentation for well-known models such as SegNet, FCN, UNet, and PSPNet. The library empowers researchers and practitioners in the field of computer vision with a toolset to achieve pixel-level understanding of images. We have demonstrated the efficacy and robustness of these models, underscoring their applicability in addressing diverse segmentation applications.
## 6 Acknowledgements
We would like to thank all the contributors from the open-source community. We would also like to thank Chat-GPT which helped in the writing of this paper.
Figure 3: Qualitative results on the SUIM dataset

| Method | mIoU | fIoU | Background | Human | Plants | Wrecks | Robots | Reefs | Fish | Floor |
|---|---|---|---|---|---|---|---|---|---|---|
| SegNet | 17.03 | 36.7 | 68.13 | 11.94 | 0.0 | 0.0 | 0.0 | 31.04 | 13.8 | 11.3 |
| ResNet50 PSPNet | 15.71 | 33.8 | 63.55 | 10.77 | 0.14 | 11.71 | 0.0 | 25.98 | 4.42 | 9.12 |
| MobileNet UNet | 31.38 | 55.96 | 85.0 | 24.65 | 0.0 | 6.4 | 0.02 | 45.3 | 39.44 | 50.21 |
| VGG UNet | 24.51 | 46.6 | 76.5 | 9.34 | 16.21 | 5.08 | 0.0 | 36.86 | 16.76 | 35.36 |
| ResNet50 UNet | 29.38 | 52.11 | 80.74 | 24.37 | 6.83 | 12.99 | 0.0 | 43.18 | 24.41 | 42.52 |
| PSP Pretrained | 24.03 | 49.21 | 76.41 | 0.22 | 0.0 | 11.99 | 0.0 | 43.9 | 14.11 | 45.64 |

Table 3: Quantitative results of semantic segmentation on the SUIM dataset |
2308.03683 | AdS Virasoro-Shapiro amplitude with KK modes | We determine the first curvature correction for the string amplitude of two
supergravity states and two Kaluza-Klein modes on $\text{AdS}_5 \times
\text{S}^5$, which is dual to the correlator $\langle \mathcal{O}_2
\mathcal{O}_2 \mathcal{O}_p \mathcal{O}_p \rangle$ of half-BPS operators in
$\mathcal{N}=4$ SYM theory. The result has the form of an integral over the
Riemann sphere as for the usual Virasoro-Shapiro amplitude, with the insertion
of single-valued multiple polylogarithms of weight three. The result fixes OPE
data of single-trace operators in $\mathcal{N}=4$ SYM theory at strong
coupling, including operators with non-zero $R$-charge and odd spin. We
successfully check our results by comparing to data available from
integrability, localisation and consistency with a $10d$ effective action. | Giulia Fardelli, Tobias Hansen, Joao A. Silva | 2023-08-07T15:56:47Z | http://arxiv.org/abs/2308.03683v1 | # AdS Virasoro-Shapiro amplitude with KK modes
###### Abstract
We determine the first curvature correction for the string amplitude of two supergravity states and two Kaluza-Klein modes on \(\mathrm{AdS}_{5}\times\mathrm{S}^{5}\), which is dual to the correlator \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\) of half-BPS operators in \(\mathcal{N}=4\) SYM theory. The result has the form of an integral over the Riemann sphere as for the usual Virasoro-Shapiro amplitude, with the insertion of single-valued multiple polylogarithms of weight three. The result fixes OPE data of single-trace operators in \(\mathcal{N}=4\) SYM theory at strong coupling, including operators with non-zero \(R\)-charge and odd spin. We successfully check our results by comparing to data available from integrability, localisation and consistency with a \(10d\) effective action.
* 1 Introduction
* 2 Basics
* 2.1 The correlator
* 2.2 Dispersion relation
* 2.3 OPE data of stringy operators
* 3 Worldsheet method
* 3.1 Poles and residues
* 3.2 World-sheet correlator
* 4 Dispersive sum rules method
* 4.1 Flat space OPE data
* 4.2 First curvature correction
* 5 Data and checks
* 5.1 OPE data
* 5.2 Wilson coefficients
* 6 Conclusions
* A \(\mathcal{N}=4\) stress tensor four point function
* B SUGRA\({}^{(k)}\)
* C Mack polynomials
* D Large twist sums
* E Ambiguities of the world-sheet integrand
* F Crossing-symmetric dispersion relation
* G Details about dispersive sum rules
* G.1 Formulas
* G.2 Dispersive sum rule for \(b=0\)
* G.3 Example
## 1 Introduction
Despite its prominent appearance in the AdS/CFT correspondence [1], the world-sheet theory of type IIB string theory on \(\mathrm{AdS}_{5}\times\mathrm{S}^{5}\) is still not fully known. While the dual \(\mathcal{N}=4\) supersymmetric Yang-Mills (SYM) theory is very well understood, making contact with the low energy effective action of strings in AdS requires studying \(\mathcal{N}=4\) SYM at strong 't Hooft coupling \(\lambda\). The planar theory can be studied with integrability methods which give precise values for the dimensions of unprotected operators at finite 't Hooft coupling \(\lambda\), see [2] for the latest results. However, at present, these methods are not developed enough to do the same for OPE coefficients.1 In the series of papers [5; 6; 7; 8] a different approach was used to study the AdS Virasoro-Shapiro amplitude as an expansion in \(\alpha^{\prime}/R^{2}\) by combining properties of the putative world-sheet CFT and the dual CFT. The result is an expression for the world-sheet integrand, an optimal target for future direct string theory computations.
Footnote 1: See [3; 4] for approaches that combine integrability and bootstrap.
In [5; 6; 7; 8] the correlator \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\) of four half-BPS operators of dimension two was studied. In this paper we study \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\) where \(\mathcal{O}_{p}\) is a half-BPS operator of generic integer dimension \(p\).2 This enables us to study the OPE data of single trace operators of nonzero \(R\)-charge and odd spin, which appear in the OPE \(\mathcal{O}_{2}\times\mathcal{O}_{p}\). Our main result is a formula for the first Anti-de Sitter (AdS) curvature correction \(A^{(1)}(S,T)\) in \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\) (see section 2.1 and in particular formula (2.8) for the precise definition). It takes the form of a linear combination of terms, each of which is an integral over the Riemann sphere
Footnote 2: We do require that \(p\ll\lambda^{\frac{1}{4}}\).
\[\int d^{2}z|z|^{-2S-2}|1-z|^{-2T-2}G(S,T,z)\,, \tag{1.1}\]
where the integrand \(G(S,T,z)\) is a sum of single-valued multiple polylogarithms of weight \(3\), see section 3.2 for the precise formula. This can be contrasted with the flat space Virasoro-Shapiro amplitude \(A^{(0)}(S,T)\) where the integrand is simply \(1/(S+T)^{2}\)
\[A^{(0)}(S,T)=\int d^{2}z|z|^{-2S-2}|1-z|^{-2T-2}\frac{1}{(S+T)^{2}}\,. \tag{1.2}\]
While the flat space Virasoro-Shapiro amplitude determines OPE data at leading order in the \(\frac{1}{\sqrt{\lambda}}\) expansion, \(A^{(1)}(S,T)\) determines the OPE data at subleading order.
The paper is organised in the following way. In section 2 we review basic facts about the correlator \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\), setting the stage for the computation of \(A^{(1)}(S,T)\). Afterwards we compute \(A^{(1)}(S,T)\) using two different algorithms.
In section 3 we apply the approach of [8], assuming that \(A^{(1)}(S,T)\) is given by a linear combination of integrals of the form (1.1). We write an ansatz for the integrand in terms of single-valued multiple polylogarithms of \(z\) and rational functions of \(S\) and \(T\). This ansatz has a finite number of parameters, which are fixed by consistency of the residues of \(A^{(1)}(S,T)\) with the OPE.
In section 4 we compute \(A^{(1)}(S,T)\) by making use of dispersive sum rules, following [6]. In this case we make an ansatz for the OPE data at subleading order in \(\frac{1}{\sqrt{\lambda}}\) (which involves an infinite number of parameters). To fix these parameters we use locality (in the form of null constraints [9; 10]). Besides this, we assume that the Wilson coefficients that enter in the strong coupling expansion are given by single-valued multiple zeta values. These two conditions fully fix all of the parameters.
Both methods have numerous internal consistency checks and they both agree with each other in the final answer. In section 5 we extract OPE data and Wilson coefficients from \(A^{(1)}(S,T)\) and compare them to the literature, namely results from localisation, integrability and the putative existence of an \(AdS_{5}\times S_{5}\) effective action, finding full agreement. Section 6 contains our final conclusions. We have numerous appendices with auxiliary results. In Appendix F the fully crossing symmetric dispersion relation of [11; 12] is adapted to the present case of \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\) where only crossing symmetry between the \(t\) and \(u\) channels is present. This might be useful in other contexts.
## 2 Basics
### The correlator
In this paper we consider correlation functions involving half-BPS operators \(\mathcal{O}_{p}\) in planar \(\mathcal{N}=4\) SYM, with \(\mathrm{SU}(N)\) gauge group, as an expansion in inverse powers of the 't Hooft coupling \(\lambda=g_{\mathrm{YM}}^{2}N\). These half-BPS operators are Lorentz scalars transforming in the \([0,p,0]\) representation of the \(\mathrm{SU}(4)_{R}\) symmetry group and they have protected conformal dimension \(\Delta=p\). For \(p=2\), this corresponds to the superprimary of the stress-tensor multiplet, while for \(p>2\) they represent higher Kaluza-Klein supergravity excitations. In terms of the \(\mathcal{N}=4\) fundamental scalars, \(\phi^{I}\), \(I=1,\cdots,6\), they take the schematic form
\[\mathcal{O}_{p}=y_{I_{1}}\cdots y_{I_{p}}\mathrm{tr}\left(\phi^{I_{1}}\cdots \phi^{I_{p}}\right)+\cdots \tag{1}\]
where we have contracted the \(\mathrm{SO}(6)_{R}\simeq\mathrm{SU}(4)_{R}\) indices with auxiliary null vectors \(y_{I_{i}}\). The dots in the definition represent contributions from multi-trace operators, which are suppressed by inverse powers of the central charge \(c=\frac{N^{2}-1}{4}\), and will not be relevant in the following.3
Footnote 3: The presence of these additional contributions, on top of the single-trace one, is necessary to make these operators dual to single-particle states in AdS and they can be fixed by requiring that each operator is orthogonal to any other multi-particle operator [13; 14; 15].
In this paper, we will focus on the four-point function involving two \(\mathcal{O}_{2}\)'s and two other identical \(\mathcal{O}_{p}\)'s. It can be written as [16; 17; 18; 19]
\[\begin{split}&\langle\mathcal{O}_{2}(x_{1},y_{1})\mathcal{O}_{2}(x_{2 },y_{2})\mathcal{O}_{p}(x_{3},y_{3})\mathcal{O}_{p}(x_{4},y_{4})\rangle=\frac {y_{12}^{2}y_{34}^{p}}{(x_{12}^{2})^{2}(x_{34}^{2})^{p}}\mathcal{G}_{\{22pp \}}(U,V;\alpha,\overline{\alpha})\,,\\ &\mathcal{G}_{\{22pp\}}(U,V;\alpha,\overline{\alpha})=\mathcal{G} _{\{22pp\}}^{\mathrm{free}}(U,V;\alpha,\overline{\alpha})+\frac{(z-\alpha)(z -\overline{\alpha})(\overline{z}-\alpha)(\overline{z}-\overline{\alpha})}{(z \overline{z})^{2}(\alpha\overline{\alpha})^{2}}\mathcal{T}(U,V)\,,\end{split} \tag{2}\]
where definitions of cross-ratios and generalisations can be found in Appendix A. The function \(\mathcal{G}^{\text{free}}_{\{22pp\}}\) represents the free theory contributions and can be evaluated by means of Wick contractions -- see (10) for its explicit expression. The dynamical information is encoded in the _reduced correlator_\(\mathcal{T}(U,V)\), which can be further split as
\[\mathcal{T}(U,V)=\mathcal{T}^{\text{short}}(U,V)+\mathcal{T}^{\text{long}}(U,V )\,. \tag{3}\]
The first part receives contributions from protected short operators, while the second part contains the contributions of long operators. See Appendix A for more details. As we will soon see, the dimensions and OPE coefficients of these long operators get corrections at large \(\lambda\).
For our purposes, it is convenient to consider the Mellin transform \(M(s_{1},s_{2})\) of the reduced correlator
\[\begin{split}\mathcal{T}(U,V)&=\int_{-i\infty}^{+ i\infty}\frac{ds_{1}ds_{2}}{(4\pi i)^{2}}\,U^{\frac{s_{1}}{2}+2+\frac{p}{3}}\,V^{ \frac{s_{2}}{2}-1-\frac{p}{6}}\,\Gamma_{22pp}(s_{1},s_{2})\,M(s_{1},s_{2})\,, \\ \Gamma_{22pp}(s_{1},s_{2})&=\Gamma\left(\frac{6-p}{ 3}-\frac{s_{1}}{2}\right)\Gamma\left(\frac{2p}{3}-\frac{s_{1}}{2}\right)\Gamma \left(\frac{p}{6}+1-\frac{s_{2}}{2}\right)^{2}\Gamma\left(\frac{p}{6}+1-\frac{ s_{3}}{2}\right)^{2},\end{split} \tag{4}\]
where \(s_{1}+s_{2}+s_{3}=0\).4 Crossing symmetry of the correlator (2) under the exchange of points \(1\leftrightarrow 2\) and \(3\leftrightarrow 4\), translates for the Mellin amplitude to
Footnote 4: These Mellin variables are related to the more standard \(s,t,u\) by \(s_{1}=s-\frac{2p}{3}\), \(s_{2}=t-\frac{2p}{3}\), \(s_{3}=u-\frac{2p}{3}\).
\[M(s_{1},s_{2})=M(s_{1},s_{3})\,. \tag{5}\]
At leading order in \(1/c\) and in an expansion around large \(\lambda\), the Mellin amplitude is given by the supergravity result (\(\lambda\to\infty\)) plus a series of "stringy" corrections, described by polynomials in the Mellin variables and representing contact Witten diagrams with higher derivative vertices [20; 21]
\[M(s_{1},s_{2}) =\frac{4p}{\Gamma(p-1)}\frac{1}{(s_{1}-\frac{6-2p}{3})(s_{2}- \frac{p}{3})(s_{3}-\frac{p}{3})} \tag{6}\] \[+\sum_{b=0}^{\infty}\sum_{a=b}^{\infty}\frac{\Gamma(a+b+p+4)}{ \Gamma(b+1)\Gamma(p)\Gamma(p-1)}s_{1}^{a-b}(s_{2}s_{3})^{b}\lambda^{-\frac{3} {2}-\frac{a}{2}-\frac{b}{2}}\left(\xi_{a,b}^{(0)}(p)+\frac{1}{\sqrt{\lambda}} \xi_{a,b}^{(1)}(p)+\ldots\right).\]
The first term is the supergravity correlator and \(\xi_{a,b}^{(k)}(p)\) are usually referred to as Wilson coefficients.
Following [6] we also consider the transform
\[A(S,T)\equiv 2\Gamma(p-1)\Gamma(p)\int_{-i\infty+\kappa}^{+i\infty+\kappa}\frac{d \alpha}{2\pi i}e^{\alpha}\alpha^{-p-4}M\left(\frac{2\sqrt{\lambda}S}{\alpha}, \frac{2\sqrt{\lambda}T}{\alpha}\right)\,, \tag{7}\]
which is like the flat space limit [22], except that it applies to each layer \(\xi^{(k)}_{a,b}(p)\), whereas the flat space limit considers only \(\xi^{(0)}_{a,b}(p)\). Applied to the low energy expansion (6), it implies
\[\begin{split} A(S,T)&=\frac{1}{\lambda^{\frac{3}{2}} }\sum_{k=0}^{\infty}\frac{1}{\lambda^{k/2}}A^{(k)}(S,T)\,,\\ A^{(k)}(S,T)&=\text{SUGRA}^{(k)}+\sum_{b=0}^{\infty} \sum_{a=b}^{\infty}\frac{2^{a+b+1}}{\Gamma(b+1)}S^{a-b}(TU)^{b}\xi^{(k)}_{a,b} (p)\,.\end{split} \tag{8}\]
The supergravity terms \(\text{SUGRA}^{(k)}\) are given by
\[\text{SUGRA}^{(0)}=\frac{1}{STU}\,,\ \text{SUGRA}^{(1)}=-\frac{p}{6}\,\frac{pS^{2}+2 (p-3)TU}{(STU)^{2}}\,,\ \dots\,,\ \text{SUGRA}^{(k>p)}=0\,, \tag{9}\]
and the general expression can be found in (115). The leading contribution is the Virasoro-Shapiro amplitude in flat space
\[A^{(0)}(S,T)=-\frac{\Gamma\left(-S\right)\Gamma\left(-T\right)\Gamma\left(-U \right)}{\Gamma\left(S+1\right)\Gamma\left(T+1\right)\Gamma\left(U+1\right)}\,, \tag{10}\]
which fixes the first layer of Wilson coefficients \(\xi^{(0)}_{a,b}(p)\).
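Concretely, recall that with \(S+T+U=0\) the amplitude (10) has the standard low-energy expansion

\[A^{(0)}(S,T)=\frac{1}{STU}\exp\left(2\sum_{n=1}^{\infty}\frac{\zeta(2n+1)}{2n+1}\left(S^{2n+1}+T^{2n+1}+U^{2n+1}\right)\right)=\frac{1}{STU}+2\zeta(3)+\zeta(5)\left(S^{2}+T^{2}+U^{2}\right)+\ldots\,,\]

and matching this with (8) term by term determines all of the \(\xi^{(0)}_{a,b}(p)\); for instance \(\xi^{(0)}_{0,0}(p)=\zeta(3)\), independently of \(p\).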
### Dispersion relation
The ingredients to write a dispersion relation for the Mellin amplitude are its physical poles and residues and its behaviour at infinity. As briefly discussed before, and more widely in Appendix A, the (12) OPE of \(\mathcal{T}(U,V)\) contains superconformal primaries that are singlets under \(\text{SU}(4)_{R}\). We will indicate them by \(\mathcal{O}_{s}\): \(\tau_{s}\) will label their twist and \(C_{s}\) the product of OPE coefficients \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{s}\rangle\times\langle \mathcal{O}_{p}\mathcal{O}_{p}\mathcal{O}_{s}\rangle\). In the Mellin amplitude, the exchange of these operators and their descendants results in an infinite sequence of poles at \(s_{1}=\tau_{s}+2m-\frac{2}{3}p\), for \(m\in\mathbb{N}_{0}\). Similarly, the Mellin amplitude has also poles in \(s_{2}\) and \(s_{3}\), due to the exchange of superprimaries in the other OPE channels. We will denote them by \(\mathcal{O}_{t}\). They transform in the \([0,p-2,0]\) representation of \(R\)-symmetry, have twist \(\tau_{t}\) and we will use \(C_{t}\) to label the product of OPE coefficients \(\langle\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{t}\rangle\times\langle \mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{t}\rangle\). The exact position of their poles is at \(s_{2},s_{3}=\tau_{t}+2m-\frac{2}{3}p\), for \(m\in\mathbb{N}_{0}\).
Due to the bound on chaos, [23] the planar Mellin amplitude is bounded in the Regge limit [24]5
Footnote 5: The condition \(\text{Re}(s_{2})\leq\frac{p}{3}\) comes from the fact that in the correlator \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\) the lowest twist exchanged in the \(t\) channel has twist equal to \(p\).
\[\lim_{s_{1}\to\infty}M(s_{1},s_{2})\leq O(s_{1}^{-2})\,,\ \ \text{Re}(s_{2})\leq \frac{p}{3}\,. \tag{11}\]
This allows us to write a dispersion relation for the Mellin amplitude \(M(s_{1},s_{2})\). To do that, we keep the variable \(s_{3}\) fixed and rewrite the Mellin amplitude at a generic point as a contour integral in the \(s_{1}\) plane. Afterwards, we deform the contour as illustrated in figure 1. The
dispersion relation reads
\[\begin{split} M(s_{1},s_{2})&=\oint_{s_{1}}\frac{ds^{ \prime}}{2\pi i}\frac{M(s^{\prime},-s_{3}-s^{\prime})}{s^{\prime}-s_{1}}\\ &=\sum_{\mathcal{O}_{s}}\sum_{m=0}^{\infty}C_{s}\frac{\mathcal{Q} _{s}(s_{3};\tau_{s},\ell,m)}{s_{1}-\tau_{s}-2m+\frac{2}{3}p}+\sum_{\mathcal{O} _{t}}\sum_{m=0}^{\infty}C_{t}\frac{\mathcal{Q}_{t}(s_{3};\tau_{t},\ell,m)}{s_ {2}-\tau_{t}-2m+\frac{2}{3}p}\,,\end{split} \tag{12}\]
where \(\ell\) labels the spin of the exchanged operator. The kinematic functions \(\mathcal{Q}_{s}(s_{3};\tau_{s},\ell,m)\) and \(\mathcal{Q}_{t}(s_{3};\tau_{t},\ell,m)\) are related to Mack polynomials [25] and their expressions can be found in Appendix C.
Similarly to (3), \(M(s_{1},s_{2})\) receives contributions from short and long multiplets. When plugged in into the dispersion relation (12), the information of protected operators6 completely determines the supergravity amplitude. Away from the strict \(\lambda\to\infty\), single-trace long multiplets start gaining corrections to their OPE data, thus producing the "stringy" \(\frac{1}{\sqrt{\lambda}}\) corrections in (6). We will comment more on that in the next section.7
Footnote 6: See Appendix A for explicit OPE coefficients.
Footnote 7: Double trace operators do not contribute to (12) at \(\mathcal{O}(\frac{1}{c})\).
### OPE data of stringy operators
Figure 1: Contour deformation involved in the dispersion relation. The integral around a generic point is equal to the sum over the physical poles plus a contribution from the arc at infinity. The contribution coming from the arc at infinity vanishes due to the bound on chaos.

At strong coupling, the superprimaries belonging to long multiplets gain large anomalous dimensions. Holographically these are related to massive string states. In fact, at leading order in \(\frac{1}{\sqrt{\lambda}}\), their twist \(\tau\) is given by \(mR\), where \(m\) is the mass of the string state and \(R\) is the radius of AdS. In type IIB string theory, the string energy levels are \(m^{2}=4\frac{\delta}{\alpha^{\prime}}\), with \(\delta\in\mathbb{N}\). As a consequence
\[\tau=2\sqrt{\delta}\,\lambda^{\frac{1}{4}},\ \ \delta\in\mathbb{N}\,, \tag{13}\]
where recall \(\lambda=\frac{R^{4}}{\alpha^{\prime 2}}\). Apart from \(\delta\), the OPE data of each stringy operator will depend on \(\lambda\), \(p\), the spin \(\ell\) and on other possible quantum numbers, which we will collectively denote by \(\hat{r}\). At leading order, multiple operators can share the same \(\delta,\ell\) and transform in the same \(R\)-symmetry representations. This degeneracy (i.e. the range of \(\hat{r}\)) has been discussed in [26] and, unfortunately, can not be resolved by just looking at four-point functions of half-BPS operators. As a consequence, in our setup, we will not be able to obtain OPE data for the unmixed operators, but just to compute averages over the quantum numbers \(\hat{r}\)
\[\langle\ldots\rangle_{\delta,\ell}=\sum_{\hat{r}}\ldots\,. \tag{14}\]
When the exchanged operators have very large dimensions, as in this case, we can actually perform the sums \(\sum_{m=0}^{\infty}\) in (12). It was shown in [5] that this sum is dominated by \(m\sim\tau^{2}\), so we can set \(m=x\tau^{2}\) and replace the sum with the integral
\[\sum_{m=0}^{\infty}\to\tau^{2}\int_{0}^{\infty}dx\,. \tag{15}\]
Some details on this can be found in Appendix D. In this way we can completely fix the functional dependence of the OPE data on \(\lambda\). The twists go as
\[\begin{split}\tau_{s}(\delta,\ell,\hat{r};\lambda,p)& =2\sqrt{\delta}\,\lambda^{\frac{1}{4}}+\tau_{s,1}(\delta,\ell,\hat{ r};p)+\tau_{s,2}(\delta,\ell,\hat{r};p)\lambda^{-\frac{1}{4}}+\ldots\,,\\ \tau_{t}(\delta,\ell,\hat{r};\lambda,p)&=2\sqrt{ \delta}\,\lambda^{\frac{1}{4}}+\tau_{t,1}(\delta,\ell,\hat{r};p)+\tau_{t,2}( \delta,\ell,\hat{r};p)\lambda^{-\frac{1}{4}}+\ldots\,,\end{split} \tag{16}\]
and the OPE coefficients read
\[\begin{split} C_{s}(\delta,\ell,\hat{r};\lambda,p)& =\frac{\pi^{3}(-1)^{p}\,\tau_{s}(\delta,\ell,\hat{r};\lambda,p)^ {2p+2}\,2^{-2\ell-2p-2\tau_{s}(\delta,\ell,\hat{r};\lambda,p)-8}}{(\ell+1) \Gamma(p-1)\Gamma(p)\sin^{2}\left(\frac{\pi\tau_{s}(\delta,\ell,\hat{r}; \lambda,p)}{2}\right)}f_{s}(\delta,\ell,\hat{r};\lambda,p)\,,\\ C_{t}(\delta,\ell,\hat{r};\lambda,p)&=\frac{\pi^{3 }(-1)^{\ell}\tau_{t}(\delta,\ell,\hat{r};\lambda,p)^{2p+2}2^{-2\ell-2p-2\tau_ {t}(\delta,\ell,\hat{r};\lambda,p)-8}}{(\ell+1)\Gamma(p-1)\Gamma(p)\sin^{2} \left(\frac{\pi p}{2}+\frac{\pi\tau(\delta,\ell,\hat{r};\lambda,p)}{2}\right)} f_{t}(\delta,\ell,\hat{r};\lambda,p)\,,\\ f_{s}(\delta,\ell,\hat{r};\lambda,p)&=f_{s,0}( \delta,\ell,\hat{r};p)+f_{s,1}(\delta,\ell,\hat{r};p)\lambda^{-\frac{1}{4}}+f_ {s,2}(\delta,\ell,\hat{r};p)\lambda^{-\frac{1}{2}}+\ldots\,,\\ f_{t}(\delta,\ell,\hat{r};\lambda,p)&=f_{t,0}( \delta,\ell,\hat{r};p)+f_{t,1}(\delta,\ell,\hat{r};p)\lambda^{-\frac{1}{4}}+f_ {t,2}(\delta,\ell,\hat{r};p)\lambda^{-\frac{1}{2}}+\ldots\,.\end{split} \tag{17}\]
## 3 Worldsheet method
In this section we determine \(A^{(1)}(S,T)\) using the method developed in [7] for the correlator \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\). We make an ansatz for the world-sheet integrand which has transcendentality
three and contains single-valued multiple polylogs and single-valued multiple zeta values. This was motivated in [7; 8] with a simple toy model of strings in AdS expanded around flat space. The parameters of the ansatz are fixed by matching the residues with those predicted by the OPE. We determine these residues directly by applying the Borel transform (7) to the OPE poles of the Mellin amplitude in (12).
### Poles and residues
To discuss how the poles and residues of \(A(S,T)\) in the \(1/\sqrt{\lambda}\) expansion depend on OPE data, we follow Appendix C of [6]. For the Mellin amplitude we know that non-perturbatively all its poles originate from the OPE and have the form (we consider the poles in \(s_{1}\) for concreteness)
\[M(s_{1},s_{2})\sim\frac{C_{s}\mathcal{Q}_{s}(s_{3};\tau,\ell,m)}{s_{1}-\tau-2m +\frac{2}{3}p}\,, \tag{13}\]
where \(m\) labels conformal descendants. We will see that each conformal family will become one particle in the flat space limit, so we have to consider the sum over descendants8
Footnote 8: Considering this sum over poles is justified by the dispersion relation (12).
\[M_{\tau,\ell}(s_{1},s_{2})=\sum_{m=0}^{\infty}\frac{C_{s}\mathcal{Q}_{s}(s_{3} ;\tau,\ell,m)}{s_{1}-\tau-2m+\frac{2}{3}p}\,. \tag{14}\]
The transform (7) of (14) is
\[A_{\tau,\ell}(S,T)=2\Gamma(p-1)\Gamma(p)\int_{-i\infty+\kappa}^{+i\infty+ \kappa}\frac{d\alpha}{2\pi i}e^{\alpha}\alpha^{-p-4}\sum_{m=0}^{\infty}\frac{ C_{s}\mathcal{Q}_{s}(\frac{2\sqrt{\lambda}U}{\alpha};\tau,\ell,m)}{\frac{2 \sqrt{\lambda}S}{\alpha}-\tau-2m+\frac{2}{3}p}\,. \tag{15}\]
We exchange the integral and summation and for each \(m\) pick the pole at9
Footnote 9: At this point we should explain why we do not pick up the pole at \(\alpha=0\). The reason is the absence of a bulk point UV singularity of planar \(\mathcal{N}=4\) SYM at finite ’t Hooft coupling \(\lambda\), as discussed in [27]. In the bulk point limit, we expect a divergence like \(\frac{1}{(z-\overline{z})^{4-3}}=\frac{1}{z-\overline{z}}\). Translating that to Mellin space implies that the Mellin amplitude grows at most as \(M(\xi S,\xi T)\sim\xi^{-p-4}\) in the limit of very large \(\xi\)[24].
\[\alpha=\alpha_{*}\equiv\frac{2\sqrt{\lambda}S}{\tau+2m-\frac{2}{3}p}\,. \tag{16}\]
Next we have to sum over \(m\) as explained in (15). We arrive at
\[A_{\tau,\ell}(S,T)=-2\Gamma(p-1)\Gamma(p)\frac{\tau^{2}}{2\sqrt{\lambda}S} \int_{0}^{\infty}dx\ e^{\alpha_{*}}\alpha_{*}^{-p-2}C_{s}\mathcal{Q}_{s}\left( \tfrac{2\sqrt{\lambda}U}{\alpha_{*}};\tau,\ell,x\tau^{2}\right)\,. \tag{17}\]
We can now expand the integrand at large \(\lambda\), using that \(\tau\sim\lambda^{1/4}\). The answer has the form
\[-2\Gamma(p-1)\Gamma(p)\frac{\tau^{2}}{2\sqrt{\lambda}S}e^{\alpha_{*}} \alpha_{*}^{-p-2}C_{s}\mathcal{Q}_{s}\left(\tfrac{2\sqrt{\lambda}U}{\alpha_{*} };\tau,\ell,x\tau^{2}\right)=\frac{1}{\lambda^{\frac{3}{2}}}\frac{e^{-\frac{1 }{4x}+\frac{\sqrt{\lambda}S}{\tau^{2}x}}}{x^{2}}\sum_{i=0}^{\infty}\frac{1}{ \lambda^{i/4}}P^{(i)}\left(\tfrac{1}{x}\right)\,, \tag{18}\]
where \(P^{(i)}\left(\frac{1}{x}\right)\) are polynomials in \(\frac{1}{x}\) with increasing degree, for instance degree \(\frac{3}{2}i\) for even \(i\). The leading term is given by
\[P^{(0)}\left(\tfrac{1}{x}\right)=\frac{\delta^{p}S^{-p-3}f_{s;0}(\delta,\ell)C_ {\ell}^{(1)}\left(1+\frac{2T}{S}\right)}{4(\ell+1)}\,, \tag{10}\]
where \(C_{\ell}^{(1)}\left(x\right)\) is a Gegenbauer polynomial, as appropriate for partial waves in flat space. We can now do the integrals in \(x\) using
\[\int_{0}^{\infty}dx\,\frac{e^{-\frac{1}{4x}+\frac{\sqrt{\lambda}S}{\tau^{2}x}}}{x^{2}}=-\frac{\frac{\tau^{2}}{\sqrt{\lambda}}}{S-\frac{\tau^{2}}{4\sqrt{\lambda}}}=-\frac{4\delta}{S-\delta}+O\left(\lambda^{-\frac{1}{4}}\right)\,. \tag{11}\]
The effect of the polynomials \(P^{(i)}\left(\frac{1}{x}\right)\) is that each additional power of \(\frac{1}{x}\) increases the order of the pole by one, as \(\frac{1}{x}\) can be replaced by \(\frac{\tau^{2}}{\sqrt{\lambda}}\partial_{S}\) acting on both sides of (11).
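For instance, a single extra power of \(\frac{1}{x}\) turns the simple pole into a double pole at leading order,

\[\int_{0}^{\infty}\frac{dx}{x}\,\frac{e^{-\frac{1}{4x}+\frac{\sqrt{\lambda}S}{\tau^{2}x}}}{x^{2}}=\frac{\tau^{2}}{\sqrt{\lambda}}\,\partial_{S}\left(-\frac{\frac{\tau^{2}}{\sqrt{\lambda}}}{S-\frac{\tau^{2}}{4\sqrt{\lambda}}}\right)=\frac{\frac{\tau^{4}}{\lambda}}{\left(S-\frac{\tau^{2}}{4\sqrt{\lambda}}\right)^{2}}=\frac{16\delta^{2}}{(S-\delta)^{2}}+O\left(\lambda^{-\frac{1}{4}}\right)\,.\]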
We first compute the poles and residues of \(A_{\tau,\ell}(S,T)\) at order \(\lambda^{-\frac{1}{4}}\). Since \(A_{\tau,\ell}(S,T)\) should have no corrections at this order (8) we set the residues to zero, which fixes the OPE data at this order to the values given in (10) below.
At the next order \(P^{(2)}\left(\frac{1}{x}\right)\) is a polynomial of degree \(3\) so that there are poles up to fourth order. The residues in terms of OPE data are \(p\)-dependent generalisations of equation (102) of [6]. We include them in an ancillary Mathematica file. Crucially, the poles of order four and three depend only on OPE data that was fixed at lower orders.
### World-sheet correlator
Next we make an ansatz for the world-sheet integral representation of our correlator that respects the crossing symmetry \(A^{(1)}(S,T)=A^{(1)}(S,U)\)
\[A^{(1)}(S,T)=B_{1}^{(1)}(S,T)+B_{1}^{(1)}(S,U)+B_{1}^{(1)}(U,T)+B_{2}^{(1)}(S, T)+B_{2}^{(1)}(S,U)\,, \tag{12}\]
where
\[B_{i}^{(1)}(S,T)=\int d^{2}z|z|^{-2S-2}|1-z|^{-2T-2}G_{i}^{(1)}(S,T,z)\,,\quad i =1,2\,, \tag{13}\]
and crossing symmetry implies
\[B_{1}^{(1)}(U,T)=B_{1}^{(1)}(T,U)\quad\Leftrightarrow\quad G_{1}^{(1)}(S,T,z )=G_{1}^{(1)}(T,S,1-z)\,. \tag{14}\]
Note that there is no symmetry constraint on \(B_{2}^{(1)}(S,T)\) and the only reason to include the terms \(B_{1}^{(1)}(S,T)\) and \(B_{1}^{(1)}(S,U)\) in the ansatz is to make it easier to compare to the fully crossing symmetric case \(p=2\), since in this case it is easy to see that our ansatz implies
\[B_{2}^{(1)}(S,T)=0\,,\quad p=2\,. \tag{15}\]
We also impose symmetry under the exchange \(z\leftrightarrow\overline{z}\) and define the following combinations of single-valued multiple polylogarithms10
Footnote 10: See [28, 29] and also [7, 8] for details on these functions in this context.
\[\begin{split}\mathcal{L}_{w}^{s}(z)&=\mathcal{L}_{w} (z)+\mathcal{L}_{w}(1-z)+\mathcal{L}_{w}(\overline{z})+\mathcal{L}_{w}(1- \overline{z})\,,\\ \mathcal{L}_{w}^{a}(z)&=\mathcal{L}_{w}(z)-\mathcal{L}_ {w}(1-z)+\mathcal{L}_{w}(\overline{z})-\mathcal{L}_{w}(1-\overline{z})\,.\end{split} \tag{16}\]
We choose the following weight 3 basis [8]
\[\begin{split}\mathcal{L}^{(1)s}&=\left(\mathcal{L}^{s}_ {000}(z),\mathcal{L}^{s}_{001}(z),\mathcal{L}^{s}_{010}(z),\zeta(3)\right),\\ \mathcal{L}^{(1)a}&=\left(\mathcal{L}^{a}_{000}(z), \mathcal{L}^{a}_{001}(z),\mathcal{L}^{a}_{010}(z)\right).\end{split} \tag{28}\]
Our ansatz for the worldsheet correlator is then
\[G^{(1)}_{i}(S,T,z)=\sum_{u=1}^{4}r^{(1)s}_{i,u}(S,T)\mathcal{L}^{(1)s}_{u}+\sum _{u=1}^{3}r^{(1)a}_{i,u}(S,T)\mathcal{L}^{(1)a}_{u}\,, \tag{29}\]
where \(r^{(1)s/a}_{i,u}(S,T)\) are rational functions of homogeneity 0 and we assume the denominator to be \(U^{2}\). Furthermore \(r^{(1)s/a}_{1,u}(S,T)\) have symmetry properties according to (17)
\[r^{(1)s}_{1,u}(S,T)=\frac{f_{1,u}(p)(S+T)^{2}+f_{2,u}(p)ST}{(S+T)^{2}}\,,\qquad r ^{(1)a}_{1,u}(S,T)=f_{3,u}(p)\frac{S-T}{S+T}\,, \tag{30}\]
whereas \(r^{(1)s/a}_{2,u}(S,T)\) are of the form
\[\frac{f_{4,u}(p)(S+T)^{2}+f_{5,u}(p)ST+f_{6,u}(p)(S^{2}-T^{2})}{(S+T)^{2}}\,. \tag{31}\]
In total our ansatz depends on 32 functions of \(p\).
We can now compute the residues of our ansatz at \(S=1,2,\ldots\) and \(T=1,2,\ldots\) and match them with those computed in section 3.1 in terms of OPE data. We further compute the residues at \(S=0\) and \(T=0\) and match them to the supergravity term (10). As discussed in [8] the functions \(G^{(1)}_{i}(S,T,z)\) have certain ambiguities, i.e. terms that we can add to \(G^{(1)}_{i}(S,T,z)\) without changing \(A^{(1)}(S,T)\). With the ansatz discussed here, there are 6 such ambiguities, so that our final answer has the form
\[G^{(1)}_{i}(S,T,z)=\sum_{u=1}^{4}r^{(1)s}_{i,u}\mathcal{L}^{(1)s}_{u}+\sum_{u =1}^{3}r^{(1)a}_{i,u}\mathcal{L}^{(1)a}_{u}+\sum_{j=1}^{6}a_{j}\left(\sum_{u= 1}^{4}\hat{r}^{(1)s}_{i,ju}\mathcal{L}^{(1)s}_{u}+\sum_{u=1}^{3}\hat{r}^{(1)a }_{i,ju}\mathcal{L}^{(1)a}_{u}\right)\,, \tag{32}\]
with
\[\begin{split} r^{(1)s}_{1}&=\frac{1}{24}\left(-p^{2 },2(p-2)p,p^{2}-2p-6,48\right)\,,\\ r^{(1)a}_{1}&=\frac{p^{2}(S-T)}{24(S+T)}\left(-1,2, 1\right)\,,\\ r^{(1)s}_{2}&=\frac{p(p-2)}{24(S+T)}\left(3S,-2(2S+ T),-2S-T,0\right)\,,\\ r^{(1)a}_{2}&=\frac{p(p-2)}{24(S+T)}\left(3S,-2(2S -T),-2S+T\right)\,.\end{split} \tag{33}\]
The coefficients \(a_{j}\) are unfixed and multiply the ambiguities, which are given in appendix E.
## 4 Dispersive sum rules method
In this section we compute \(A^{(1)}(S,T)\) with a different method, following [6]. We apply the low energy expansion (6) to a dispersion relation to obtain dispersive sum rules, which relate Wilson coefficients to OPE data. We then make an ansatz in terms of nested sums for certain combinations of OPE data, and fix the parameters of the ansatz with locality and single-valuedness constraints for the Wilson coefficients.
In (12) we have seen a way to write \(M(s_{1},s_{2})\) dispersively in terms of two channels. Unfortunately, written in this way, the symmetry under the exchange of \(s_{2}\) and \(s_{3}\) is obscured. Therefore, to make it manifest, in this section we implement a different dispersion relation, where, instead of \(s_{3}\), we keep fixed a new parameter \(r=\frac{s_{2}s_{3}}{s_{1}}\). With this change of coordinates, \(s_{2}\) and \(s_{3}\) are now treated on the same footing, thus leading to a more symmetric dispersion relation and eventually to simpler sum rules.11 The details on how this dispersion relation is derived are deferred to Appendix F. In the following we simply report the associated sum rules and the corresponding results for the Wilson coefficients and OPE data.
Footnote 11: In spirit, this is very similar to the crossing symmetric dispersion relations of [11; 12] for the fully crossing symmetric case.
### Flat space OPE data
The first of the dispersive sum rules relating Wilson coefficients and OPE data reads
\[\xi^{(0)}_{a,b}(p)=\sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{c^{(0)}_{s,a,b,q}F^{(0)}_{s;q}(\delta)+c^{(0)}_{t,a,b,q}F^{(0)}_{t;q}(\delta)}{\delta^{3+a+b }}\,, \tag{14}\]
where
\[\begin{split} F^{(0)}_{s;q}(\delta)&=\frac{4^{q}}{ \Gamma(2q+2)}\sum_{\ell=0,2}^{2\delta-2}\langle f_{s,0}\rangle_{\delta,\ell}( \ell-q+1)_{q}(2+\ell)_{q}\,,\\ F^{(0)}_{t;q}(\delta)&=\frac{4^{q}}{\Gamma(2q+2)} \sum_{\ell=0,1}^{2\delta-2}\langle f_{t,0}\rangle_{\delta,\ell}(\ell-q+1)_{q} (2+\ell)_{q}\,,\end{split} \tag{15}\]
and
\[\begin{split} c^{(0)}_{s,a,b,q}&=\frac{(-1)^{q}q2^ {-a-b-1}\Gamma(2b-q)}{\Gamma(b-q+1)},\quad q>0\,,\\ c^{(0)}_{s,a,b,0}&=2^{-a-1}\delta_{b,0},\quad q=0 \,,\\ c^{(0)}_{t,a,b,q}&=\frac{(-1)^{a+1+q}\Gamma(a) \Gamma(b+1)2^{-a-b-1}(-a-b+q)}{\Gamma(b-q+1)\Gamma(a-b+q+1)}\\ &\quad\times_{3}F_{2}(q,q-b,-a-b+q+1;-a-b+q,a-b+q+1;1),\quad q>0 \,,\\ c^{(0)}_{t,a,b,0}&=\frac{(-2)^{-a-b-1}(a+b)\Gamma(b -a)}{\Gamma(1-a)},\quad q=0\,.\end{split} \tag{16}\]
The Wilson coefficients \(\xi^{(0)}_{a,b}(p)\) can be read off from the flat space Virasoro-Shapiro amplitude (see (117) and (120)). Thus we can systematically analyse the equations (10), solving for the functions \(F^{(0)}_{s;q}(\delta)\) and \(F^{(0)}_{t;q}(\delta)\). As an example, we work out the \(b=0\) case in appendix G.2. After studying many cases, the general conclusion we extract is that
\[F^{(0)}_{s;q}(\delta)=F^{(0)}_{t;q}(\delta)=F^{(0)}_{q}(\delta),\ \forall q\in\mathbb{N}_{0}\,, \quad\Leftrightarrow\quad\langle f_{s,0}\rangle_{\delta,\ell}=\langle f_{t,0} \rangle_{\delta,\ell}=\langle f_{0}\rangle_{\delta,\ell}\,, \tag{121}\]
where \(F^{(0)}_{q}(\delta)\) and \(\langle f_{0}\rangle_{\delta,\ell}\) are the same as in the \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\) case [6], i.e.
\[F^{(0)}_{q}(\delta)=\sum_{d=\lfloor\frac{q+1}{2}\rfloor}^{q}\sum_{\begin{subarray} {c}s_{1},\ldots,s_{d}\in\{1,2\}\\ s_{1}+\ldots+s_{d}=q\end{subarray}}2^{\sum_{i}\delta_{s_{i},1}}\delta^{q}Z_{s_ {1},\ldots,s_{d}}(\delta-1)\,, \tag{122}\]
where \(Z_{s_{1},\ldots,s_{d}}(\delta-1)\) is an Euler-Zagier sum defined by
\[Z_{s_{1},s_{2},s_{3},\ldots}(N)=\sum_{n=1}^{N}\frac{Z_{s_{2},s_{3},\ldots}(n-1 )}{n^{s_{1}}}\,,\qquad Z(N)=1\,,\qquad Z_{s_{1},s_{2},s_{3},\ldots}(0)=0\,. \tag{123}\]
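To illustrate the recursive definition with two simple cases, the depth-one sums are just harmonic sums, \(Z_{1}(N)=\sum_{n=1}^{N}\frac{1}{n}=H_{N}\), while at depth two one has for example

\[Z_{2,1}(3)=\frac{Z_{1}(0)}{1^{2}}+\frac{Z_{1}(1)}{2^{2}}+\frac{Z_{1}(2)}{3^{2}}=0+\frac{1}{4}+\frac{1}{6}=\frac{5}{12}\,.\]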
In particular we find that \(\langle f_{t,0}\rangle_{\delta,\ell}=0\), if \(\ell\) is odd. Also, we find that any \(\langle f_{s,0}\rangle_{\delta,\ell}\) does not depend on \(p\), i.e. at leading order in \(\lambda\) the full \(p\)-dependence of the OPE coefficients is in the prefactor in (17). All this is in agreement with the more general result of [26] for the product of leading OPE coefficients \(\langle\mathcal{O}_{p_{1}}\mathcal{O}_{p_{2}}\mathcal{O}_{\Delta,\ell,[0,n,0] }\rangle\times\langle\mathcal{O}_{p_{3}}\mathcal{O}_{p_{4}}\mathcal{O}_{\Delta,\ell,[0,n,0]}\rangle\).
At the next order we impose that there are no \(\lambda^{-\frac{1}{4}}\) corrections to the flat space Wilson coefficients. From the string theory perspective, this would correspond to fractional powers of \(\alpha^{\prime}/R^{2}\), and we assume that such powers are absent. This leads to the equation
\[\begin{split} 0=&\sum_{\delta=1}^{\infty}\sum_{q=0}^{b} \frac{c^{(0)}_{s,a,b,q}\big{(}F^{(1)}_{s;q}(\delta)-(3+a+b)T^{(1)}_{s;q}( \delta)\big{)}}{\delta^{\frac{7}{2}+a+b}}\\ &+\sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{c^{(0)}_{t,a,b,q} \big{(}F^{(1)}_{t;q}(\delta)-(3+a+b)T^{(1)}_{t;q}(\delta)\big{)}}{\delta^{ \frac{7}{2}+a+b}}\,,\end{split} \tag{124}\]
where
\[F^{(1)}_{s;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,2}^{2\delta-2}(\ell-q+1) _{q}(2+\ell)_{q}\left(\sqrt{\delta}\langle f_{s,1}\rangle_{\delta,\ell}- \langle f_{s,0}\rangle_{\delta,\ell}\left(\ell p+\ell+2p+\frac{7}{4}\right) \right),\] \[T^{(1)}_{s;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,2}^{2\delta-2}\langle f_{ s,0}\rangle_{\delta,\ell}(\ell-q+1)_{q}(2+\ell)_{q}(\tau_{s,1}(\delta,\ell)+\ell+2),\] \[F^{(1)}_{t;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,1}^{2\delta-2}(\ell-q+1) _{q}(2+\ell)_{q}\left(\sqrt{\delta}\langle f_{t,1}\rangle_{\delta,\ell}- \langle f_{t,0}\rangle_{\delta,\ell}\left(\ell p+\ell+\frac{p^{2}}{2}+\frac{15 }{4}\right)\right),\] \[T^{(1)}_{t;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,1}^{2\delta-2}\langle f_{ t,0}\rangle_{\delta,\ell}(\ell-q+1)_{q}(2+\ell)_{q}(\tau_{t,1}(\delta,\ell)+\ell+2). \tag{125}\]
The solution to these equations is12
Footnote 12: We assumed in this analysis that all operators acquire the same finite shift, as argued in section 6.1.1 of [2].
\[\begin{split}\tau_{s,1}(\delta,\ell)&=\tau_{t,1}( \delta,\ell)=-2-\ell\,,\\ \langle f_{s,1}\rangle_{\delta,\ell}&=\frac{\langle f _{s,0}\rangle_{\delta,\ell}}{\sqrt{\delta}}\left(\ell p+\ell+2p+\frac{7}{4} \right)\,,\\ \langle f_{t,1}\rangle_{\delta,\ell}&=\frac{\langle f _{t,0}\rangle_{\delta,\ell}}{\sqrt{\delta}}\left(\ell p+\ell+\frac{p^{2}}{2}+ \frac{15}{4}\right)\,.\end{split} \tag{4.9}\]
### First curvature correction
At the next order we find the dispersive sum rule
\[\begin{split}\xi^{(1)}_{a,b}(p)=&\sum_{\delta=1}^{ \infty}\sum_{q=0}^{b}\frac{c^{(0)}_{s,a,b,q}\big{(}F^{(2)}_{s;q}(\delta)-(3+a+b )T^{(2)}_{s;q}(\delta)\big{)}+c^{(2,0)}_{s,a,b,q}F^{(0)}_{s;q}(\delta)+c^{(2,1 )}_{s,a,b,q}F^{(0)}_{s;q+1}(\delta)}{\delta^{4+a+b}}\\ &+\sum_{\delta=1}^{\infty}\sum_{q=0}^{b}\frac{c^{(0)}_{t,a,b,q} \big{(}F^{(2)}_{t;q}(\delta)-(3+a+b)T^{(2)}_{t;q}(\delta)\big{)}+c^{(2,0)}_{t,a,b,q}F^{(0)}_{t;q}(\delta)+c^{(2,1)}_{t,a,b,q}F^{(0)}_{t;q+1}(\delta)}{ \delta^{4+a+b}}\,,\end{split} \tag{4.10}\]
where
\[F^{(2)}_{s;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,1}^{2\delta-2}(\ell-q+1) _{q}(2+\ell)_{q}\left(\delta\langle f_{s,2}\rangle_{\delta,\ell}-\frac{1}{4} (4p^{2}+9p+5)\ell\langle f_{s,0}\rangle_{\delta,\ell}\right),\] \[T^{(2)}_{s;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,1}^{2\delta-2}\sqrt{ \delta}(\ell-q+1)_{q}(2+\ell)_{q}\langle f_{s,0}\tau_{s,2}\rangle_{\delta,\ell},\] \[F^{(2)}_{t;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,1}^{2\delta-2}(\ell-q+1) _{q}(2+\ell)_{q}\left(\delta\langle f_{t,2}\rangle_{\delta,\ell}-\frac{1}{4} \left(2p^{3}-2p^{2}+9p+13\right)\ell\langle f_{t,0}\rangle_{\delta,\ell} \right),\] \[T^{(2)}_{t;q}(\delta) =\frac{4^{q}}{\Gamma(2q+2)}\sum_{\ell=0,1}^{2\delta-2}\sqrt{ \delta}(\ell-q+1)_{q}(2+\ell)_{q}\langle f_{t,0}\tau_{t,2}\rangle_{\delta,\ell}, \tag{4.11}\]
and we wrote \(c^{(2,0)}_{s,a,b,q}\), \(c^{(2,1)}_{s,a,b,q}\), \(c^{(2,0)}_{t,a,b,q}\) and \(c^{(2,1)}_{t,a,b,q}\) in (G.1). The OPE data (4.11) is truly sensitive to AdS curvature corrections, since it depends on Wilson coefficients \(\xi^{(1)}_{a,b}(p)\) which are not fixed by the flat space Virasoro-Shapiro amplitude. The Wilson coefficients \(\xi^{(1)}_{a,b}(p)\) are in general unknown.
The situation regarding the sum rules (4.10) is now different from the one we encountered in section 4.1. There, the LHS of the sum rules was known: the Wilson coefficients were an input into the dispersive sum rules and the OPE data was the output. To make progress we will proceed along the same lines as [6]. Fixing \(b\), we make a general ansatz for the OPE data that enters (4.10), as a linear combination of Euler-Zagier sums involving a few
undetermined functions of \(p\). Using (4.10) this determines the Wilson coefficients in terms of those functions. Afterwards, we impose physical conditions on the Wilson coefficients (namely single-valuedness) and these will completely fix all of these functions.
Let us see how this comes about in practice. In complete analogy with the \(p=2\) case, we first assume that \(T^{(2)}_{s;q}(\delta)\) and \(T^{(2)}_{t;q}(\delta)\) are linear combinations of Euler-Zagier sums with maximal depth \(q+1\) and maximal weight \(q+2\) and that \(F^{(2)}_{s;q}(\delta)\) and \(F^{(2)}_{t;q}(\delta)\) are linear combinations of Euler-Zagier sums with maximal depth \(q+1\) and maximal weight \(q+3\):
\[\begin{split} T^{(2)}_{s;q}(\delta)&=A^{1}_{q+2,q+1 },\\ T^{(2)}_{t;q}(\delta)&=A^{2}_{q+2,q+1},\\ F^{(2)}_{s;q}(\delta)&=A^{3}_{q+3,q+1}+\delta^{3} \zeta(3)A^{4}_{q,q},\\ F^{(2)}_{t;q}(\delta)&=A^{5}_{q+3,q+1}+\delta^{3} \zeta(3)A^{6}_{q,q},\end{split} \tag{4.12}\]
for \(q\geq 1\) where
\[A^{i}_{w_{\max},d_{\max}}=\sum_{w=1}^{w_{\max}}\sum_{d=1}^{d_{\max}}\sum_{ \begin{subarray}{c}s_{1},\ldots,s_{d}\in\mathbb{N}\\ s_{1}+\ldots+s_{d}=w\end{subarray}}c^{i}_{s_{1},\ldots,s_{d}}(p)\delta^{w}Z_ {s_{1},\ldots,s_{d}}(\delta-1)\,. \tag{4.13}\]
\(c^{i}_{s_{1},\ldots,s_{d}}(p)\) are the functions of \(p\) that are fixed by our procedure. For \(q=0\), see formula (G.6).
Second, we impose that the strong coupling expansion cannot contain negative powers of the Mellin variables apart from the supergravity term. This simple condition is nontrivial from the point of view of equations (4.10) and leads to
\[\xi^{(1)}_{a,b}(p)=0,\text{ for }a=0,\ldots,b-1,\forall\,b\in\mathbb{N}. \tag{4.14}\]
Plugging (4.10) into (4.14) leads to constraints on the OPE data.
Third, we impose that \(\xi^{(1)}_{a,b}(p)\) is in the ring of single-valued multiple zeta values. For a practical introduction to this topic, see section 3.1 of [6]. We use the MAPLE program _HyperlogProcedures_[30] to do so. In practice, for any fixed \(b\) it is enough to impose single-valuedness for a few values of \(a\) to fix all of the undetermined functions. Afterwards, we check that the resulting expression for \(\xi^{(1)}_{a,b}(p)\) is single-valued for other values of \(a\) that we did not use before.13
Footnote 13: For example, for \(b=0\), we impose single-valuedness for \(a=0,\ldots,6\) and checked it for \(a=7,\ldots,10\). For \(b=1\) we imposed it for \(a=1,\ldots,8\) and checked it for \(a=9,10,11\). For \(b=2\) we imposed it for \(a=2,\ldots,8\) and checked it for \(a=9,10,11\).
In this manner we solved (4.10) for \(b=0,\ldots,5\). In appendix G.3 we write all the details on how to implement this for the \(b=0\) case. From these cases, we extract the following general conclusions
\[\begin{split} T^{(2)}_{s;q}(\delta)&=T^{(2)}_{q}( \delta)\,,\\ T^{(2)}_{t;q}(\delta)&=T^{(2)}_{q}(\delta)+\frac{p^ {2}-4}{4}F^{(0)}_{q}(\delta)\,,\end{split} \tag{4.15}\]
where \(T_{q}^{(2)}(\delta)\) is the value obtained in the \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\) case [6] for \(T_{s;q}^{(2)}(\delta)\).
As to the OPE coefficients, they are determined by
\[\begin{split} F_{s;q}^{(2)}(\delta)&=F_{q}^{(2)}( \delta)+(p-2)\big{(}g_{1}(q,p)F_{q}^{(0)}(\delta)+g_{2}(q,p)F_{q+1}^{(0)}( \delta)\big{)}\,,\\ F_{t;q}^{(2)}(\delta)&=F_{q}^{(2)}(\delta)+(p-2) \big{(}g_{3}(q,p)F_{q}^{(0)}(\delta)+g_{4}(q,p)F_{q+1}^{(0)}(\delta)\big{)}\,, \end{split} \tag{4.16}\]
where \(F_{q}^{(2)}(\delta)\) is the value obtained in the \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\) case [6] for \(F_{s;q}^{(2)}(\delta)\) and
\[\begin{split} g_{1}(q,p)&=-\frac{p^{2}}{3}+\frac{1 }{6}p\left(3q^{2}+3q+5\right)+2q^{2}+3q+6,\\ g_{2}(q,p)&=\frac{1}{2}(q+1)(pq+p+4q+5),\\ g_{3}(q,p)&=\frac{1}{24}\left(3p^{3}+4p^{2}+p\left( 12q^{2}+18q+41\right)+48q^{2}+84q+78\right),\\ g_{4}(q,p)&=g_{2}(q,p).\end{split} \tag{4.17}\]
The solution for the OPE data completely fixes the Wilson coefficients \(\xi_{a,b}^{(1)}(p)\). At the next order in \(\frac{1}{\sqrt{\lambda}}\) this procedure does not allow us to fix all the undetermined functions.
## 5 Data and checks
### OPE data
We can determine all of the averages of the OPE data for each \(\delta\) and \(\ell\), either by computing the residues of the poles of the world-sheet integral (3.9) and comparing with the residues computed in section 3.1, or by equating (4.15) and (4.16) with (4.11). For example, for the leading Regge trajectory we find
\[\begin{split}&\langle f_{s,0}\tau_{s,2}\rangle_{\delta,2\delta-2}= \frac{r_{0}(\delta)}{2\delta^{\frac{3}{2}}}(3\delta^{2}-\delta+2),\\ &\langle f_{t,0}\tau_{t,2}\rangle_{\delta,2\delta-2}=\frac{r_{0}( \delta)}{2\delta^{\frac{3}{2}}}(3\delta^{2}-\delta+\frac{p^{2}}{2}),\\ &\langle f_{s,2}\rangle_{\delta,2\delta-2}=\frac{r_{0}(\delta)}{ \delta^{2}}\Big{(}\frac{7\delta^{2}}{2}-\frac{7\delta}{12}-\frac{p^{3}}{3}+ \left(2\delta^{2}-\delta+\frac{1}{2}\right)p^{2}\\ &\qquad\qquad\qquad\qquad+\frac{1}{6}\left(24\delta^{2}+3\delta- 1\right)p+\delta^{3}\left(2\zeta(3)-\frac{7}{6}\right)-\frac{35}{32}\Big{)}, \\ &\langle f_{t,2}\rangle_{\delta,2\delta-2}=\frac{r_{0}(\delta)}{ \delta^{2}}\Big{(}\frac{7\delta^{2}}{2}+\frac{17\delta}{12}+\frac{p^{4}}{8}+ \left(\delta-\frac{13}{12}\right)p^{3}+\left(2\delta^{2}-\frac{7\delta}{2}+ \frac{23}{8}\right)p^{2}\\ &\qquad\qquad\qquad\qquad+\frac{1}{6}\left(24\delta^{2}+3\delta- 28\right)p+\delta^{3}\left(2\zeta(3)-\frac{7}{6}\right)+\frac{77}{32}\Big{)}, \end{split} \tag{5.1}\]
where
\[r_{n}(\delta)=\frac{4^{2-2\delta}\delta^{2\delta-2n-1}(2\delta-2n-1)}{\Gamma( \delta)\Gamma\left(\delta-\big{\lfloor}\frac{n}{2}\big{\rfloor}\right)}\,. \tag{5.2}\]
Using that the operators on the leading Regge trajectory are non-degenerate, we can extract the twists for these operators with \(\mathrm{SU}(4)_{R}\) representation \([0,p-2,0]\) from the second equation in (5.1)
\[\tau^{[0,p-2,0]}\left(\tfrac{\ell}{2}+1,\ell\right)=\sqrt{2(\ell+2)}\lambda^{ \frac{1}{4}}-\ell-2+\frac{3\ell^{2}+10\ell+2p^{2}+8}{4\sqrt{2(\ell+2)}}\lambda^ {-\frac{1}{4}}+O(\lambda^{-\frac{3}{4}})\,. \tag{5.3}\]
This is in agreement with [31].14 A few of the anomalous dimensions for \(p>2\) have also been recomputed recently in [2] using the quantum spectral curve, namely
Footnote 14: With respect to the conventions of (2.28) of [31], we need to identify \(S=\ell+2\) and remember the shift \(J=(p-2)+2\).
\[\tau_{2}^{[0,1,0]}(1,0) =\frac{13}{4}\,, \tau_{2}^{[0,2,0]}(1,0) =5\,, \tau_{2}^{[0,3,0]}(1,0) =\frac{29}{4}\,, \tag{5.4}\] \[\tau_{2}^{[0,1,0]}(2,2) =\frac{29}{4\sqrt{2}}\,, \tau_{2}^{[0,2,0]}(2,2) =\frac{9}{\sqrt{2}}\,,\]
also in perfect agreement with (5.3).
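As a small numerical cross-check (added here for illustration), the quantum spectral curve values in (5.4) follow directly from the \(\lambda^{-\frac{1}{4}}\) coefficient of (5.3); the representations \([0,1,0]\), \([0,2,0]\), \([0,3,0]\) correspond to \(p=3,4,5\) and the arguments \((1,0)\), \((2,2)\) to \(\ell=0\), \(\ell=2\).

```python
from sympy import sqrt, Rational, simplify

def tau2(p, ell):
    """Coefficient of lambda^(-1/4) in the twist expansion (5.3)."""
    return (3*ell**2 + 10*ell + 2*p**2 + 8) / (4*sqrt(2*(ell + 2)))

# (p, ell) -> value quoted in (5.4)
qsc_values = {
    (3, 0): Rational(13, 4), (4, 0): 5, (5, 0): Rational(29, 4),
    (3, 2): 29/(4*sqrt(2)),  (4, 2): 9/sqrt(2),
}
for (p, ell), val in qsc_values.items():
    assert simplify(tau2(p, ell) - val) == 0
print("all entries of (5.4) reproduced from (5.3)")
```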
At the order we are considering, operators with odd spin only contribute to the quantity \(\langle f_{t,2}\rangle_{\delta,\ell}\), which is the leading contribution to the OPE coefficients for these operators. For the first few odd spin Regge trajectories our results for this quantity are
\[\langle f_{t,2}\rangle_{\delta\geq 2,2\delta-3}= -\frac{r_{\frac{1}{2}}(\delta)}{\delta}(\delta-1)\left(p^{2}-4 \right)\,, \tag{5.5}\] \[\langle f_{t,2}\rangle_{\delta\geq 3,2\delta-5}= -\frac{2r_{\frac{3}{2}}(\delta)}{3}(\delta-2)(\delta-1)(\delta+4 )\left(p^{2}-4\right)\,,\] \[\langle f_{t,2}\rangle_{\delta\geq 4,2\delta-7}= -\frac{r_{\frac{5}{2}}(\delta)}{45}(\delta-3)\left(10\delta^{4}+ 73\delta^{3}+128\delta^{2}-352\delta-192\right)\left(p^{2}-4\right)\,.\]
Anomalous dimensions for operators with odd spin appear only at higher orders in the \(1/\sqrt{\lambda}\) expansion, i.e. in \(A^{(k>1)}(S,T)\). In terms of degeneracies, the leading odd spin Regge trajectory with \(\ell=2\delta-3\) is particularly simple: it was shown in [26] that all the operators on this trajectory have degeneracy 2.
### Wilson coefficients
The Wilson coefficients that appear in the low energy expansion (2.8) can be computed either from the world-sheet correlator (3.18), as explained in [7] (which uses the methods of [32]) or by plugging the solutions (4.15-4.16) into the dispersive sum rule (4.10) and performing the sums, following [6]. The first few terms in the expansion are given by
\[A^{(1)}(S,T)=-\frac{p}{S^{2}T^{2}U^{2}}\left(\frac{p}{6}S^{2}+ \frac{p-3}{3}TU\right)+p(p-2)S\zeta(5) \tag{5.6}\] \[+\frac{\zeta(3)^{2}}{3}\left(\left(p^{2}-6p-36\right)S^{2}+2 \left(p^{2}+18\right)TU\right)+\zeta(7)\left(2p(p-2)S^{3}-\left(2p^{2}+2p+ \tfrac{441}{8}\right)STU\right)\] \[+\frac{2\zeta(3)\zeta(5)}{3}\left(\left(p^{2}-10p-81\right)S^{4} +2\left(2p^{2}+4p+81\right)S^{2}TU-\left(2p^{2}+4p+81\right)T^{2}U^{2}\right) +\ldots\,.\]
Some values of the Wilson coefficients \(\xi^{(1)}_{a,b}(p)\) had been computed before in the literature. Below we perform a detailed comparison of our results with all available results in the literature and we find full agreement. In [21] the Mellin amplitude was computed to order \(\lambda^{-\frac{5}{2}}\) using localisation methods. We find full agreement when comparing their results to our prediction for \(\xi^{(1)}_{a,0}(p)\) for \(a=0,1\), which is
\[\xi^{(1)}_{0,0}(p) =0\,, \tag{5.7}\] \[\xi^{(1)}_{1,0}(p) =\frac{1}{4}(p-2)p\zeta(5)\,. \tag{5.8}\]
In [33] an algorithm to compute the \(p\)-dependence of Wilson coefficients in a generic correlator \(\langle\mathcal{O}_{p_{1}}\mathcal{O}_{p_{2}}\mathcal{O}_{p_{3}}\mathcal{O}_{ p_{4}}\rangle\) was proposed. This algorithm is based on considering an effective field theory in full \(AdS_{5}\times S_{5}\). With this algorithm it is possible to compute the \(p\)-dependence of Wilson coefficients up to a finite number of undetermined parameters. Take for example the Wilson coefficient \(\xi^{(1)}_{2,0}(p)\), which enters at \(O(\lambda^{-3})\). [33] predicts
\[\xi^{(1)}_{2,0}(p)=\frac{1}{24}\zeta(3)^{2}(3\tilde{B}_{1}-24\tilde{E}_{0}+(p- 6)p-20), \tag{5.9}\]
where \(\tilde{B}_{1}\) and \(\tilde{E}_{0}\) are undetermined rational numbers.15 Our own computation gives
Footnote 15: More specifically, \(B_{1}=\tilde{B}_{1}\zeta^{2}(3)\) and \(E_{0}=\tilde{E}_{0}\zeta^{2}(3)\) are defined in equation (84) of [33].
\[\xi^{(1)}_{2,0}(p)=\frac{1}{24}((p-6)p-36)\zeta(3)^{2}\,. \tag{5.10}\]
We see that (5.9) and (5.10) are fully compatible and together they imply \(3\tilde{B}_{1}-24\tilde{E}_{0}+16=0\). Combining this approach with our results and extending the localisation constraints in [21] at order \(\lambda^{-3}\) we were able to fix the Mellin amplitude up to an unknown number \(\beta\)
\[\begin{split}& M(s_{1},s_{2})\Big{|}_{\lambda^{-3}}=\frac{\zeta^{ 2}_{3}(p)_{4}}{8\Gamma(p-1)}\Big{(}(p+4)_{3}\,s_{1}s_{2}s_{3}+\frac{(p+4)_{2}} {3}(p^{2}-6p-36)\,s_{1}^{2}\\ &+\frac{2(p+4)_{2}}{3}(p^{2}+18)s_{2}s_{3}+\frac{(p+4)}{3}(p-2)( (p^{3}-7p^{2}-24p+36)+\frac{3}{2}\beta(p+2))s_{1}\\ &+\frac{2(p-12)}{27}(p^{5}-3p^{4}+2p^{3}+138p^{2}+288p+216)+ \frac{\beta}{3}(p-2)^{2}p(p+2)\Big{)}\,.\end{split} \tag{5.11}\]
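As a quick consistency check (added for illustration), the relation \(3\tilde{B}_{1}-24\tilde{E}_{0}+16=0\) stated above indeed follows from equating (5.9) and (5.10):

```python
from sympy import symbols, zeta, Eq, solve, simplify

p, B1, E0 = symbols('p Btilde1 Etilde0')
xi_eft  = zeta(3)**2*(3*B1 - 24*E0 + (p - 6)*p - 20)/24   # (5.9)
xi_ours = zeta(3)**2*((p - 6)*p - 36)/24                   # (5.10)

B1_sol = solve(Eq(xi_eft, xi_ours), B1)[0]
print(simplify(3*B1_sol - 24*E0 + 16))   # prints 0
```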
## 6 Conclusions
In this paper we determined the first curvature correction to the AdS Virasoro-Shapiro amplitude with external KK modes, for the correlator \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{p}\mathcal{O}_{p}\rangle\), generalising the \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\) results of [5; 6; 7]. We expect that higher curvature corrections can be obtained in an analogous way, as demonstrated in [8], where the second curvature correction for \(\langle\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\mathcal{O}_{2}\rangle\) was computed.
We also used the opportunity to recapitulate and contrast the two methods that were previously used to solve the problem at hand. The method introduced in [8] and used in
section 3 assumes the existence of a world-sheet integral with insertions of single-valued multiple polylogs, which is suggested by the fact that world-sheet non-linear sigma models for AdS expanded around flat space receive contributions from flat space string amplitudes with insertions of additional soft gravitons [7; 8]. The ansatz is then fixed essentially by consistency with the OPE.
The method used in section 4 was introduced earlier, in [6], and involves an ansatz for certain combinations of OPE data in terms of Euler-Zagier sums. This ansatz was inspired by the structure of the dispersive sum rules and their leading solution which can be inferred from flat space. Of course, both methods lead to the same results.
An obvious future direction in the same line as the present paper would be to consider the correlator with generic half-BPS operators \(\langle\mathcal{O}_{p_{1}}\mathcal{O}_{p_{2}}\mathcal{O}_{p_{3}}\mathcal{O}_{ p_{4}}\rangle\). This would give access to operators in even more general \(\mathrm{SU}(4)_{R}\) representations, see (100).
Another generalisation would be to relax the assumption \(p\ll\lambda^{\frac{1}{4}}\). As discussed in [26] (see also [34]), in the regime \(p\ll\lambda^{\frac{1}{4}}\) all operators that are not KK modes of the \(\mathrm{SO}(5)\) singlet are suppressed by powers of \(\lambda^{-\frac{1}{4}}\). Studying correlators in the regime \(p\sim\lambda^{\frac{1}{4}}\) would make it easier to access those operators, which can also be characterised as having momentum in the \(S^{5}\) directions.
## Acknowledgements
We thank Fernando Alday, Joseph Minahan, Erik Panzer and Oliver Schnetz for useful discussions. GF thanks the Oxford Mathematical Institute for hospitality during the early stages of this work and Agnese Bissi, Parijat Dey, Simon Ekhammar and Dima Volin for valuable discussions. The work of GF is supported by Knut and Alice Wallenberg Foundation under grant KAW 2016.0129 and by VR grant 2018-04438. The work of TH is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 787185). JS is supported by the STFC grant ST/T000864/1.
## Appendix A \(\mathcal{N}=4\) stress tensor four point function
We normalise two point functions of half-BPS operators \(\mathcal{O}_{p}(x,y)\) as
\[\langle\mathcal{O}_{p}(x_{1},y_{1})\mathcal{O}_{p}(x_{2},y_{2})\rangle=g_{12 }^{p} \tag{101}\]
with \(g_{ij}\equiv\frac{y_{ij}}{x_{ij}^{2}}\), where \(y_{ij}=y_{i}\cdot y_{j}\) and \(x_{ij}=x_{i}-x_{j}\). The correlator of four generic half-BPS operators \(\mathcal{O}_{p_{i}}\) can be written as [17]
\[\langle\mathcal{O}_{p_{1}}(x_{1},y_{1})\mathcal{O}_{p_{2}}(x_{2}, y_{2})\mathcal{O}_{p_{3}}(x_{3},y_{3})\mathcal{O}_{p_{4}}(x_{4},y_{4})\rangle =\mathcal{K}_{\{p_{i}\}}(x_{i},y_{i})\mathcal{G}_{\{p_{i}\}}(z, \overline{z};\alpha,\overline{\alpha})\,, \tag{102}\] \[\mathcal{K}_{\{p_{i}\}}(x_{i},y_{i}) =g_{12}^{\frac{p_{1}+p_{2}}{2}}g_{34}^{\frac{p_{3}+p_{4}}{2}} \left(\frac{g_{24}}{g_{14}}\right)^{\frac{p_{21}}{2}}\left(\frac{g_{13}}{g_{14 }}\right)^{\frac{p_{24}}{2}}\,, \tag{103}\]
where \(p_{ij}=p_{i}-p_{j}\). We have introduced the cross-ratios
\[U =\frac{x_{12}^{2}x_{34}^{2}}{x_{13}^{2}x_{24}^{2}}=z\overline{z}\,, V=\frac{x_{14}^{2}x_{23}^{2}}{x_{13}^{2}x_{24}^{2}}=(1-z)(1-\overline{z})\,, \tag{100}\] \[\frac{1}{\sigma} =\frac{y_{12}y_{34}}{y_{13}y_{24}}=\alpha\overline{\alpha}\,, \frac{\tau}{\sigma}=\frac{y_{23}y_{14}}{y_{13}y_{24}}=(1-\alpha)(1-\overline{ \alpha})\,. \tag{101}\]
If we consider the \(s\)-channel _super_conformal block decomposition, \({\cal G}_{\{p_{i}\}}\) contains all the \(\mathrm{SU}(4)_{R}\) representations in \([0,p_{1}-2,0]\otimes[0,p_{2}-2,0]\cap[0,p_{3}-2,0]\otimes[0,p_{4}-2,0]\). These can be labelled as
\[[n-m,2m+s,n-m]\,,\ n=0,\cdots P=\min\left(p_{i},\frac{p_{i}+p_{j}+p_{k}-p_{l}}{ 2}\right)-2\,,\ m=0,\cdots,n\,, \tag{102}\]
with \(s=\max(|p_{12}|,|p_{34}|)\).
We decompose \({\cal G}_{\{p_{i}\}}\) into its free-theory value plus the _reduced correlator_\({\cal T}_{\{p_{i}\}}\)
\[{\cal G}_{\{p_{i}\}}(z,\overline{z};\alpha,\overline{\alpha})={\cal G}^{\rm free }_{\{p_{i}\}}(z,\overline{z};\alpha,\overline{\alpha})+\frac{(z-\alpha)(z- \overline{\alpha})(\overline{z}-\alpha)(\overline{z}-\overline{\alpha})}{(z \overline{z})^{2}(\alpha\overline{\alpha})^{2}}{\cal T}_{\{p_{i}\}}(z, \overline{z};\alpha,\overline{\alpha})\,. \tag{103}\]
In the case under analysis, namely the one with two \({\cal O}_{2}\)'s and two \({\cal O}_{p}\)'s, the free correlator takes the form
\[{\cal G}^{\rm free}_{\{22pp\}}=g_{12}^{2}g_{34}^{p}+\frac{p}{2c}\left(g_{12} g_{14}g_{23}g_{34}^{p-1}+g_{12}g_{13}g_{24}g_{34}^{p-1}\right)+\frac{p(p-1)}{2c}g _{13}g_{14}g_{23}g_{24}g_{34}^{p-2}\,. \tag{104}\]
Only one representation is exchanged in the OPE decomposition and \({\cal T}_{\{p_{i}\}}(z,\overline{z};\alpha,\overline{\alpha})\) is just a known function of \(\alpha,\overline{\alpha}\) times a function of the cross-ratios
\[{\cal T}_{\{22pp\}}(z,\overline{z};\alpha,\overline{\alpha}) ={\cal T}_{\{22pp\}}(U,V)\,, \tag{105}\] \[{\cal T}_{\{p22p\}/\{2p2\}}(z,\overline{z};\alpha,\overline{ \alpha}) =(\alpha\overline{\alpha})^{\frac{2-p}{2}}{\cal T}^{\prime}_{\{p2 p\}/\{2p2p\}}(U,V)\,.\]
By imposing superconformal Ward identities, it is possible to disentangle contributions from short, protected operators and long, unprotected ones in \({\cal T}_{\{p_{i}\}}\) -- see [16; 18; 19; 35; 36] for details. So we can write
\[{\cal T}_{\{p_{i}\}}={\cal T}^{\rm short}_{\{p_{i}\}}+{\cal T}^{\rm long}_{\{p_{ i}\}}\,, \tag{106}\]
and each part can be expanded in superconformal blocks. For the correlators under analysis, the protected contribution reads
\[{\cal T}^{\rm short}_{\{22pp\}}(z,\overline{z};\alpha,\overline{ \alpha}) =-\sum_{\lambda\geq 1}^{\infty}A^{(s)}_{2[\lambda+1]}G^{(0,0)}_{ \lambda+5,\lambda-1}(z,\overline{z})\,, \tag{107}\] \[{\cal T}^{\rm short}_{\{2p2p\}}(z,\overline{z};\alpha,\overline{ \alpha}) =-(\alpha\overline{\alpha})^{\frac{2-p}{2}}\sum_{\lambda\geq 1}^{\infty}A^{(t)} _{p[\lambda+1]}G^{(p-2,2-p)}_{\lambda+p+3,\lambda-1}(z,\overline{z})\,,\] \[{\cal T}^{\rm short}_{\{p22pp\}}(z,\overline{z};\alpha,\overline{ \alpha}) =-(\alpha\overline{\alpha})^{\frac{2-p}{2}}\sum_{\lambda\geq 1}^{\infty}(-1)^{ \lambda}A^{(t)}_{p[\lambda+1]}G^{(2-p,2-p)}_{\lambda+p+3,\lambda-1}(z, \overline{z})\,,\]
where we have introduced the conformal blocks
\[G^{(r,s)}_{\Delta,\ell}(z,\overline{z}) =\frac{z\overline{z}}{\overline{z}-z}\left[k^{r,s}_{\frac{\Delta- \ell-2}{2}}(z)k^{r,s}_{\frac{\Delta+\ell}{2}}(\overline{z})-k^{r,s}_{\frac{ \Delta+\ell}{2}}(z)k^{r,s}_{\frac{\Delta-\ell-2}{2}}(\overline{z})\right],\] (A.12) \[k^{r,s}_{h}(z) =z^{h}\,_{2}F_{1}\left(h+\frac{r}{2},h+\frac{s}{2};2h,z\right)\,.\] (A.13)
The OPE coefficients appearing in (A.11), necessary to reproduce the supergravity correlator in the dispersion relation, are given by
\[A^{(s)}_{2[\lambda]} =\frac{p}{2c}\frac{1+(-1)^{\lambda}}{2}\,\frac{2(\lambda!)^{2}}{ (2\lambda)!}\,,\] (A.14) \[A^{(t)}_{p[\lambda]} =\frac{p}{2c}\frac{\Gamma(p+\lambda-1)\left((p-1)\Gamma(\lambda+ 1)+(-1)^{\lambda}(p-1)_{\lambda}\right)}{\Gamma(p+2\lambda-1)}\,.\]
Finally, the long contribution can be expanded in superconformal blocks as
\[\mathcal{T}^{\text{long}}_{\{22pp\}}(z,\overline{z},\alpha, \overline{\alpha}) =\sum_{\Delta,\ell}C_{s}\,G^{(0,0)}_{\Delta+4,\ell}(z,\overline{z })\,,\] (A.15) \[\mathcal{T}^{\text{long}}_{\{2p2p\}}(z,\overline{z},\alpha, \overline{\alpha}) =(\alpha\overline{\alpha})^{\frac{2-p}{2}}\sum_{\Delta,\ell}C_{t} \,G^{(p-2,2-p)}_{\Delta+4,\ell}(z,\overline{z})\,,\] \[\mathcal{T}^{\text{long}}_{\{p22p\}}(z,\overline{z},\alpha, \overline{\alpha}) =(\alpha\overline{\alpha})^{\frac{2-p}{2}}\sum_{\Delta,\ell}(-1)^ {\ell}C_{t}\,G^{(2-p,2-p)}_{\Delta+4,\ell}(z,\overline{z})\,.\]
## Appendix B Sugra\({}^{(k)}\)
\[\text{SUGRA}^{(1\leq k\leq p)}=\frac{2^{-k}\Gamma(1+p)}{\Gamma(1 +p-k)}\frac{1}{(STU)^{k+1}}\Bigg{\{}\Big{(}{-\frac{p}{3}}\Big{)}^{k}S^{2k}+ \left({-\frac{2}{3}(p-3)}\right)^{k}(TU)^{k}\] (B.1) \[\times\sum_{i=0}^{k-j}\!\left({-\frac{p-6}{3}}\right)^{k-j-i}\! \left({-\frac{2}{3}(p-3)}\right)^{i}\!\!\frac{j(i+2j-k+1)_{-i-j+k-1}(-i-2j+k+1) _{i}}{i!(-i-j+k)!}\Bigg{\}}\,,\] \[\text{SUGRA}^{(k>p)}=0\,.\] (B.2)
## Appendix C Mack polynomials
To reproduce the superconformal block expansion of \(\mathcal{T}(U,V)\) in (A.15), the Mellin amplitudes should have poles at precise locations, whose residues can be shown to be given in terms of Mack polynomials \(Q^{\Delta_{12},\Delta_{34},\tau}_{\ell,m}(s)\)[25; 37]
\[\mathcal{Q}_{s}(s_{3};\tau,\ell,m) =\kappa^{(4,4,p+2,p+2)}_{\ell,m,\tau+4}Q^{0,0,\tau+4}_{\ell,m} \left(s_{3}-2-\frac{p}{3}\right)\,,\] (C.1) \[\mathcal{Q}_{t}(s_{3};\tau,\ell,m) =\kappa^{(p+2,4,4,p+2)}_{\ell,m,\tau+4}(-1)^{\ell}Q^{p-2,2-p,\tau+ 4}_{\ell,m}\left(s_{3}-\frac{4}{3}p\right)\,.\]
We obtain a closed expression for \(Q_{\ell,m}\) starting from (3.47) of [38],
\[P_{i(\Delta-d/2),\ell}(s,t)=\frac{\sqrt{\pi}2^{-d-\ell+3}\Gamma(d+\ell-2)}{\Gamma \left(\frac{d-1}{2}\right)\Gamma\left(\frac{d}{2}+\ell-1\right)}\alpha_{\ell}\,,\] (C.2)
where \(P_{\nu,\ell}(s,t)\) is the Mellin transform for the conformal partial wave in the convention of [39]. From this data, it is straightforward to obtain a compact expression for the Mack polynomials, by partially resumming the expression of \(\alpha_{\ell}\)
\[Q^{\Delta_{12},\Delta_{34},\tau}_{\ell,m}(s) =\frac{2^{\ell}\ell!\Gamma(\ell+\tau-1)}{\Gamma(2\ell+\tau-1)} \sum_{k=0}^{\ell}\sum_{n=0}^{\ell-k}(-m)_{k}\left(m+\frac{s+\tau}{2}\right)_{n }\mu(\ell,k,n,\Delta_{12},\Delta_{34},\tau)\,,\] \[\mu(\ell,k,n,\Delta_{12},\Delta_{34},\tau) =\frac{(-1)^{k+n+1}(\ell+\tau-1)_{n}}{k!n!(-k-n+\ell)!}\bigg{(}n+ 1+\frac{\Delta_{34}-\Delta_{12}}{2}\bigg{)}_{k}\] \[\times\quad\bigg{(}k+n-\frac{\Delta_{12}}{2}+\frac{\tau}{2} \bigg{)}_{-k-n+\ell}\bigg{(}k+n+\frac{\Delta_{34}}{2}+\frac{\tau}{2}\bigg{)}_ {-k-n+\ell}\] (C.3) \[\times\,_{4}F_{3}\left(\begin{array}{c}-k,-1-n-\ell,-1+\frac{ \Delta_{12}}{2}+\frac{\tau}{2},-1-\frac{\Delta_{34}}{2}+\frac{\tau}{2}\\ -\ell,\frac{\Delta_{12}}{2}-\frac{\Delta_{34}}{2}-k-n,-2+\tau\end{array};1 \right)\,,\]
and
\[\kappa^{(p_{1},p_{2},p_{3},p_{4})}_{\ell,m,\tau} =-\frac{2^{1-\ell}(\ell+\tau-1)_{\ell}\Gamma(2\ell+\tau)\Gamma \big{(}-\frac{p_{1}-p_{2}}{2}+\ell+\frac{\tau}{2}\big{)}}{m!\left(-1+\ell+ \tau\right)_{m}\Gamma\big{(}-m+\frac{p_{1}}{2}+\frac{p_{2}}{2}-\frac{\tau}{2} \big{)}\Gamma\big{(}-m+\frac{p_{3}}{2}+\frac{p_{4}}{2}-\frac{\tau}{2}\big{)}}\] (C.4) \[\times\Gamma\bigg{(}\frac{p_{1}-p_{2}}{2}+\ell+\frac{\tau}{2} \bigg{)}\Gamma\bigg{(}-\frac{p_{3}-p_{4}}{2}+\ell+\frac{\tau}{2}\bigg{)}\Gamma \bigg{(}\frac{p_{3}-p_{4}}{2}+\ell+\frac{\tau}{2}\bigg{)}.\]
We checked that these polynomials satisfy the expected recursion relations [39]
\[(\mathcal{D}_{s}-\lambda_{\ell})Q_{\ell,m}(s) =4m\left(\frac{d}{2}-\tau-\ell-m\right)(2Q_{\ell,m}(s)-Q_{\ell,m- 1}(s+2)-Q_{\ell,m-1}(s))\,,\] \[\mathcal{D}_{s}Q_{\ell,m}(s) =((2m+s+\tau)Q_{\ell,m}(s+2)-2sQ_{\ell,m}(s))(\Delta_{12}-\Delta_ {34}+2m+s+\tau)\,,\] \[\quad+(\Delta_{12}+s)(s-\Delta_{34})Q_{\ell,m}(s-2)\,,\] (C.5) \[\lambda_{\ell} =(\Delta_{12}+2m+\tau)(-\Delta_{34}+2m+\tau)+4\ell(2m+\tau-1)+4 \ell^{2}\,.\]
In particular the case \(m=0\) takes a very simple form
\[Q^{(\Delta_{12},\Delta_{34})}_{\ell,0}(s)=-2^{\ell}\frac{\left(\frac{\tau}{2}- \frac{\Delta_{12}}{2}\right)_{\ell}\left(\frac{\Delta_{34}}{2}+\frac{\tau}{2} \right)_{\ell}}{(\ell+\tau-1)_{\ell}}\,_{3}F_{2}\left(\begin{array}{c}-\ell, \frac{s}{2}+\frac{\tau}{2},\tau+\ell-1\\ \frac{\tau}{2}-\frac{\Delta_{12}}{2},\frac{\Delta_{34}}{2}+\frac{\tau}{2} \end{array};1\right)\,.\]
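Since the \({}_{3}F_{2}\) above terminates at \(k=\ell\), this closed form is straightforward to evaluate explicitly; the following SymPy sketch (an illustration added here, not the code used for the recursion checks in (C.5)) does so:

```python
from sympy import symbols, rf, factorial, simplify, expand

s, tau, D12, D34 = symbols('s tau Delta12 Delta34')

def hyp3F2_term(a1, a2, a3, b1, b2, kmax):
    """Terminating 3F2(a1,a2,a3; b1,b2; 1), truncated at k = kmax (here a1 = -kmax)."""
    return sum(rf(a1, k)*rf(a2, k)*rf(a3, k)/(rf(b1, k)*rf(b2, k)*factorial(k))
               for k in range(kmax + 1))

def Q_m0(ell):
    """Mack polynomial Q_{ell,0}^{(Delta12,Delta34)}(s) from the closed formula above."""
    pref = -2**ell*rf(tau/2 - D12/2, ell)*rf(D34/2 + tau/2, ell)/rf(ell + tau - 1, ell)
    return pref*hyp3F2_term(-ell, s/2 + tau/2, tau + ell - 1,
                            tau/2 - D12/2, D34/2 + tau/2, ell)

print(expand(simplify(Q_m0(1))))   # a degree-1 polynomial in s
```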
## Appendix D Large twist sums
Given the dispersion relation
\[M(s_{1},s_{2})=\sum_{\mathcal{O}_{s}}\sum_{m=0}^{\infty}C_{s}\frac{\mathcal{Q} _{s}(s_{3};\tau_{s},\ell,m)}{s_{1}-\tau_{s}-2m+\frac{2}{3}p}+\sum_{\mathcal{O} _{t}}\sum_{m=0}^{\infty}C_{t}\frac{\mathcal{Q}_{t}(s_{3};\tau_{t},\ell,m)}{s_{2 }-\tau_{t}-2m+\frac{2}{3}p}\] (D.1)
and given that the operators exchanged have twist \(\tau\sim\lambda^{1/4}\), we ask how the OPE coefficients \(C_{s}\), \(C_{t}\) depend on \(\lambda\) in such a way as to reproduce the first stringy correction
\[\frac{p(p+1)(p+2)(p+3)}{\lambda^{3/2}\Gamma(p-1)}\times\zeta(3). \tag{104}\]
At large \(\tau\) the sum in \(m\) in the dispersion relation (D.1) is dominated by terms of order \(\tau^{2}\). The large \(\tau\) expansion of (D.1) can be obtained by setting \(m\to x\tau^{2}\) and replacing \(\sum_{m=0}^{\infty}\to\int dx\tau^{2}\). In this limit
\[\lim_{\tau_{s}\to\infty}\int_{0}^{\infty}dx\tau_{s}^{2}\frac{ \mathcal{Q}_{s}(s_{3};\tau_{s},\ell,x\tau_{s}^{2})}{s_{1}-\tau_{s}-2x\tau_{s}^ {2}+\frac{2}{3}p} =\frac{(\ell+1)(-1)^{-p}\Gamma(p+4)\sin^{2}\left(\frac{\pi\tau_{ s}}{2}\right)2^{2\ell+2p+2\tau_{s}+13}}{\pi^{3}\tau_{s}^{2(p+4)}},\] \[\lim_{\tau_{t}\to\infty}\int_{0}^{\infty}dx\tau_{t}^{2}\frac{ \mathcal{Q}_{t}(s_{3};\tau_{t},\ell,x\tau_{t}^{2})}{s_{2}-\tau_{t}-2x\tau_{t} ^{2}+\frac{2}{3}p} =\frac{(-1)^{\ell}(\ell+1)\Gamma(p+4)2^{2\ell+2p+2\tau_{t}+13} \sin^{2}\left(\frac{1}{2}\pi(p-\tau_{t})\right)}{\pi^{3}\tau_{t}^{2(p+4)}}. \tag{105}\]
Inserting (105) and (104) into the dispersion relation (D.1) we derive (17).
## Appendix E Ambiguities of the world-sheet integrand
The ambiguous terms in the world-sheet correlator (3.2) are given by
\[\hat{r}_{1,1}^{(1)s} =\,\left(0,0,0,\tfrac{S^{2}+4ST+T^{2}}{(S+T)^{2}}\right)\,,\quad \hat{r}_{1,1}^{(1)a}=(0,0,0)\,\quad\hat{r}_{2,1}^{(1)s}=(0,0,0,0)\,\quad\hat{r}_{2,1}^{(1)a}=(0,0,0)\,\] \[\hat{r}_{1,2}^{(1)s} =\,\left(-\tfrac{5S^{2}+8ST+5T^{2}}{12(S+T)^{2}},\tfrac{S^{2}+ST+ T^{2}}{3(S+T)^{2}},-\tfrac{5S^{2}+8ST+5T^{2}}{12(S+T)^{2}},4\right)\,,\] \[\hat{r}_{1,2}^{(1)a} =\,\left(-\tfrac{5(S-T)}{12(S+T)},\tfrac{S-T}{3(S+T)},\tfrac{S-T} {4(S+T)}\right)\,,\quad\hat{r}_{2,2}^{(1)s}=(0,0,0,0)\,\quad\hat{r}_{2,2}^{(1)a}=(0,0,0)\,\] \[\hat{r}_{1,3}^{(1)s} =\,(0,0,0,0)\,\quad\hat{r}_{1,3}^{(1)a}=(0,0,0)\,\quad\hat{r}_{2,3}^{(1)s}= \left(0,0,0,\tfrac{2(S^{2}+2ST)}{(S+T)^{2}}\right)\,,\quad\hat{r}_{2,3}^{(1)a} =(0,0,0)\,\] \[\hat{r}_{1,4}^{(1)s} =\,(0,0,0,1)\,\quad\hat{r}_{1,4}^{(1)a}=(0,0,0)\,\quad\hat{r}_{2,4}^{(1)s}= \left(0,0,0,-\tfrac{2S+T}{S+T}\right)\,,\quad\hat{r}_{2,4}^{(1)a}=(0,0,0)\,\] \[\hat{r}_{1,5}^{(1)s} =\,(1,0,1,0)\,\ \hat{r}_{1,5}^{(1)a}=\tfrac{S-T}{S+T}\,(1,-2,-1)\,\ \hat{r}_{2,5}^{(1)a}=\left(-\tfrac{T^{2}-2S^{2}}{4(S+T)^{2}},-\tfrac{2S^{2}+9ST+8T ^{2}}{2(S+T)^{2}},-\tfrac{(S+2T)^{2}}{4(S+T)^{2}}\right)\,,\] \[\hat{r}_{2,5}^{(1)s} =\,\left(\tfrac{2S^{2}+T^{2}}{4(S+T)^{2}},-\tfrac{2S^{2}+3ST+4T^{ 2}}{2(S+T)^{2}},-\tfrac{S^{2}-4T^{2}}{4(S+T)^{2}},\tfrac{4(4S^{2}+9ST-4T^{2}) }{(S+T)^{2}}\right)\,,\] \[\hat{r}_{1,6}^{(1)s} =\,(0,1,0,0)\,\quad\hat{r}_{1,6}^{(1)a}=(0,0,0)\,\quad\hat{r}_{2,6}^{(1)a}= \left(\tfrac{6S^{2}+4ST+T^{2}}{8(S+T)^{2}},-\tfrac{6S^{2}+7ST+4T^{2}}{4(S+T)^{2 }},-\tfrac{3S^{2}}{8(S+T)^{2}}\right)\,,\] \[\hat{r}_{2,6}^{(1)s} =\,\left(\tfrac{6S^{2}+4ST-T^{2}}{8(S+T)^{2}},-\tfrac{6S^{2}+13ST +8T^{2}}{4(S+T)^{2}},\tfrac{-3S^{2}+4ST+8T^{2}}{8(S+T)^{2}},\tfrac{8S^{2}+6ST-8T ^{2}}{(S+T)^{2}}\right)\,. \tag{106}\]
## Appendix F Crossing-symmetric dispersion relation
Our Mellin amplitude has the symmetry \(M(s_{1},s_{2})=M(s_{1},s_{3})\). Following [11; 12], it is useful to construct a dispersion relation where this symmetry is manifest, since that will lead to simpler dispersive sum rules.
We will use the variables \(s_{1}\) and \(r\), where \(r=\tfrac{s_{2}s_{3}}{s_{1}}\). In terms of these variables we have
\[s_{2}^{\prime}(s_{1},r)=\frac{1}{2}\left(-s_{1}+\sqrt{s_{1}(s_{1}-4r)}\right)\,, \qquad s_{3}^{\prime}(s_{1},r)=\frac{1}{2}\left(-s_{1}-\sqrt{s_{1}(s_{1}-4r)} \right)\,, \tag{107}\]
and we will consider
\[\widetilde{M}(s_{1},r)=M(s_{1},s^{\prime}_{2}(s_{1},r)). \tag{109}\]
Although \(s^{\prime}_{2}(s_{1},r)\) has a branch cut from \(s_{1}=0\) to \(s_{1}=4r\), \(\widetilde{M}(s_{1},r)\) does not have a branch cut because of the symmetry \(M(s_{1},s^{\prime}_{2}(s_{1},r))=M(s_{1},s^{\prime}_{3}(s_{1},r))\). In the new variables the bound on chaos (11) becomes
\[\widetilde{M}(s_{1},r)=O(s_{1}^{-2})\,,\text{ for }|s_{1}|\to\infty\text{ with } \text{Re}(r)>-\frac{p}{3}\,. \tag{110}\]
Thus, keeping \(r\) fixed and with \(\text{Re}(r)>-\frac{p}{3}\) we can write a dispersion relation in the new variables
\[\widetilde{M}(s_{1},r)=\oint_{s_{1}}\frac{ds^{\prime}_{1}}{2\pi i}\frac{ \widetilde{M}(s^{\prime}_{1},r)}{(s^{\prime}_{1}-s_{1})}\,. \tag{111}\]
In the \(s_{1}\) plane \(\widetilde{M}(s_{1},r)\) has poles at \(s_{1}=\tau_{s}^{m}\) and \(s_{1}=-\frac{(\tau_{s}^{m})^{2}}{\tau_{t}^{m}+r}\), where \(\tau_{s}^{m}=\tau_{s}+2m-\frac{2}{3}p\) and \(\tau_{t}^{m}=\tau_{t}+2m-\frac{2}{3}p\). This leads to
\[\widetilde{M}(s_{1},r)=\sum_{\mathcal{O}_{s}}\sum_{m=0}^{\infty}C_{s}\frac{ \tilde{\mathcal{Q}}_{s}(s^{\prime}_{2}(\tau_{s}^{m},r)-2-\frac{p}{3})}{s_{1}- \tau_{s}^{m}}-\sum_{\mathcal{O}_{t}}\sum_{m=0}^{\infty}C_{t}\frac{\tau_{t}^{m} (2r+\tau_{t}^{m})}{(r+\tau_{t}^{m})^{2}}\frac{\tilde{\mathcal{Q}}_{t}(-2- \frac{p}{3}-\frac{(\tau_{t}^{m})^{2}}{r+\tau_{t}^{m}})}{s_{1}+\frac{(\tau_{t} ^{m})^{2}}{r+\tau_{t}^{m}}}, \tag{112}\]
where
\[\tilde{\mathcal{Q}}_{s}(x) \equiv\kappa_{\ell,m,\tau+4}^{(4,4,p+2,p+2)}Q_{\ell,m}^{0,0,\tau+ 4}(x)\,, \tag{113}\] \[\tilde{\mathcal{Q}}_{t}(x) \equiv\kappa_{\ell,m,\tau+4}^{(p+2,4,4,p+2)}Q_{\ell,m}^{2-p,2-p, \tau+4}(x)\,.\]
It is important to have formulas that connect OPE data with the low energy expansion of \(\widetilde{M}(s_{1},r)\). Expanding (112) as
\[\widetilde{M}(s_{1},r)=\sum_{a,b=0}^{\infty}\xi_{a,b}s_{1}^{a}r^{b}\,, \tag{114}\]
the coefficients read
\[\xi_{a,b}=\sum_{\mathcal{O}_{s}}\sum_{m=0}^{\infty}C_{s}\sum_{q=0}^{b}\mathfrak{ U}_{a,b,q}^{\tau_{s}^{m},\,s}\partial_{x}^{q}\mathfrak{Q}_{x}^{q}\mathfrak{Q}_{x}^{q} \mathfrak{Q}_{x}(x)|_{x=0}+\sum_{\mathcal{O}_{t}}\sum_{m=0}^{\infty}C_{t}\sum_ {q=0}^{b}\mathfrak{U}_{a,b,q}^{\tau_{t}^{m},\,t}\partial_{x}^{q}\mathfrak{Q}_{ t}(x)|_{x=0}\,, \tag{115}\]
where
\[\mathfrak{U}_{a,b,0}^{\tau,\,s} =-\frac{\delta_{b,0}}{\tau^{a+1}}\,, \tag{116}\] \[\mathfrak{U}_{a,b,q>0}^{\tau,\,s} =\frac{(-1)^{q+1}\Gamma(2b-q)}{\Gamma(b+1)\Gamma(q)\Gamma(b-q+1) \tau^{a+b+1-q}}\,,\] (117) \[\mathfrak{U}_{a,b,0}^{\tau,\,t} =\frac{(-1)^{-a-b}(a+b)(1-a)_{b-1}}{\Gamma(b+1)\tau^{a+b+1}}\,,\] \[\mathfrak{U}_{a,b,q>0}^{\tau,\,t} =\frac{(-1)^{a-1}\Gamma(a)(a+b-q)\,_{3}F_{2}(q,q-b,-a-b+q+1;-a-b+ q,a-b+q+1;1)}{\Gamma(q+1)\Gamma(b-q+1)\Gamma(a-b+q+1)\tau^{1+a+b-q}}\,,\]
and
\[\mathfrak{Q}_{s}(x) \equiv\kappa^{(4,4,p+2,p+4)}_{\ell,m,\tau+4}Q^{0,0,\tau+4}_{\ell,m}( x-2-\frac{p}{3})\,, \tag{105}\] \[\mathfrak{Q}_{t}(x) \equiv\kappa^{(p+2,4,4,p+2)}_{\ell,m,\tau+4}(-1)^{\ell}Q^{p-2,2-p, \tau+4}_{\ell,m}(-x-\frac{4}{3}p)\,.\]
Using formula (102) and the large twist expansions of Appendix D we can derive the expressions relating the Wilson coefficients and the OPE data that we present in the main text. Finally, the Wilson coefficients \(\xi^{(0)}_{a,b}(p)\) can be read off from the flat space Virasoro-Shapiro amplitude through [13]
\[\frac{4p}{\Gamma(p-1)}\frac{1}{s_{1}s_{2}s_{3}}+M_{\text{flat}}(s_{1},s_{2})= \frac{4}{\Gamma(p)\Gamma(p-1)s_{1}s_{2}s_{3}}\int_{0}^{\infty}d\beta\,e^{- \beta}\,\beta^{p}\,A_{\text{flat}}(2\beta s_{1},2\beta s_{2})\,, \tag{106}\]
where
\[A_{\text{flat}}(s_{1},s_{2}) =\exp\left(\sum_{k=1}^{\infty}\frac{2\zeta(2k+1)}{2k+1}\left( \frac{1}{4\sqrt{\lambda}}\right)^{2k+1}\left(s_{1}^{2k+1}+s_{2}^{2k+1}+s_{3}^ {2k+1}\right)\right)\,, \tag{107}\] \[M_{\text{flat}}(s_{1},s_{2}) =\sum_{a=0}^{\infty}\sum_{b=0}^{a}\frac{\Gamma(a+b+p+4)}{\Gamma(b +1)\Gamma(p)\Gamma(p-1)}s_{1}^{a-b}\big{(}s_{2}s_{3}\big{)}^{b}\lambda^{-\frac {3}{2}-\frac{a}{2}-\frac{b}{2}}\xi^{(0)}_{a,b}(p)\,.\]
## Appendix G Details about dispersive sum rules
### Formulas
The functions appearing in (4.1) are given by
\[c^{(2,0)}_{s,a,b,q} =\frac{1}{96}\Big{(}-16a^{3}-16b^{3}-16b^{2}(p-3q+6)+32p^{3}-112p^ {2}-480p+321 \tag{108}\] \[-16a^{2}(3b+p-3q+6)-80p^{2}q-80pq-168q^{2}+228q-48p^{2}q^{2}-144 pq^{2}\] \[+8b\left(4p^{2}+8pq-10p-3q^{2}+36q+2\right)\] \[-8a\left(6b^{2}+4b(p-3q+6)-4p^{2}-8pq+10p+3q^{2}-36q-2\right) \Big{)}c^{(0)}_{s,a,b,q}\,,\] \[c^{(2,1)}_{s,a,b,q} =-\frac{1}{48}(q+1)\Big{(}-12a^{2}-12b^{2}-2b(8p-6q+33)+32p^{2}+5 6p-15\] \[-2a(12b+8p-6q+33)+24p^{2}q+72pq+84q\Big{)}c^{(0)}_{s,a,b,q}\,,\] \[c^{(2,0)}_{t,a,b,q} =\frac{1}{96}\Big{(}-16a^{3}-16b^{3}-16b^{2}(p-3q+6)+32p^{3}-112p^ {2}-480p+321\] \[-16a^{2}(3b+p-3q+6)-80p^{2}q-80pq-168q^{2}+228q-48p^{2}q^{2}-144 pq^{2}\] \[+8b\left(4p^{2}+8pq-10p-3q^{2}+36q+2\right)-12(p-2)^{2}\left(2a+2 b+p^{2}+6p-2q+17\right)\] \[-8a\left(6b^{2}+4b(p-3q+6)-4p^{2}-8pq+10p+3q^{2}-36q-2\right) \Big{)}c^{(0)}_{t,a,b,q}\,,\] \[c^{(2,1)}_{t,a,b,q} =-\frac{1}{48}(q+1)\Big{(}-12a^{2}-12b^{2}-2b(8p-6q+33)+32p^{2}+5 6p-15\] \[-2a(12b+8p-6q+33)+24p^{2}q+72pq+84q\Big{)}c^{(0)}_{t,a,b,q}\,.\]
### Dispersive sum rule for \(b=0\)
The simplest dispersive sum rule is the one for the flat space Wilson coefficient \(\xi^{(0)}_{a,0}(p)\). It leads to the equations
\[\zeta(3+a) =\frac{1}{2}\sum_{\delta=1}^{\infty}\frac{\sum_{\ell=0,2}^{2\delta- 2}f_{s,0}(\delta,\ell)}{\delta^{3+a}}+\frac{1}{2}\sum_{\delta=1}^{\infty}\frac{ \sum_{\ell=0,1}^{2\delta-2}f_{t,0}(\delta,\ell)}{\delta^{3+a}},\quad a\in 2 \mathbb{N}_{0}\,, \tag{100}\] \[0 =\sum_{\delta=1}^{\infty}\frac{\sum_{\ell=0,2}^{2\delta-2}f_{s,0} (\delta,\ell)}{\delta^{3+a}}-\sum_{\delta=1}^{\infty}\frac{\sum_{\ell=0,1}^{2 \delta-2}f_{t,0}(\delta,\ell)}{\delta^{3+a}},\quad a\in(2\mathbb{N}_{0}+1)\,, \tag{101}\]
i.e. the first equation holds for \(a\) even and the second for \(a\) odd. This separation between \(a\) even and odd has to do with the fact that in the flat space limit the Mellin amplitude only contains terms of type \((s_{1}^{2}+s_{2}^{2}+s_{3}^{2})^{n_{1}}(s_{1}s_{2}s_{3})^{n_{2}}\). The equations for odd \(a\) imply that
\[\sum_{\ell=0,2}^{2\delta-2}f_{s,0}(\delta,\ell)=\sum_{\ell=0,1}^{2\delta-2}f_{ t,0}(\delta,\ell). \tag{102}\]
As to the equations with even \(a\), since \(\zeta(3+a)=\sum_{\delta=1}^{\infty}\frac{1}{\delta^{3+a}}\) they imply that
\[\sum_{\ell=0,2}^{2\delta-2}f_{s,0}(\delta,\ell)=1. \tag{103}\]
### Example
In this Appendix we demonstrate how we solve (101) for \(b=0\). For \(q=0\) the ansatz for the OPE data is
\[\begin{split} T^{(2)}_{s;q=0}(\delta)&=d_{1}(p)Z( \delta-1)+d_{2}(p)\delta Z_{1}(\delta-1)+d_{3}(p)\delta^{2}Z_{2}(\delta-1)\,, \\ T^{(2)}_{t;q=0}(\delta)&=d_{4}(p)Z(\delta-1)+d_{5}( p)\delta Z_{1}(\delta-1)+d_{6}(p)\delta^{2}Z_{2}(\delta-1)\,,\\ F^{(2)}_{s;q=0}(\delta)&=d_{7}(p)Z(\delta-1)+d_{8} (p)\delta Z_{1}(\delta-1)+d_{9}(p)\delta^{2}Z_{2}(\delta-1)\\ &\qquad+d_{10}(p)\delta^{3}Z_{3}(\delta-1)+d_{11}(p)\delta^{3} \zeta(3)Z(\delta-1),\\ F^{(2)}_{t;q=0}(\delta)&=d_{12}(p)Z(\delta-1)+d_{1 3}(p)\delta Z_{1}(\delta-1)+d_{14}(p)\delta^{2}Z_{2}(\delta-1)\\ &\qquad+d_{15}(p)\delta^{3}Z_{3}(\delta-1)+d_{16}(p)\delta^{3} \zeta(3)Z(\delta-1).\end{split} \tag{104}\]
\(d_{i}(p)\) are 16 functions of \(p\) that we will fix by our procedure.16 We plug (104) into (101) for \(b=0\) and use the formula
Footnote 16: The terms proportional to \(Z(\delta-1)\) are required, by contrast to the \(q\geq 1\) case. The reason is that the functions in (4.11) vanish for (\(\delta=1\), \(q\geq 1\)), but do not vanish when (\(\delta=1\), \(q=0\)).
\[\zeta(s,s_{1},s_{2},\ldots)=\sum_{\delta=1}^{\infty}\frac{Z_{s_{1},s_{2}, \ldots}(\delta-1)}{\delta^{s}}\,, \tag{105}\]
to express \(\xi^{(1)}_{a,0}(p)\) in terms of multiple zeta values and as a function of \(d_{i}(p)\). Convergence of the \(\delta\) sum implies that
\[d_{10}(p)+d_{11}(p)+d_{15}(p)+d_{16}(p)=0. \tag{100}\]
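To make the Euler–Zagier notation concrete, the following small numerical sketch (added for illustration, not part of the derivation) evaluates the right-hand side of the formula above in its simplest instance, \(s=2\) with a single index \(s_{1}=1\): the partial sums build \(\zeta(2,1)\), which equals \(\zeta(3)\) by Euler's classical identity.

```python
from mpmath import mp, zeta, mpf

mp.dps = 15
N = 20000

total, Z1 = mpf(0), mpf(0)        # Z1 holds the depth-one Euler-Zagier sum Z_1(delta - 1)
for delta in range(1, N + 1):
    total += Z1/mpf(delta)**2     # adds Z_1(delta - 1)/delta^2
    Z1 += mpf(1)/delta

print(total)    # partial sum, slowly approaching zeta(2,1) from below
print(zeta(3))  # zeta(2,1) = zeta(3) = 1.2020569...
```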
Finally, we impose that \(\xi^{(1)}_{a,0}(p)\) is a linear combination of single-valued multiple zeta values. We do so using the MAPLE program _HyperlogProcedures_[30]. It was enough to impose this for \(a=0,\dots,6\). In this manner we fix 13 out of the 16 functions \(d_{i}(p)\)17
Footnote 17: After fixing the functions \(d_{i}(p)\) in this way we checked that the resulting expression for \(\xi^{(1)}_{a,0}(p)\) is single-valued for \(a=7,\dots,10\).
\[\begin{split}& d_{2}(p)=\frac{1}{4},\,d_{3}(p)=1,\,d_{5}(p)= \frac{1}{4},\,d_{6}(p)=1,d_{7}(p)=2d_{1}(p)+2d_{4}(p)-d_{12}(p)+\frac{p^{4}}{8}- \frac{5p^{3}}{12}+\frac{19p^{2}}{8}+\frac{25p}{6}\\ &+\frac{13}{16},\,d_{8}(p)=2d_{1}(p)+2d_{4}(p)+\frac{p^{2}}{2}+3 p-\frac{39}{8},\,d_{9}(p)=2,\,d_{10}(p)=-2,\,d_{13}(p)=2d_{1}+2d_{4}+\frac{p^{2}}{2 }+3p\\ &-\frac{39}{8},\,d_{14}(p)=2,\,d_{15}(p)=-2,\,d_{16}(p)=2.\end{split} \tag{101}\]
The 3 functions \(d_{1}(p)\), \(d_{4}(p)\) and \(d_{12}(p)\) are not fixed by this procedure. However, they are fixed by repeating this exercise for \(b=0,1,\dots\).
|
2302.08955 | Tropical Feynman integration in the Minkowski regime | We present a new computer program, $\texttt{feyntrop}$, which uses the
tropical geometric approach to evaluate Feynman integrals numerically. In order
to apply this approach in the physical regime, we introduce a new parametric
representation of Feynman integrals that implements the causal $i\varepsilon$
prescription concretely while retaining projective invariance.
$\texttt{feyntrop}$ can efficiently evaluate dimensionally regulated,
quasi-finite Feynman integrals, with not too exceptional kinematics in the
physical regime, with a relatively large number of propagators and with
arbitrarily many kinematic scales. We give a systematic classification of all
relevant kinematic regimes, review the necessary mathematical details of the
tropical Monte Carlo approach, give fast algorithms to evaluate (deformed)
Feynman integrands, describe the usage of $\texttt{feyntrop}$ and discuss many
explicit examples of evaluated Feynman integrals. | Michael Borinsky, Henrik J. Munch, Felix Tellander | 2023-02-17T15:39:35Z | http://arxiv.org/abs/2302.08955v2 | # Desy-23-026
###### Abstract
We present a new computer program, feyntrop, which uses the tropical geometric approach to evaluate Feynman integrals numerically. In order to apply this approach in the physical regime, we introduce a new parametric representation of Feynman integrals that implements the causal \(i\varepsilon\) prescription concretely while retaining projective invariance. feyntrop can efficiently evaluate dimensionally regulated, quasi-finite Feynman integrals, with not too exceptional kinematics in the physical regime, with a relatively large number of propagators and with arbitrarily many kinematic scales. We give a systematic classification of all relevant kinematic regimes, review the necessary mathematical details of the tropical Monte Carlo approach, give fast algorithms to evaluate (deformed) Feynman integrands, describe the usage of feyntrop and discuss many explicit examples of evaluated Feynman integrals.
###### Contents
* 1 Introduction
* 2 Feynman integrals
* 2.1 Momentum and parametric representations
* 2.2 Kinematic regimes
* 2.3 Contour deformation
* 2.4 Dimensional regularization and \(\epsilon\) expansions
* 3 Tropical geometry
* 3.1 Tropical approximation
* 3.2 Tropical sampling
* 3.3 Base polytopes and generalized permutahedra
* 3.4 Generalized permutahedral tropical sampling
* 4 Numerical integration
* 4.1 Monte Carlo integration
* 4.2 Fast evaluation of (deformed) Feynman integrands
* 5 The program feyntrop
* 5.1 Installation
* 5.2 Basic usage of feyntrop
* 5.3 Deformation tuning parameter
* 6 Examples of Feynman integral evaluations
* 6.1 A 5-loop 2-point zigzag diagram
* 6.2 A 3-loop 4-point envelope diagram
* 6.3 A 2-loop 4-point \(\mu e\)-scattering diagram
* 6.4 A QCD-like, 2-loop 5-point diagram
* 6.5 A QED-like, 4-loop vacuum diagram
* 6.6 An elliptic, conformal, 4-point integral
* 7 Conclusions and outlook
## 1 Introduction
Feynman integrals are a key tool in quantum field theory. They are necessary to produce accurate predictions from given theoretical input such as a Lagrangian. Applications are, for instance, the computations of virtual contributions to scattering cross-sections for particle physics phenomenology [1], corrections to the magnetic moment of the muon or the half-life of positronium [2], critical exponents in statistical field theory [3] and corrections to the Newton potential due to general relativity [4]. An entirely mathematical application of Feynman integrals is the certification of cohomology classes in moduli spaces of curves or of graphs [5].
In this paper, we introduce feyntrop1, a new tool to evaluate Feynman integrals numerically. In contrast to existing tools, feyntrop can efficiently evaluate Feynman integrals with a relatively large number of propagators and with an arbitrary number of scales. Moreover, feyntrop can deal with Feynman integrals in the _physical_ Minkowski regime and automatically takes care of the usually intricate contour deformation procedure. The spacetime dimension is completely arbitrary and integrals that are expanded in a dimensional regulator can be evaluated. The main restriction of feyntrop is that it cannot deal with Feynman integrals having subdivergences, that means the input Feynman integrals are required to be _quasi-finite_. Moreover, feyntrop is not designed to integrate Feynman integrals at certain highly exceptional kinematic points. Outside the Euclidean regime, the external kinematics are required to be sufficiently _generic_. It is worthwhile mentioning though that such highly exceptional kinematic points seem quite rare and feyntrop performs surprisingly well in these circumstances--in spite of the lack of mathematical guarantees for functioning. In fact, we were not able to find a quasi-finite integral with exceptional kinematics for which the integration with feyntrop fails. We only observed significantly decreased rates of convergence in such cases.
Footnote 1: feyntrop can be downloaded from [https://github.com/michibo/feyntrop](https://github.com/michibo/feyntrop).
The mathematical theory of Feynman integrals has advanced rapidly in the last decades. Cornerstone mathematical developments for Feynman integrals were, for instance, the systematic exploitation of their unitarity constraints (see, e.g., [6, 7]), the systematic solution of their integration-by-parts identities (see, e.g., [8, 9]), the application of modern algebraic geometric and number theoretic tools for the benefit of their evaluation (see, e.g., [10, 11, 12]) and the systematic understanding of the differential equations which they fulfill (see, e.g., [13, 14]).
Primarily, these theoretical developments were aimed at facilitating the _analytic_ evaluation of Feynman integrals. All known analytic evaluation methods are inherently limited to a specific class of sufficiently simple diagrams. Especially for high-accuracy collider physics phenomenology, such analytic methods are often not sufficient to satisfy the demand for Feynman integral computations at higher loop order, which frequently involve complicated kinematics with many scales. Even if an analytic expression for a given Feynman integral is available, it is usually a highly non-trivial task to perform the necessary analytic continuation into the physical kinematic regime. On a different tack, computations of corrections to the Newton potential in the post-Newtonian expansion of general relativity [4] require the evaluation of large amounts of Feynman diagrams in three dimensional Euclidean space. As analytic evaluation is often more difficult in odd-dimensional spacetime, tropical Feynman integration is a promising candidate to fulfill the high demand for large loop order Feynman integrals in this field.
For this reason, numerical methods for the evaluation of Feynman integrals seem unavoidable once a certain threshold in precision has to be overcome. In this paper, we will use _tropical sampling_ that has been introduced in [15] to evaluate Feynman integrals numerically. This numerical integration technique is faster than traditional methods because the known (tropical) geometric structures of Feynman integrals are employed for the benefit of their numerical evaluation. For instance, general Euclidean Feynman integrals with up to 17 loops and 34 propagators can be evaluated using basic hardware with the proof-of-concept implementation that was distributed by the first author with [15]. The code of feyntrop is based on this implementation. The relevant mathematical structure is the _tropical geometry_ of Feynman integrals in the parametric representation [16, 15]. This tropical geometry itself is a simplification of the intricate _algebraic geometry_ Feynman integrals display (see, e.g., [17]). Tropical Feynman integration was already used, for instance, in [18] to estimate the \(\phi^{4}\) theory \(\beta\) function up to loop order 11. Some ideas from [15] have already been implemented in the FIESTA package [19]. Tropical sampling has been extended to toric varieties with applications to Bayesian statistics [20]. Moreover, the tropical approach was recently applied to study infrared divergences of Feynman integrals in the Minkowski regime [21].
The tropical approach to Feynman integrals falls in line with the increasing number of fruitful applications of tools from convex geometry in the context of quantum field theory. These include, for example, the discovery of polytopes in amplitudes (see, e.g. [22, 23]). Further, Feynman integrals can be seen as generalized Mellin-transformations [24, 25, 26]. As such they are solutions to GKZ-type differential equation systems [27]. Tropical and convex geometric tools are central to this analytic approach towards Feynman integrals (see, e.g., [28, 29, 30, 31, 32]).
Tropical Feynman integration is closely related to the _sector decomposition_ approach [33, 34, 35], which applies to completely general algebraic integrals. State of the art implementations of sector decompositions are, for instance, pySecDec[36] and FIESTA[19]. Other numerical methods that are tailored specifically to Feynman integrals are, for instance, difference equations [9], unitarity methods [37], the Mellin-Barnes representation [38] and loop-tree duality [39, 40]. With respect to potential applications to collider phenomenology, the latter three have the advantage of being inherently adapted to Minkowski spacetime kinematics. A newer technique is the systematic semi-numerical evaluation of Feynman integrals using differential equations [41], which is implemented, for instance, in AMFlow[42], DiffExp[43] and SeaSyde[44]. This technique can evaluate Feynman integrals quickly in the physical regime with high accuracy. A caveat is that it relies on analytic
boundary values for the differential equations and on the algebraic solution of the usually intricate integration-by-parts system associated to the respective Feynman integral. We expect feyntrop, which does not rely on any analytic or algebraic input, to be useful for computing boundary values as input for such methods.
feyntrop uses the _parametric representation_ of Feynman integrals for the numerical evaluation, which we briefly review in Section 2.1. This numerical evaluation has quite different characters in separate _kinematic regimes_. We propose a new classification of such kinematic regimes in Section 2.2 which, in addition to the usual Euclidean and Minkowski regimes, includes the intermediate _pseudo-Euclidean_ regime. The original tropical Feynman integration implementation from [15] was limited to the Euclidean regime. Here, we achieve the extension of this approach to non-Euclidean regimes.
In the Minkowski regime, parametric Feynman integrands can have a complicated pole structure inside the integration domain. For the numerical integration an explicit _deformation_ of the integration contour, which respects the desired causality properties, is needed. The use of explicit contour deformation prescriptions for numerics has been pioneered in [37] and was later applied in the sector decomposition framework [45]. (Recently, a momentum space based approach for the solution of the deformation problem was put forward [46].) In Section 2.3, we propose an explicit deformation prescription which, in its basic form, was employed in [47] in the context of cohomological properties of Feynman integrals. This deformation prescription has the inherent advantage of retaining the projective symmetry of the parametric Feynman integrand. We provide explicit formulas for the Jacobian and thereby propose a new _deformed parametric representation_ of the Feynman integral.
It is often desirable to evaluate a Feynman integral using dimensional regularization by adding a formal expansion parameter to the spacetime dimension, e.g. \(D=D_{0}-2\epsilon\), where \(D_{0}\) is a fixed number and we wish to evaluate the Laurent or Taylor expansion of the integral in \(\epsilon\). We will explain how feyntrop deals with such dimensionally regularized Feynman integrals in Section 2.4. Moreover, we will discuss one of the major limitations of feyntrop in this section: In its present form feyntrop can only integrate Feynman integrals that are _quasi-finite_. That means, input Feynman integrals are allowed to have an overall divergence, but no subdivergences. Further analytic continuation prescriptions (along the lines of [24, 25, 48]) would be needed to deal with such subdivergences and we postpone the implementation of such prescriptions into feyntrop to a future publication. For now, the user of the program is responsible to render all input integrals quasi-finite; for instance by projecting them to a quasi-finite basis [48]. Note, however, that within our approach, the base dimension \(D_{0}\) is completely arbitrary and can even be a non-integer value if desired. The applicability in the case \(D_{0}=3\) makes feyntrop a promising tool for the computation of post-Newtonian corrections to the gravitational potential [49].
In Sections 3.1 and 3.2, we will review the necessary ingredients for the tropical Monte Carlo approach from [15]: The concepts of the _tropical approximation_ and _tropical sampling_. In Section 3.3, we review the (tropical) geometry of parametric Feynman integrands and the particular shape that the Symanzik polynomials' Newton polytopes exhibit. We will put special focus on the _generalized permutahedron_ property of the second Symanzik \(\mathcal{F}\) polynomial. At particularly exceptional kinematic points, this property of the \(\mathcal{F}\) polynomial can be lost. In these cases the integration with feyntrop might fail. We discuss this limitation in detail in Section 3.3. The overall tropical sampling algorithm is summarized in Section 3.4.
In Section 4.2, we summarize the necessary steps for the efficient evaluation of (deformed) parametric Feynman integrands. The key step is to express the entire integrand in terms of explicit matrix expressions. Our method is more efficient than the naive expansion of the Symanzik polynomials, as fast linear algebra routines can be used for the evaluation of such matrix expressions.
The structure, installation and usage of the program feyntrop is described in Section 5. To illustrate its capabilities we give multiple detailed examples of evaluated Feynman integrals in Section 6. In Section 7, we conclude and give pointers for further developments of the general tropical Feynman integration method and the program feyntrop.
## 2 Feynman integrals
### Momentum and parametric representations
Let \(G\) be a one-particle irreducible Feynman graph with edge set \(E\) and vertex set \(V\). Each edge \(e\in E\) comes with a mass \(m_{e}\) and an edge weight \(\nu_{e}\). Each vertex \(v\in V\) comes with an incoming spacetime momentum \(p_{v}\). Vertices without incoming momentum, i.e. where \(p_{v}=0\), are internal. Let \(\mathcal{E}\) by the incidence matrix of \(G\) which is formed by choosing an arbitrary orientation for the edges and setting \(\mathcal{E}_{v,e}=\pm 1\) if \(e\) points to/from \(v\) and \(\mathcal{E}_{v,e}=0\) if \(e\) is not incident to \(v\). The Feynman integral associated to \(G\) reads
\[\mathcal{I}=\int\prod_{e\in E}\frac{\mathrm{d}^{D}q_{e}}{i\pi^{D/2}}\left( \frac{-1}{q_{e}^{2}-m_{e}^{2}+i\varepsilon}\right)^{\nu_{e}}\prod_{v\in V \setminus\{v_{0}\}}i\pi^{D/2}\delta^{(D)}\left(p_{v}+\sum_{e\in E}\mathcal{E} _{v,e}q_{e}\right), \tag{1}\]
where we integrate over all \(D\)-dimensional spacetime momenta \(q_{e}\) and we extracted the \(\delta\) function that accounts for overall momentum conservation by removing the vertex \(v_{0}\in V\). We compute the squared length \(q_{e}^{2}=(q_{e}^{0})^{2}-(q_{e}^{1})^{2}-(q_{e}^{2})^{2}-\ldots\) using the mostly-minus signature Minkowski metric.
To evaluate \(\mathcal{I}\) numerically, we will use the equivalent parametric representation (see, e.g., [50])
\[\mathcal{I}=\Gamma(\omega)\int_{\mathbb{P}_{+}^{E}}\phi\quad\text{ with }\quad\phi=\left(\prod_{e\in E}\frac{x_{e}^{\nu_{e}}}{\Gamma(\nu_{e})}\right) \frac{1}{\mathcal{U}(\mathbf{x})^{D/2}}\left(\frac{1}{\mathcal{V}(\mathbf{x})-i \varepsilon\,\sum_{e\in E}x_{e}}\right)^{\omega}\Omega\,. \tag{2}\]
We integrate over the _positive projective simplex_\(\mathbb{P}_{+}^{E}=\{\mathbf{x}=[x_{0},\ldots,x_{|E|-1}]\in\mathbb{RP}^{E-1}:x_{e}>0\}\) with respect to its canonical volume form
\[\Omega=\sum_{e=0}^{|E|-1}(-1)^{|E|-e-1}\frac{\mathrm{d}x_{0}}{x_{0}}\wedge \cdots\wedge\frac{\widehat{\mathrm{d}x_{e}}}{x_{e}}\wedge\cdots\wedge\frac{ \mathrm{d}x_{|E|-1}}{x_{|E|-1}}\,. \tag{3}\]
Note that in the scope of this article we make the unusual choice to start the indexing with \(0\) for the benefit of a seamless notational transition to our computer implementation. So, the edge and vertex sets are always assumed to be given by \(E=\{0,1,\ldots,|E|-1\}\) and \(V=\{0,1,\ldots,|V|-1\}\).
The _superficial degree of divergence_ of the graph \(G\) is given by \(\omega=\sum_{e\in E}\nu_{e}-DL/2\), where \(L=|E|-|V|+1\) is the number of loops of \(G\).
We use \(\mathcal{V}(\mathbf{x})=\mathcal{F}(\mathbf{x})/\mathcal{U}(\mathbf{x})\) as a shorthand for the quotient of the two _Symanzik polynomials_ that can be defined using the _reduced graph Laplacian_\(\mathcal{L}(\mathbf{x})\), a \((|V|-1)\times(|V|-1)\) matrix given element-wise by \(\mathcal{L}(\mathbf{x})_{u,v}=\sum_{e\in E}\mathcal{E}_{u,e}\mathcal{E}_{v,e}/x_{e}\) for all \(u,v\in V\setminus\{v_{0}\}.\) We have the identities
\[\mathcal{U}(\mathbf{x})=\det\mathcal{L}(\mathbf{x})\,\left(\prod_{e\in E}x_{e}\right) \,,\quad\mathcal{F}(\mathbf{x})=\mathcal{U}(\mathbf{x})\,\left(-\sum_{u,v\in V \setminus\{v_{0}\}}\mathcal{P}^{u,v}\;\mathcal{L}^{-1}(\mathbf{x})_{u,v}+\sum_{e \in E}m_{e}^{2}x_{e}\right), \tag{4}\]
where \(\mathcal{P}^{u,v}=p_{u}\cdot p_{v}\) with the scalar product being computed using the Minkowski metric.
**Combinatorial Symanzik polynomials.** We also have the combinatorial formulas for \(\mathcal{U}\) and \(\mathcal{F}\)
\[\mathcal{U}(\mathbf{x}) =\sum_{T}\prod_{e\notin T}x_{e}\,, \mathcal{F}(\mathbf{x}) =-\sum_{F}p(F)^{2}\prod_{e\notin F}x_{e}+\mathcal{U}(\mathbf{x})\sum_{ e\in E}m_{e}^{2}x_{e}\,, \tag{5}\]
where we sum over all spanning trees \(T\) and all spanning two-forests \(F\) of \(G\), and \(p(F)^{2}\) is the Minkowski squared momentum running between the two-forest components. From this formulation it can be seen that \(\mathcal{U}\) and \(\mathcal{F}\) are homogeneous polynomials of degree \(L\) and \(L+1\) respectively. Hence, \(\mathcal{V}\) is a homogeneous rational function of degree \(1\).
We will give fast algorithms to evaluate \(\mathcal{U}(\mathbf{x})\) and \(\mathcal{F}(\mathbf{x})\) in Section 4.2.
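As an illustration of the identities (4) and (5) (this is a sketch added here, not feyntrop code), the following SymPy snippet builds the reduced graph Laplacian of the one-loop triangle graph, with the hypothetical labelling of vertices \(\{0,1,2\}\) and edges \(e_{0}=(0,1)\), \(e_{1}=(1,2)\), \(e_{2}=(2,0)\), and recovers the Symanzik polynomials:

```python
import sympy as sp

x = sp.symbols('x0:3', positive=True)     # Schwinger parameters
m = sp.symbols('m0:3', positive=True)     # edge masses
edges = [(0, 1), (1, 2), (2, 0)]          # triangle graph, vertex v0 = 0 is removed

# reduced graph Laplacian L_{u,v} = sum_e E_{u,e} E_{v,e} / x_e for u, v != v0
L = sp.zeros(2, 2)
for e, (a, b) in enumerate(edges):
    for u in (a, b):
        for v in (a, b):
            if u != 0 and v != 0:
                L[u - 1, v - 1] += (1 if u == v else -1)/x[e]

U = sp.simplify(L.det()*x[0]*x[1]*x[2])
print(U)   # x0 + x1 + x2, i.e. the spanning-tree formula (5)

# F from (4), with the Gram matrix P^{u,v} = p_u . p_v restricted to u, v in {1, 2}
P11, P12, P22 = sp.symbols('P11 P12 P22')
P = sp.Matrix([[P11, P12], [P12, P22]])
Linv = L.inv()
F = sp.cancel(U*(-sum(P[i, j]*Linv[i, j] for i in range(2) for j in range(2))
                 + sum(m[e]**2*x[e] for e in range(3))))
print(sp.expand(F))
# massless part: -P11*x0*x1 - P22*x1*x2 - (P11 + 2*P12 + P22)*x0*x2, i.e.
# -p1^2 x0 x1 - p2^2 x1 x2 - (p1+p2)^2 x0 x2, matching the two-forest formula (5)
```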
### Kinematic regimes
By Poincare invariance, the value of the Feynman integral (1) only depends on the \(|V|\times|V|\) Gram matrix \(\mathcal{P}^{u,v}=p_{u}\cdot p_{v}\) and not on the explicit form of the vectors \(p_{v}\). In fact, it is even irrelevant in which ambient dimension the vectors \(p_{v}\) have been defined. The following characterisation of the different _kinematic regimes_ that we propose will therefore only take the input of a symmetric \(|V|\times|V|\) matrix \(\mathcal{P}\) with vanishing row and column sums (i.e. the momentum conservation conditions \(\sum_{v\in V}p_{u}\cdot p_{v}=\sum_{v\in V}\mathcal{P}^{u,v}=0\) for all \(u\in V\)), without requiring any explicit knowledge of the \(p_{v}\) vectors. In fact, we will not even require that there are any vectors \(p_{v}\) for which \(\mathcal{P}^{u,v}=p_{u}\cdot p_{v}\).
**Euclidean regime.** We say a given Feynman integral computation problem is in the _Euclidean regime_ if the matrix \(\mathcal{P}\) is negative semi-definite. In this regime, \(\mathcal{F}(\mathbf{x})\geq 0\) for all \(\mathbf{x}\in\mathbb{P}_{+}^{E}\). We call this the Euclidean regime, because the integral (1) is equivalent to an analogous Feynman integral where scalar products are computed with the Euclidean all-minus metric. To see this, note that as \(-\mathcal{P}\) is positive semi-definite, there is a \(|V|\times|V|\) matrix \(\mathcal{Q}\) such that \(\mathcal{P}=-\mathcal{Q}^{T}\mathcal{Q}\). We can think of the column vectors \(\widetilde{p}_{1},\dots,\widetilde{p}_{|V|}\) of \(\mathcal{Q}\) as an auxiliary set of incoming momentum vectors. Elements of \(\mathcal{P}\) can be interpreted as Euclidean, all-minus metric, scalar products of the \(\widetilde{p}_{v}\)-vectors: \(\mathcal{P}^{u,v}=-\widetilde{p}_{u}^{T}\widetilde{p}_{v}=-\sum_{w\in V}\mathcal{Q}^{w,u}\mathcal{Q}^{w,v}\). Translating this back to (1) means that we can change
Figure 1: Partition of kinematics into different regimes.
the signature of the scalar products to the all-minus metric if we replace the external momenta with the \(\widetilde{p}_{v}\) vectors which are defined in an auxiliary space \(\mathbb{R}^{|V|}\). We emphasise that this way of relating Euclidean and Minkowski space integrals is inherently different from the typical _Wick rotation_ procedure and that the \(\widetilde{p}_{v}\)-vectors will in general be different from the original \(p_{v}\) vectors.
**Pseudo-Euclidean regime.** In fact, \(\mathcal{F}(\mathbf{x})\geq 0\) for all \(\mathbf{x}\in\mathbb{P}^{E}_{+}\) in a larger kinematic regime, where \(\mathcal{P}\) is not necessarily negative semi-definite. If for each subset \(V^{\prime}\subset V\) of the vertices the inequality
\[\left(\sum_{v\in V^{\prime}}p_{v}\right)^{2}=\sum_{u,v\in V^{\prime}}p_{u} \cdot p_{v}=\sum_{u,v\in V^{\prime}}\mathcal{P}^{u,v}\leq 0 \tag{6}\]
is respected, then we are in the _pseudo-Euclidean_ regime. The first two equalities in (6) are only included as mnemonic devices; knowledge of \(\mathcal{P}\) is sufficient to check the inequalities. Equivalently, we can require the element sums of all _principal minor matrices_ of the \(\mathcal{P}\) matrix to be \(\leq 0\).
By (5) and (6), the coefficients of \(\mathcal{F}\) are non-negative in the pseudo-Euclidean regime. Our choice of normalization factors ensures that (1) and (2) are real positive in this case.
We remark that there is a commonly used alternative definition of a kinematic regime which, on first sight, is similar to the condition above. This alternative definition requires the inequalities \(p_{u}\cdot p_{v}\leq 0\) to be fulfilled for all \(u,v\in V\) (see, e.g., [51, Sec. 2.5]). This is more restrictive than our condition in (6). In fact, it is too restrictive for our purposes, as not even entirely _Euclidean Feynman integrals_ can generally be described in this regime. The reason for this is that not all negative semi-definite matrices \(\mathcal{P}\) fulfill this more restrictive condition.
In our case, the Euclidean regime is contained in the pseudo-Euclidean regime. To verify this, we have to make sure that a negative semi-definite \(\mathcal{P}\) fulfills the conditions in (6). Such a \(\mathcal{P}\) can be represented with an appropriate set of \(\widetilde{p}_{v}\) vectors as above: \(\mathcal{P}^{u,v}=-\widetilde{p}_{u}^{T}\widetilde{p}_{v}\). For each \(V^{\prime}\subset V\) we get the principal minor element sum
\[\sum_{u,v\in V^{\prime}}\mathcal{P}^{u,v}=-\sum_{u,v\in V^{\prime}}\widetilde {p}_{u}^{T}\widetilde{p}_{v}=-\left(\sum_{v\in V^{\prime}}\widetilde{p}_{v} \right)^{T}\left(\sum_{v\in V^{\prime}}\widetilde{p}_{v}\right)\leq 0\,. \tag{7}\]
Minkowski regimeIf we are not in the pseudo-Euclidean regime (and thereby also not in the Euclidean regime), then we are in the _Minkowski regime_.
Generic and exceptional kinematicsWithout any resort to the explicit incoming momentum vectors \(p_{v}\), we call a vertex \(v\)_internal_ if \(\mathcal{P}^{u,v}=0\) for all \(u\in V\) and _external_ otherwise. Let \(V^{\mathrm{ext}}\subset V\) be the set of external vertices. Complementary to the classification above, we say that our kinematics are _generic_ if for each _proper_ subset \(V^{\prime}\subsetneq V^{\mathrm{ext}}\) of the external vertices of \(G\) and for each non-empty subset \(E^{\prime}\subset E\) of the edges of \(G\) we have
\[\left(\sum_{v\in V^{\prime}}p_{v}\right)^{2}=\sum_{u,v\in V^{\prime}}p_{u} \cdot p_{v}=\sum_{u,v\in V^{\prime}}\mathcal{P}^{u,v}\neq\sum_{e\in E^{\prime }}m_{e}^{2}\,. \tag{8}\]
For example, the kinematics are always generic in the pseudo-Euclidean regime if \(m_{e}>0\) for all \(e\in E\) or if \(\sum_{u,v\in V^{\prime}}\mathcal{P}^{u,v}<0\) for all \(V^{\prime}\subsetneq V^{\mathrm{ext}}\). Note that, unless all \(m_{e}>0\), generic kinematics also exclude _on-shell external momenta_, i.e. cases where \(p_{v}^{2}=\mathcal{P}^{v,v}=0\) for some \(v\in V^{\mathrm{ext}}\).
Kinematic configurations that are not generic are called _exceptional_.
As above, the statements on \(\mathcal{P}^{u,v}\) alone are sufficient for the classification. The other equalities are added to enable a seamless comparison to the literature.
The discussed kinematic regimes and their respective overlaps are illustrated in Figure 1. In contrast to what the figure might suggest, the exceptional kinematics only cover a space that is of lower dimension than the one of the generic regime. The Minkowski regime is not explicitly shown as it covers the whole area that is not pseudo-Euclidean. Note that Minkowski, pseudo-Euclidean and Euclidean kinematics can be exceptional.
feyntrop detects the relevant kinematic regime using the conditions discussed above.
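For illustration, a classification along these lines can be coded directly from the matrix \(\mathcal{P}\) and the list of squared masses; the following Python sketch is ours (it is not feyntrop's implementation) and uses brute-force subset loops that are only practical for small graphs. The genericity test takes the subsets in (8) to be non-empty.

```
import itertools
import numpy as np

def is_euclidean(P, tol=1e-12):
    # Euclidean regime: the scalar-product matrix P is negative semi-definite.
    return bool(np.all(np.linalg.eigvalsh(np.asarray(P, dtype=float)) <= tol))

def is_pseudo_euclidean(P, tol=1e-12):
    # Pseudo-Euclidean regime: the element sum of every principal minor
    # of P is <= 0, cf. eq. (6).
    P = np.asarray(P, dtype=float)
    verts = range(P.shape[0])
    return all(P[np.ix_(S, S)].sum() <= tol
               for r in range(1, P.shape[0] + 1)
               for S in itertools.combinations(verts, r))

def is_generic(P, masses_sqr, tol=1e-12):
    # Generic kinematics: eq. (8), checked here for non-empty proper subsets
    # of the external vertices and non-empty subsets of the edges.
    P = np.asarray(P, dtype=float)
    ext = [v for v in range(P.shape[0]) if np.any(np.abs(P[v]) > tol)]
    mass_sums = {sum(c) for r in range(1, len(masses_sqr) + 1)
                 for c in itertools.combinations(masses_sqr, r)}
    for r in range(1, len(ext)):
        for S in itertools.combinations(ext, r):
            s = P[np.ix_(S, S)].sum()
            if any(abs(s - m) <= tol for m in mass_sums):
                return False
    return True
```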
### Contour deformation
In the pseudo-Euclidean (and thereby also in the Euclidean) regime, \(\mathcal{F}(\mathbf{x})\) stays positive and the integral (2) cannot have any simple poles inside the integration domain.
In the Minkowski regime however, simple propagator poles of the integrand (1) and simple poles associated to zeros of \(\mathcal{F}\) in (2) are avoided using the causal \(i\varepsilon\) prescription (see, e.g., [52]). This prescription tells us to which side of the pole the integration contour needs to be deformed. When evaluating integrals such as (1) numerically, we have to find an _explicit_ choice for such an integration contour. Finding such an explicit contour deformation, which also has decent numerical stability properties, is a surprisingly complicated task. Explicit contour deformations for numerical evaluation were pioneered by Soper [37] and later refined [45, 53]. This original type of contour deformation has the caveat that the projective symmetry of the integral (2) is lost as these deformations are inherently non-projective and usually formulated in affine charts, i.e. 'gauge fixed' formulations of (2). Experience, e.g. from [15], shows that the projective symmetry of (2) is a treasured good that should not be given up lightly.
To retain projective symmetry we will hence use a different deformation than established numerical integration tools. We will use the embedding \(\iota_{\lambda}:\mathbb{P}^{E}_{+}\hookrightarrow\mathbb{CP}^{|E|-1}\) (recall that \(\mathbb{P}^{E}_{+}\) is a subset of \(\mathbb{RP}^{|E|-1}\)) of the projective simplex into \(|E|-1\) complex dimensional projective space given by
\[\iota_{\lambda}:x_{e}\mapsto x_{e}\exp\left(-i\lambda\frac{\partial\mathcal{V }}{\partial x_{e}}(\mathbf{x})\right). \tag{9}\]
This deformation prescription was proposed in [47, eq. (43)] in the context of the cohomological viewpoint on Feynman integrals (see also [54, Sec. 4.3]). As \(\mathcal{U}\) and \(\mathcal{F}\) are homogeneous polynomials of degree \(L\) and \(L+1\) respectively and \(\mathcal{V}(\mathbf{x})=\mathcal{F}(\mathbf{x})/\mathcal{U}(\mathbf{x})\), the partial derivative \(\frac{\partial\mathcal{V}}{\partial x_{e}}\) is a rational function in \(\mathbf{x}\) of homogeneous degree \(0\), so \(\iota_{\lambda}\) indeed respects projective equivalence.
We want to deform the integration contour \(\mathbb{P}^{E}_{+}\) of (2) into \(\iota_{\lambda}\big{(}\mathbb{P}^{E}_{+}\big{)}\subset\mathbb{CP}^{|E|-1}\). The deformation \(\iota_{\lambda}\) does not change the boundary of \(\mathbb{P}^{E}_{+}\) as each boundary face of \(\mathbb{P}^{E}_{+}\) is characterized by at least one vanishing homogeneous coordinate \(x_{e}=0\). So, \(\iota_{\lambda}\big{(}\partial\mathbb{P}^{E}_{+}\big{)}=\partial\mathbb{P}^{E} _{+}\). By Cauchy's theorem, we can deform the contour as long as we do not hit any poles of the integrand \(\phi\). Supposing that \(\lambda\) is small enough such that no poles of \(\phi\) are hit by the deformation, we have
\[\mathcal{I}=\Gamma(\omega)\int_{\iota_{\lambda}\big{(}\mathbb{P}^{E}_{+} \big{)}}\phi=\Gamma(\omega)\int_{\mathbb{P}^{E}_{+}}\iota_{\lambda}^{*}\phi\,, \tag{10}\]
where \(\iota_{\lambda}^{*}\phi\) denotes the _pullback_ of the differential form \(\phi\). A computation on forms reveals that
\[D=D_{0}-2\epsilon\,, \tag{15}\]
where \(D_{0}\) is a fixed number and \(\epsilon\) is an expansion parameter2. Analogously, we define \(\omega_{0}=\sum_{e\in E}\nu_{e}-D_{0}L/2\). Using this notation, we may make the \(\epsilon\) dependence in (12) explicit and expand,
Footnote 2: Note that the causal \(i\varepsilon\) and the regularization/expansion parameter \(\epsilon\) are (unfortunately) usually referred to with the same Greek letter. We will follow this tradition, but use different versions of the letter for the respective meanings consistently.
\[\mathcal{I}=\Gamma(\omega_{0}+\epsilon L)\sum_{k=0}^{\infty}\frac{\epsilon^{ k}}{k!}\int_{\mathbb{P}_{+}^{E}}\left(\prod_{e\in E}\frac{X_{e}^{\nu_{e}}}{ \Gamma(\nu_{e})}\right)\frac{\det\mathcal{J}_{\lambda}(\mathbf{x})}{\mathcal{U} \left(\mathbf{X}\right)^{D_{0}/2}\cdot\mathcal{V}\left(\mathbf{X}\right)^{\omega_{0}}} \log^{k}\left(\frac{\mathcal{U}(\mathbf{X})}{\mathcal{V}(\mathbf{X})^{L}}\right)\, \Omega\,. \tag{16}\]
If the \(k=0\) integral is finite, all higher orders in \(\epsilon\) are also finite as the \(\log^{k}\) factors cannot spoil the integrability. The \(\Gamma\) factor can be expanded in \(\epsilon\) using \(\Gamma(z+1)=z\Gamma(z)\) and the expansion
\[\log\Gamma(1-\epsilon)=\gamma_{E}\epsilon+\sum_{n=2}^{\infty}\frac{\zeta(n)}{ n}\epsilon^{n}\,, \tag{17}\]
with Euler's \(\gamma_{E}\) and Riemann's \(\zeta\) function.
Together, eqs. (16) and (17) give us an explicit formulation of the \(\epsilon\) expansion of the Feynman integral (1) in the quasi-finite case. In the remainder of this article we will explain how to evaluate the expansion coefficients in (16) using the tropical sampling approach.
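The expansion (17) is also easy to verify symbolically; the following sympy snippet is merely such a consistency check of eq. (17) and is not part of feyntrop.

```
import sympy as sp

eps = sp.symbols('eps')
order = 6

# Left-hand side of (17), expanded around eps = 0.
lhs = sp.series(sp.log(sp.gamma(1 - eps)), eps, 0, order).removeO()

# Right-hand side of (17): gamma_E * eps + sum_{n >= 2} zeta(n)/n * eps^n.
rhs = sp.EulerGamma * eps + sum(sp.zeta(n) / n * eps**n for n in range(2, order))

print(sp.simplify(sp.expand(lhs - rhs)))  # prints 0 if (17) holds to this order
```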
## 3 Tropical geometry
### Tropical approximation
We will use the tropical sampling approach which was put forward in [15] to evaluate the deformed parametric Feynman integrals in (12) and (16). Here we briefly review the basic concepts.
For any homogeneous polynomial in \(|E|\) variables \(p(\mathbf{x})=\sum_{k\in\mathrm{supp}(p)}a_{k}\prod_{e=0}^{|E|-1}x_{e}^{k_{e}}\), the support \(\mathrm{supp}(p)\) is the set of multi-indices for which \(p\) has a non-zero coefficient \(a_{k}\). For any such polynomial \(p\), we define the _tropical approximation_\(p^{\mathrm{tr}}\) as
\[p^{\mathrm{tr}}(\mathbf{x})=\max_{k\in\mathrm{supp}(p)}\,\prod_{e=0}^{|E|-1}x_{e}^ {k_{e}}\,. \tag{18}\]
If, for example, \(p(\mathbf{x})=x_{0}^{2}x_{1}-2x_{0}x_{1}x_{2}+5ix_{2}^{3}\), then \(p^{\mathrm{tr}}(\mathbf{x})=\max\{x_{0}^{2}x_{1},x_{0}x_{1}x_{2},x_{2}^{3}\}\). Note that the tropical approximation forgets about the explicit value of the coefficients; it only depends on the fact that a specific coefficient is zero or non-zero. This way, the tropical approximation only depends on the set \(\mathrm{supp}(p)\subset\mathbb{Z}_{\geq 0}^{|E|}\). In fact, it only depends on the shape of the _convex hull_ of \(\mathrm{supp}(p)\), which is the _Newton polytope_ of \(p\). For this reason, \(p^{\mathrm{tr}}\) is nothing but a function avatar of this polytope. Indeed, we can write \(p^{\mathrm{tr}}(\mathbf{x})\) as follows,
\[p^{\mathrm{tr}}(\mathbf{x})=\exp\left(\max_{\mathbf{v}\in\mathbf{N}[p]}\mathbf{v}^ {T}\mathbf{y}\right), \tag{19}\]
where \(\mathbf{y}=(y_{0},\ldots,y_{|E|-1})\) with \(y_{e}=\log x_{e}\), \(\mathbf{v}^{T}\mathbf{y}=\sum_{e\in E}v_{e}y_{e}\) and we maximize over the Newton polytope \(\mathbf{N}[p]\) of \(p\). The exponent above is the _tropicalization_\(\mathrm{Trop}[p]\) of \(p\) over \(\mathbb{C}\) with trivial valuation. It plays a central role in tropical geometry (see, e.g., [55]). For us, the key property of the tropical approximation is that it may be used to put upper and lower bounds on a polynomial:
**Theorem 3.1** ([15, Theorem 8]).: _For a homogeneous \(p\in\mathbb{C}[x_{0},\ldots,x_{|E|-1}]\) that is completely non-vanishing on \(\mathbb{P}^{E}_{+}\) there exists constants \(C_{1},C_{2}>0\) such that_
\[C_{1}\leq\frac{|p(\mathbf{x})|}{p^{\mathrm{tr}}(\mathbf{x})}\leq C_{2}\quad\text{ for all }\quad\mathbf{x}\in\mathbb{P}^{E}_{+}\,. \tag{20}\]
A polynomial \(p\) is _completely non-vanishing_ on \(\mathbb{P}^{E}_{+}\) if it does not vanish in the interior of \(\mathbb{P}^{E}_{+}\) and if another technical condition is fulfilled (see [24, Definition 1] for a precise definition).
The \(\mathcal{U}\) polynomial is always completely non-vanishing on \(\mathbb{P}^{E}_{+}\) and in the pseudo-Euclidean regime also \(\mathcal{F}\) is completely non-vanishing on \(\mathbb{P}^{E}_{+}\). We define the associated tropical approximations \(\mathcal{U}^{\mathrm{tr}}\), \(\mathcal{F}^{\mathrm{tr}}\) and \(\mathcal{V}^{\mathrm{tr}}=\mathcal{F}^{\mathrm{tr}}/\mathcal{U}^{\mathrm{tr}}\).
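As a concrete illustration of (18), the tropical approximation can be evaluated from the support of a polynomial alone; the following short Python sketch (ours) uses the example polynomial from above.

```
import numpy as np

def p_tr(support, x):
    """Tropical approximation (18): max over the monomials x^k with k in supp(p)."""
    x = np.asarray(x, dtype=float)
    return max(np.prod(x ** np.asarray(k)) for k in support)

# Example from the text: p(x) = x0^2*x1 - 2*x0*x1*x2 + 5i*x2^3, so
# supp(p) = {(2,1,0), (1,1,1), (0,0,3)}; the coefficient values are irrelevant.
support = [(2, 1, 0), (1, 1, 1), (0, 0, 3)]
print(p_tr(support, [1.0, 2.0, 0.5]))  # max{2, 1, 0.125} = 2.0
```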
Our key assumption for the integration of Feynman integrals in the Minkowski regime is that the approximation property can also be applied to the deformed Symanzik polynomials.
_Assumption 3.2_.: There are \(\lambda\) dependent constants \(C_{1}(\lambda),C_{2}(\lambda)>0\) such that for small \(\lambda>0\),
\[C_{1}(\lambda)\leq\left|\left(\frac{\mathcal{U}^{\mathrm{tr}}(\mathbf{x})}{ \mathcal{U}(\mathbf{X})}\right)^{D_{0}/2}\left(\frac{\mathcal{V}^{\mathrm{tr}}( \mathbf{x})}{\mathcal{V}(\mathbf{X})}\right)^{\omega_{0}}\right|\leq C_{2}(\lambda) \quad\text{ for all }\quad\mathbf{x}\in\mathbb{P}^{E}_{+}\,, \tag{21}\]
where we recall that \(\mathbf{X}=(X_{1},\ldots,X_{|E|})\) and \(X_{e}=x_{e}\exp\big{(}-i\lambda\frac{\partial\mathcal{V}}{\partial x_{e}}(\bm {x})\big{)}\).
In the pseudo-Euclidean regime the assumption is fulfilled, as we are allowed to set \(\lambda=0\) and use the established approximation property from [15] on \(\mathcal{U}\) and \(\mathcal{F}\). In the Minkowski regime, Assumption 3.2 can only be fulfilled if there are no Landau singularities, i.e. solutions to (14). If there are no such singularities, Assumption 3.2 seems to be mostly fulfilled. It would be very interesting to give a concise set of conditions for the validity of Assumption 3.2. We leave this to a future research project.
Another highly promising research question is to find a value for \(\lambda\) such that the constants \(C_{1}(\lambda)\) and \(C_{2}(\lambda)\) tighten the bounds as much as possible. Finding such an optimal value for \(\lambda\) would result in the first entirely _canonical_ deformation prescription which does not depend on free parameters.
### Tropical sampling
Intuitively, Assumption 3.2 tells us that the integrands in (12) and (16) are, except for phase factors, reasonably approximated by the tropical approximation of the undeformed integrand. To evaluate the integrals (16) with _tropical sampling_, as in [15, Sec. 7.2], we define the probability distribution
\[\mu^{\mathrm{tr}}=\frac{1}{I^{\mathrm{tr}}}\frac{\prod_{e\in E}x_{e}^{\nu_{e} }}{\mathcal{U}^{\mathrm{tr}}(\mathbf{x})^{D_{0}/2}\,\mathcal{V}^{\mathrm{tr}}( \mathbf{x})^{\omega_{0}}}\,\Omega\,, \tag{22}\]
where \(I^{\mathrm{tr}}\) is a normalization factor, which is chosen such that \(\int_{\mathbb{P}^{E}_{+}}\mu^{\mathrm{tr}}=1\). By Assumption 3.2 and the requirement that the integrals in (16) be finite, the factor \(I^{\mathrm{tr}}\) must also be finite. If \(\omega_{0}=0\), this normalization factor is equal to the associated _Hepp bound_ of the graph \(G\)[16]. Because \(\mu^{\mathrm{tr}}>0\) for all \(\mathbf{x}\in\mathbb{P}^{E}_{+}\), \(\mu^{\mathrm{tr}}\) gives rise to a proper probability distribution on this domain.
Using the definition of \(\mu^{\mathrm{tr}}\) to rewrite (16) results in
\[\mathcal{I}=\frac{\Gamma(\omega_{0}+\epsilon L)}{\prod_{e\in E}\Gamma(\nu_{e})}\sum_{k=0}^{\infty}\frac{\epsilon^{k}}{k!}\mathcal{I}_{k}\,,\quad\text{with}\quad\mathcal{I}_{k}=I^{\mathrm{tr}}\int_{\mathbb{P}_{+}^{E}}\left(\prod_{e\in E}\frac{X_{e}^{\nu_{e}}}{x_{e}^{\nu_{e}}}\right)\frac{\mathcal{U}^{\mathrm{tr}}(\mathbf{x})^{D_{0}/2}\,\mathcal{V}^{\mathrm{tr}}(\mathbf{x})^{\omega_{0}}}{\mathcal{U}(\mathbf{X})^{D_{0}/2}\,\mathcal{V}(\mathbf{X})^{\omega_{0}}}\det\mathcal{J}_{\lambda}(\mathbf{x})\log^{k}\left(\frac{\mathcal{U}(\mathbf{X})}{\mathcal{V}(\mathbf{X})^{L}}\right)\mu^{\mathrm{tr}}\,. \tag{23}\]
We will evaluate the integrals above by sampling from the probability distribution \(\mu^{\mathrm{tr}}\).
In [15], two different methods to generate samples from \(\mu^{\mathrm{tr}}\) were introduced. The first method [15, Sec. 5], which does not take the explicit structure of \(\mathcal{U}\) and \(\mathcal{F}\) into account, requires the computation of a triangulation of the refined normal fans of the Newton polytopes of \(\mathcal{U}\) and \(\mathcal{F}\). Once such a triangulation is computed, arbitrarily many samples from \(\mu^{\mathrm{tr}}\) can be generated with little computational effort. Unfortunately, obtaining such a triangulation is a highly computationally demanding process.
The second method [15, Sec. 6] to generate samples from the probability distribution \(\mu^{\mathrm{tr}}\) makes use of a particular property of the Newton polytopes of \(\mathcal{U}\) and \(\mathcal{F}\) which allows one to bypass the costly triangulation step. This second method additionally has the advantage that it is relatively straightforward to implement. This faster method of sampling from \(\mu^{\mathrm{tr}}\) relies on the Newton polytopes of \(\mathcal{U}\) and \(\mathcal{F}\) being _generalized permutahedra_.
For the program feyntrop we will make use of this second method. Our tropical sampling algorithm to produce samples from \(\mu^{\mathrm{tr}}\) is essentially equivalent to the one published with [15].
### Base polytopes and generalized permutahedra
A fantastic property of generalized permutahedra is that they come with a _canonical normal fan_ which greatly facilitates the sampling of \(\mu^{\mathrm{tr}}\), see [15, Theorem 27 and Algorithm 4]. Here, we briefly explain the necessary notions. As a start, we define a more general class of polytopes first and discuss restrictions later.
Base polytopesConsider a function \(z:\mathbf{2}^{E}\to\mathbb{R}\) that assigns a number to each subset of \(E\), the edge set of our Feynman graph \(G\). In the following we often identify a subset of \(E\) with a subgraph of \(G\) and use the respective terms interchangeably. So, \(z\) assigns a number to each subgraph of \(G\). We define \(\mathbf{P}[z]\) to be the subset of \(\mathbb{R}^{|E|}\) that consists of all points \((a_{0},\ldots,a_{|E|-1})\in\mathbb{R}^{|E|}\) which fulfill \(\sum_{e\in E}a_{e}=z(E)\) and the \(2^{|E|}-1\) inequalities
\[\sum_{e\in\gamma}a_{e}\geq z(\gamma)\quad\text{ for all }\quad\gamma\subsetneq E\,. \tag{24}\]
Clearly, these inequalities describe a convex bounded domain, i.e. a _polytope_. This polytope \(\mathbf{P}[z]\) associated to an arbitrary function \(z:\mathbf{2}^{E}\to\mathbb{R}\) is called the _base polytope_.
Generalized permutahedraThe following is a special case of a theorem by Aguiar and Ardila who realized that numerous seemingly different structures from combinatorics can be understood using the same object: the _generalized permutahedron_ which was initially defined by Postnikov [56].
**Theorem 3.3** ([57, Theorem 12.3] and the references therein).: _The polytope \(\mathbf{P}[z]\) is a generalized permutahedron if and only if the function \(z\) is supermodular. That means, \(z\) fulfills the inequalities_
\[z(\gamma)+z(\delta)\leq z(\gamma\cup\delta)+z(\gamma\cap\delta)\text{ for all pairs of subgraphs }\gamma,\delta\subset E\,. \tag{25}\]
Because other properties of generalized permutahedra are not of central interest in this paper, we will take Theorem 3.3 as our definition of these special polytopes. Important for us is that for many kinematic situations the Newton polytopes of the Symanzik polynomials fall into this category. The following theorem is due to Schultka [26]:
**Theorem 3.4**.: _The Newton polytope \(\mathbf{N}[\mathcal{U}]\) of \(\mathcal{U}\) is equal to the base polytope \(\mathbf{P}[z_{\mathcal{U}}]\) with \(z_{\mathcal{U}}\) being the function \(z_{\mathcal{U}}(\gamma)=L_{\gamma}\), which maps each subgraph \(\gamma\) to its loop number \(L_{\gamma}\). Moreover, \(z_{\mathcal{U}}\) is supermodular. Hence, by Theorem 3.3, \(\mathbf{N}[\mathcal{U}]\) is a generalized permutahedron._
Proof.: See [26, Sec. 4] and the references therein. In [16], it was observed that \(\mathbf{N}[\mathcal{U}]\) is a _matroid polytope_, which by [57, Sec. 14] also proves the statement.
Because \(\mathbf{N}[\mathcal{U}]\) is a generalized permutahedron, we also say that \(\mathcal{U}\) has the generalized permutahedron property.
Generalized permutahedron property of the \(\mathcal{F}\) polynomialFor the second Symanzik \(\mathcal{F}\) polynomial the situation is trickier. We need the notion of _mass-momentum spanning_ subgraphs which has been defined by Brown [17] (see also [26, Sec. 4] for an interesting relationship to the concept of _s-irreducibility_[58] or [59] where related results have been obtained). We use the following slightly generalized version of Brown's definition (see also [15, Sec. 7.2]): We call a subgraph \(\gamma\subset E\) mass-momentum spanning if the second Symanzik polynomial of the cograph \(G/\gamma\) vanishes identically, \(\mathcal{F}_{G/\gamma}=0\).
**Theorem 3.5**.: _In the Euclidean regime with generic kinematics, the Newton polytope \(\mathbf{N}[\mathcal{F}]\) is a generalized permutahedron. It is equal to the base polytope \(\mathbf{P}[z_{\mathcal{F}}]\) with the function \(z_{\mathcal{F}}\) defined for all subgraphs \(\gamma\) by \(z_{\mathcal{F}}(\gamma)=L_{\gamma}+1\) if \(\gamma\) is mass-momentum spanning and \(z_{\mathcal{F}}(\gamma)=L_{\gamma}\) otherwise. Consequently, this function \(z_{\mathcal{F}}:\mathbf{2}^{E}\to\mathbb{R}\) is supermodular, i.e. it fulfills (25)._
Proof.: This has also been proven in [26, Sec. 4]. The proof relies on a special infrared factorization property of \(\mathcal{F}\) that has been discovered by Brown [17, Theorem 2.7].
We explicitly state the following generalization of Theorem 3.5:
**Theorem 3.6**.: _Theorem 3.5 holds in all regimes if the kinematics are generic._
Proof.: The \(\mathcal{F}\) polynomial has the same monomials (with different coefficients) as in the Euclidean regime with generic kinematics. To verify this, note that the conditions for generic kinematics prevent cancellations between the mass and momentum part of the \(\mathcal{F}\) polynomial as given in eq. (5). So, the respective Newton polytopes coincide.
There is also the following further generalization of Theorem 3.5 to Euclidean but exceptional kinematics. This generalization is very plausible (see [17, Example 2.5]), but it is a technical challenge to prove it. We will not attempt to include a proof here for the sake of brevity. So, we state this generalization as a conjecture:
_Conjecture 3.7_.: Theorem 3.5 holds in the Euclidean regime for all (also exceptional) kinematics.
We emphasize that \(\mathbf{N}[\mathcal{F}]\) is generally _not_ a generalized permutahedron outside of the Euclidean regime. This has been observed in [26, Remark 4.16] (see also [60, Sec. 4.2], [61, Sec. 2.2.3] or [15, Remark 35]). Explicit counter examples are encountered while computing the massless on-shell boxes depicted in Figure 2. The \(\mathcal{F}\) polynomials of the completely massless box with only on-shell external momenta, the massless box with one off-shell momentum and the massless box with two adjacent off-shell momenta (depicted in Figures 2(a), 2(b) and 2(c)) do not fulfill the generalized permutahedron property. On the other hand, the \(\mathcal{F}\) polynomial does fulfill the generalized permutahedron property for the massless box with two or more off-shell legs such that two off-shell legs are on opposite sides (as depicted in Figure 2(d)).
Therefore, we have to make concessions in the Minkowski regime with exceptional kinematics.
An observation of Arkani-Hamed, Hillman, Mizera is helpful (see [21, eq. (8)] and the discussion around it): the facet presentation of \(\mathbf{N}[\mathcal{F}]\) given in Theorem 3.5 turns out to hold in a quite broad range of kinematic regimes, even if \(\mathbf{N}[\mathcal{F}]\) is not a generalized permutahedron.
_Observation 3.8_.: The Newton polytope of \(\mathcal{F}\) is often equal to the base polytope \(\mathbf{P}[z_{\mathcal{F}}]\) with the function \(z_{\mathcal{F}}\) defined as in Theorem 3.5.
For instance, all massless boxes depicted in Figure 2 have the property that the Newton polytopes of their \(\mathcal{F}\) polynomial are base polytopes described by the respective \(z_{\mathcal{F}}\) functions, i.e. \(\mathbf{N}[\mathcal{F}]=\mathbf{P}[z_{\mathcal{F}}]\). In the first three cases (Figures 2(a), 2(b), 2(c)) the \(z_{\mathcal{F}}\) function does not fulfill the inequalities (25). For the graph in Figure 2(d) these inequalities are fulfilled and the associated Newton polytope \(\mathbf{N}[\mathcal{F}]=\mathbf{P}[z_{\mathcal{F}}]\) is a generalized permutahedron.
Internally, feyntrop uses the polytope \(\mathbf{P}[z_{\mathcal{F}}]\) as a substitute for \(\mathbf{N}[\mathcal{F}]\) as the former is easier to handle and faster to compute than the latter.
It would be very beneficial to have precise conditions for when \(\mathbf{P}[z_{\mathcal{F}}]\) indeed is equal to \(\mathbf{N}[\mathcal{F}]\). Empirically, we have observed that it is valid for quite a wide range of exceptional kinematics. We know, however, that this condition is not fulfilled for arbitrary exceptional kinematics [62]. An explicit counter example is depicted in Figure 3. For this triangle graph with the indicated exceptional kinematic configuration, the polytope \(\mathbf{N}[\mathcal{F}]\) is different from \(\mathbf{P}[z_{\mathcal{F}}]\). We find that \(\mathcal{F}(\mathbf{x})=m^{2}(x_{1}^{2}+x_{2}^{2})+(2m^{2}-Q^{2})x_{1}x_{2}\) which implies that \(\mathbf{N}[\mathcal{F}]\) is a one-dimensional polytope. On the other hand, \(\mathbf{P}[z_{\mathcal{F}}]\) can be shown to be a two-dimensional polytope. In \(D=4\), the Feynman integral associated to Figure 3 is infrared divergent and therefore not quasi-finite. In \(D=6\)
Figure 2: Massless box with different external legs on- or off-shell. On-shell (\(p^{2}=0\)) legs are drawn as dashed lines and off-shell (\(p^{2}\neq 0\)) legs with solid lines. Internal propagators are massless.
feyntrop can evaluate the integral without problems. Nonetheless, we expect there to be more complicated Feynman graphs with similarly exceptional external kinematics, that are quasi-finite, but which cannot be evaluated using feyntrop. We did not, however, manage to find such a graph.
Even if \(\mathbf{N}[\mathcal{F}]\neq\mathbf{P}[z_{\mathcal{F}}]\), the Newton polytope \(\mathbf{N}[\mathcal{F}]\) is _bounded_ by the base polytope \(\mathbf{P}[z_{\mathcal{F}}]\). The reason for this is that \(\mathcal{F}\) can only lose monomials if we make the kinematics less generic.
**Theorem 3.9**.: _We have \(\mathbf{N}[\mathcal{F}]\subset\mathbf{P}[z_{\mathcal{F}}]\)._
Efficient check of the generalized permutahedron property of a base polytopeNaively, it is quite hard to check if the base polytope \(\mathbf{P}[z]\) associated to a given function \(z:\mathbf{2}^{E}\to\mathbb{R}\) is a generalized permutahedron. There are of the order of \(2^{2|E|}\) inequalities to be checked in (25). A more efficient way is to only check the following inequalities
\[z(\gamma\cup\{e\})+z(\gamma\cup\{h\})\leq z(\gamma)+z(\gamma\cup\{e,h\}) \tag{26}\]
for all subgraphs \(\gamma\subset E\) and edges \(e,h\in E\setminus\gamma\). The inequalities (26) imply the ones in (25). For (26) less than \(|E|^{2}2^{|E|}\) inequalities need to be checked. So, (26) is a more efficient version of (25).
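A minimal Python sketch of this check (ours), with subgraphs encoded as frozensets of edge labels and \(z\) passed as an arbitrary function on subgraphs:

```
from itertools import combinations

def is_supermodular(z, edges):
    """Check the inequalities (26), which imply (25), for a subgraph function z.

    `z` maps a frozenset of edge labels to a number; `edges` is the full edge set E.
    """
    edges = list(edges)
    for r in range(len(edges) - 1):
        for gamma in combinations(edges, r):
            gamma = frozenset(gamma)
            rest = [e for e in edges if e not in gamma]
            for e, h in combinations(rest, 2):
                lhs = z(gamma | {e}) + z(gamma | {h})
                rhs = z(gamma) + z(gamma | {e, h})
                if lhs > rhs:
                    return False
    return True
```

For \(z_{\mathcal{U}}\) from Theorem 3.4 this check always succeeds, while for \(z_{\mathcal{F}}\) it can fail at exceptional, non-Euclidean kinematics, as discussed above.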
### Generalized permutahedral tropical sampling
feyntrop uses a slightly adapted version of the generalized permutahedron tropical sampling algorithm from [15, Sec. 6.1 and Sec. 7.2] to sample from the distribution given by \(\mu^{\mathrm{tr}}\) in eq. (22).
The algorithm involves a preprocessing and a sampling step.
PreprocessingThe first algorithmic task to prepare for the sampling from \(\mu^{\mathrm{tr}}\) is to check in which regime the kinematic data are located. The kinematic data are provided via the matrix \(\mathcal{P}^{u,v}\) as it was defined in Section 2.1 and via a list of masses \(m_{e}\) for each edge \(e\in E\). If the symmetric \(|V|\times|V|\) matrix \(\mathcal{P}^{u,v}\) is negative semi-definite (which is easy to check using matrix diagonalization), then we are in the Euclidean regime. Similarly we check if the defining (in)equalities for the other kinematic regimes given in Section 2.2 are fulfilled or not. Depending on the kinematic regime, we need to use a contour deformation for the integration or not. Further, if the kinematics are Euclidean or generic, we know that the generalized permutahedron property of \(\mathcal{F}\) is fulfilled (also thanks to the unproven Conjecture 3.7). Table 1 summarizes this dependence of the algorithm on the kinematic regime.
Figure 3: Triangle Feynman graph relevant in QED. The two solid propagators have mass \(m\) and the solid legs have incoming squared momentum \(m^{2}\). The dashed propagator is massless and the doubled leg has incoming squared momentum \(Q^{2}\).
If we find that we are at an exceptional and non-Euclidean kinematic point, \(\mathbf{N}[\mathcal{F}]\) might not be a generalized permutahedron and it might not even be equal to \(\mathbf{P}[z_{\mathcal{F}}]\). In this case, the program prints a message warning the user that the integration might not work. The program then continues under the assumption that \(\mathbf{N}[\mathcal{F}]=\mathbf{P}[z_{\mathcal{F}}]\). In any other case, \(\mathbf{N}[\mathcal{F}]\) is a generalized permutahedron and equal to \(\mathbf{P}[z_{\mathcal{F}}]\). Hence, the tropical sampling algorithm is guaranteed to give a convergent Monte Carlo integration method by [15, Sec. 6.1].
The next task is to compute the loop number \(L_{\gamma}\) and check if \(\gamma\) is mass-momentum spanning (by asking if \(\mathcal{F}_{G/\gamma}=0\)) for each subgraph \(\gamma\subset E\). Using these data, we can compute the values of \(z_{\mathcal{U}}(\gamma)\) and \(z_{\mathcal{F}}(\gamma)\) for all subgraphs \(\gamma\subset E\) using the respective formulas from Theorems 3.4 and 3.5.
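For illustration, the loop numbers \(L_{\gamma}\) entering \(z_{\mathcal{U}}\) and \(z_{\mathcal{F}}\) can be tabulated with a union-find pass per subgraph; the sketch below (ours, not feyntrop's code) encodes subgraphs as bitmasks over the edge labels. The mass-momentum spanning flags additionally require the check \(\mathcal{F}_{G/\gamma}=0\) and are not included here.

```
def loop_number(edges, num_vertices, subgraph_mask):
    """L_gamma = |gamma| - num_vertices + (# connected components of (V, gamma))."""
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    n_edges = 0
    for e, (u, v) in enumerate(edges):
        if subgraph_mask & (1 << e):
            n_edges += 1
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    components = sum(1 for v in range(num_vertices) if find(v) == v)
    return n_edges - num_vertices + components

# Tabulate L_gamma for all 2^|E| subgraphs of a small example graph.
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]          # a triangle with a doubled edge
table = [loop_number(edges, 3, g) for g in range(1 << len(edges))]
print(table[-1])  # loop number of the full graph: 2
```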
If we are at an exceptional and non-Euclidean kinematic point, we check the inequalities (26) for the \(z_{\mathcal{F}}\) function. If they are all fulfilled, then \(\mathbf{P}[z_{\mathcal{F}}]\) is a generalized permutahedron and we get further indication that the tropical integration step will be successful. The program prints a corresponding message in this case. Also assuming that Assumption 3.2 is fulfilled, we can compute all integrals in (23) efficiently.
Note that even in the pseudo-Euclidean and the Minkowski regimes with exceptional kinematics, the integration is often successful. For instance, we can integrate all Feynman graphs depicted in Figure 2 regardless of the fulfillment of the generalized permutahedron property. In fact, we did not find a quasi-finite example where the algorithm fails (even though the convergence rate is quite bad for examples in highly exceptional kinematic regimes). We emphasise, however, that the user should check the convergence of the result separately when integrating at a manifestly exceptional and non-Euclidean kinematic point. For instance, by running the program repeatedly with different numbers of sample points or by slightly perturbing the kinematic point. Recall that for generic kinematics \(\mathbf{N}[\mathcal{F}]\) is always a generalized permutahedron by Theorem 3.6 and the integration is guaranteed to work if the finiteness assumptions are fulfilled.
The next computational step is to compute the _generalized degree of divergence_ (see [15, Sec. 7.2]) for each subgraph \(\gamma\subset E\). It is defined by
\[\omega(\gamma)=\sum_{e\in\gamma}\nu_{e}-DL_{\gamma}/2-\omega\delta_{\gamma}^{ \mathrm{m.m.}}, \tag{27}\]
where \(L_{\gamma}\) is the loop number of the subgraph \(\gamma\) and \(\delta_{\gamma}^{\mathrm{m.m.}}=1\) if \(\gamma\) is mass-momentum spanning and \(0\) otherwise.
If \(\omega(\gamma)\leq 0\) for any proper subgraph \(\gamma\), then we discovered a subdivergence. This means that all integrals (16) are divergent. Tropical sampling is not possible in this case and the program prints an error message and terminates. An additional analytic continuation step from (16) to a set of finite integrals would solve this issue. We will leave this to a future research project.
| | Euclidean | Pseudo-Euclidean | Minkowski |
|---|---|---|---|
| Generic | no def. / always GP | no def. / always GP | def. / always GP |
| Exceptional | no def. / always GP | no def. / not always GP | def. / not always GP |

Table 1: Table of the necessity of a deformation (def.) and the fulfillment of the generalized permutahedron property of \(\mathcal{F}\) (GP) in each kinematic regime.

If we have \(\omega(\gamma)>0\) for all \(\gamma\subset E\), we can proceed to the key preparatory step for generalized permutahedral tropical sampling: We use \(\omega(\gamma)\) to compute the following auxiliary subgraph function \(J(\gamma)\), which is _recursively_ defined by \(J(\emptyset)=1\) and
\[J(\gamma)=\sum_{e\in\gamma}\frac{J(\gamma\setminus e)}{\omega(\gamma\setminus e )}\text{ for all }\gamma\subset E\,, \tag{28}\]
where \(\gamma\setminus e\) is the subgraph \(\gamma\) with the edge \(e\) removed. The terminal element of this recursion is the subgraph that contains all edges \(E\) of \(G\). We find that \(J(E)=I^{\mathrm{tr}}\), where \(I^{\mathrm{tr}}\) is the normalization factor in (22) and (23) (see [15, Proposition 29] for a proof and details).
At the end of the preprocessing step we compile a table with the information \(L_{\gamma},\delta_{\gamma}^{\mathrm{m.m.}},\omega(\gamma)\) and \(J(\gamma)\) for each subgraph \(\gamma\subset E\) and store it in the memory of the computer.
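A possible layout for such a table (our sketch, not feyntrop's code) indexes subgraphs by bitmasks. In the recursion (28) we drop the \(1/\omega(\emptyset)\) factor for single-edge subgraphs by convention; on simple examples this reproduces \(J(E)=I^{\mathrm{tr}}\).

```
def build_J_table(num_edges, omega):
    """Tabulate J(gamma) for all subgraphs (bitmasks) via the recursion (28).

    `omega` is a list indexed by subgraph bitmask containing the generalized
    degree of divergence (27); omega[g] > 0 is assumed for all non-empty proper
    subgraphs g (no subdivergences). The 1/omega factor of the empty subgraph
    is dropped, so that J[-1] plays the role of the normalization I^tr in (22).
    """
    J = [0.0] * (1 << num_edges)
    J[0] = 1.0                                # J(emptyset) = 1
    for g in range(1, 1 << num_edges):
        acc = 0.0
        for e in range(num_edges):
            if g & (1 << e):
                sub = g & ~(1 << e)           # gamma \ e
                acc += (J[sub] / omega[sub]) if sub else J[sub]
        J[g] = acc
    return J
```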
Sampling stepThe sampling step of the algorithm repeats the following simple algorithm to generate samples \(\mathbf{x}\in\mathbb{P}_{+}^{E}\) that are distributed according to the probability density (22). It is completely described in Algorithm 1. The runtime of our implementation of the algorithm grows roughly quadratically with \(|E|\), but a linear runtime is achievable. The validity of the algorithm has been proven in a more general setup in [15, Proposition 31]. The additional computation of the values of \(\mathcal{U}^{\mathrm{tr}}(\mathbf{x})\) and \(\mathcal{V}^{\mathrm{tr}}(\mathbf{x})\) is an application of an optimization algorithm by Fujishige and Tomizawa [63] (see also [15, Lemma 26]).
The key step of the sampling algorithm is to interpret the recursion (28) as a probability distribution for a given subgraph over its edges. That means, for a given \(\gamma\subset E\) we define \(p_{e}^{\gamma}=\frac{1}{J(\gamma)}\frac{J(\gamma\setminus e)}{\omega(\gamma \setminus e)}\). Obviously, \(p_{e}^{\gamma}\geq 0\) and by (28) we have \(\sum_{e\in\gamma}p_{e}^{\gamma}=1\). So, for each \(\gamma\subset E\), \(p_{e}^{\gamma}\) gives a proper probability distribution on the edges of the subgraph \(\gamma\).
```
Initialize the variables \(\gamma=E\) and \(\kappa,U=1\).
while \(\gamma\neq\emptyset\) do
    Pick a random edge \(e\in\gamma\) with probability \(p_{e}^{\gamma}=\frac{1}{J(\gamma)}\frac{J(\gamma\setminus e)}{\omega(\gamma\setminus e)}\).
    Set \(x_{e}=\kappa\).
    If \(\gamma\) is mass-momentum spanning but \(\gamma\setminus e\) is not, set \(V=x_{e}\).
    If \(L_{\gamma\setminus e}<L_{\gamma}\), multiply \(U\) with \(x_{e}\) and store the result in \(U\), i.e. set \(U\leftarrow x_{e}\cdot U\).
    Remove the edge \(e\) from \(\gamma\), i.e. set \(\gamma\leftarrow\gamma\setminus e\).
    Pick a uniformly distributed random number \(\xi\in[0,1]\).
    Multiply \(\kappa\) with \(\xi^{1/\omega(\gamma)}\) and store the result in \(\kappa\), i.e. set \(\kappa\leftarrow\kappa\xi^{1/\omega(\gamma)}\).
endwhile
Return \(\mathbf{x}=[x_{0},\ldots,x_{|E|-1}]\in\mathbb{P}_{+}^{E}\), \(\mathcal{U}^{\mathrm{tr}}(\mathbf{x})=U\) and \(\mathcal{V}^{\mathrm{tr}}(\mathbf{x})=V\).
```
**Algorithm 1** Generating a sample distributed as \(\mu^{\mathrm{tr}}\) from (22)
The algorithm can also be interpreted as iteratively _cutting_ edges of the graph \(G\): We start with \(\gamma=E\) and pick a random edge with probability \(p_{e}^{\gamma}\). This edge is _cut_ and removed from \(\gamma\). We continue with the newly obtained graph and repeat this cutting process until all edges are removed. In the course of this, Algorithm 1 computes appropriate random values for the coordinates \(\mathbf{x}\in\mathbb{P}_{+}^{E}\).
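A direct Python transcription of Algorithm 1 (ours), assuming the bitmask-indexed tables \(\omega\), \(J\), the loop numbers and the mass-momentum spanning flags from the preprocessing sketches above, reads:

```
import random

def tropical_sample(num_edges, omega, J, L, mm):
    """Draw one sample x ~ mu^tr following Algorithm 1.

    omega, J, L (loop numbers) and mm (mass-momentum spanning flags) are lists
    indexed by subgraph bitmasks. Returns x together with U^tr(x) and V^tr(x).
    """
    gamma = (1 << num_edges) - 1            # start from the full edge set E
    kappa, U, V = 1.0, 1.0, 1.0
    x = [0.0] * num_edges
    while gamma:
        edges_in = [e for e in range(num_edges) if gamma & (1 << e)]
        if len(edges_in) == 1:
            e = edges_in[0]                 # only one choice left
        else:
            # p_e^gamma = J(gamma\e) / (omega(gamma\e) * J(gamma)); random.choices
            # normalizes the weights, so the 1/J(gamma) factor can be omitted
            weights = [J[gamma & ~(1 << e)] / omega[gamma & ~(1 << e)] for e in edges_in]
            e = random.choices(edges_in, weights=weights)[0]
        sub = gamma & ~(1 << e)
        x[e] = kappa
        if mm[gamma] and not mm[sub]:
            V = x[e]
        if L[sub] < L[gamma]:
            U *= x[e]
        gamma = sub
        if gamma:                           # kappa is no longer needed in the last step
            kappa *= random.random() ** (1.0 / omega[gamma])
    return x, U, V
```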
## 4 Numerical integration
### Monte Carlo integration
We now have all the necessary tools at hand to evaluate the integrals in (23) using Monte Carlo integration. In this section, we briefly review this procedure. The integrals in (23) are of the form
\[I_{f}=\int_{\mathbb{P}^{E}_{+}}f(\mathbf{x})\mu^{\rm tr}\,, \tag{29}\]
where, thanks to the tropical approximation property, \(f(\mathbf{x})\) is a function that is at most log-singular inside, or on the boundary of, \(\mathbb{P}^{E}_{+}\). To evaluate such an integral, we first use the tropical sampling Algorithm 1 to randomly sample \(N\) points \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathbb{P}^{E}_{+}\) that are distributed according to the tropical probability measure \(\mu^{\rm tr}\). By the central limit theorem and as \(f(\mathbf{x})\) is square-integrable,
\[I_{f}\approx I_{f}^{(N)}\qquad\qquad\text{where}\quad\ I_{f}^{(N)}=\frac{1}{N} \sum_{i=1}^{N}f(\mathbf{x}^{(i)})\,. \tag{30}\]
For sufficiently large \(N\), the expected error of this approximation of the integral \(I_{f}\) is
\[\sigma_{f}=\sqrt{\frac{I_{f^{2}}-I_{f}^{2}}{N}}\quad\text{ where }\quad\ I_{f^{2}}=\int_{\mathbb{P}^{E}_{+}}f(\mathbf{x})^{2}\mu^{\rm tr}\,, \tag{31}\]
which itself can be estimated (as long as \(f(\mathbf{x})^{2}\) is square-integrable) by
\[\sigma_{f}\approx\sigma_{f}^{(N)}\qquad\qquad\text{where}\quad\sigma_{f}^{(N) }=\sqrt{\frac{1}{N-1}\left(I_{f^{2}}^{(N)}-\left(I_{f}^{(N)}\right)^{2}\right) }\text{ and }I_{f^{2}}^{(N)}=\frac{1}{N}\sum_{i=1}^{N}f(\mathbf{x}^{(i)})^{2}\,. \tag{32}\]
To evaluate the estimator \(I_{f}^{(N)}\) and the expected error \(\sigma_{f}^{(N)}\) it is necessary to evaluate \(f(\mathbf{x})\) for \(N\) different values of \(\mathbf{x}\). As the random points \(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\in\mathbb{P}^{E}_{+}\) can be obtained quite quickly using Algorithm 1, this evaluation becomes a bottleneck. In the next section, we describe a fast method to perform this evaluation, which is implemented in feyntrop to efficiently obtain Monte Carlo estimates and error terms for the integrals in (23).
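In code, the estimators (30) and (32) are plain sample means; a minimal numpy sketch (ours) for a real-valued integrand, applied separately to the real and imaginary parts of the integrands in (23), is:

```
import numpy as np

def mc_estimate(f_values):
    """Monte Carlo estimate I_f^(N) and its expected error sigma_f^(N), eqs. (30) and (32).

    `f_values` are the values f(x^(i)) of a real-valued integrand at N points
    x^(i) sampled from mu^tr.
    """
    f = np.asarray(f_values, dtype=float)
    N = len(f)
    I_f = f.mean()                                          # eq. (30)
    sigma_f = np.sqrt((np.mean(f**2) - I_f**2) / (N - 1))   # eq. (32)
    return I_f, sigma_f
```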
### Fast evaluation of (deformed) Feynman integrands
To evaluate the integrals in (23) using a Monte Carlo approach we do not only have to be able to sample from the distribution \(\mu^{\rm tr}\), but we also need to evaluate the remaining integrand (denoted as \(f(\mathbf{x})\) in the last section) rapidly. Explicitly for the numerical evaluation of (23), we have to be able to compute \(X_{e}=x_{e}\exp\big{(}-i\lambda\frac{\partial\mathcal{V}}{\partial x_{e}}(\bm {x})\big{)}\) as well as \(\mathcal{U}(\mathbf{X}),\mathcal{V}(\mathbf{X})\) and \(\det\mathcal{J}_{\lambda}(\mathbf{x})\) for any \(\mathbf{x}\in\mathbb{P}^{E}_{+}\).
Evaluation of the \(\mathcal{U}\) and \(\mathcal{F}\) polynomialsSurprisingly, the explicit polynomial expression for \(\mathcal{U}\) and \(\mathcal{F}\) from eq. (5) are _harder_ to evaluate than the matrix and determinant expression (4) if the underlying graph exceeds a certain complexity. The reason for this is that the number of monomials in (5) increases exponentially with the loop number (see, e.g., [64] for the asymptotic growth rate of
the number of spanning trees in a regular graph), while the size of the matrices in (4) only increases linearly. Standard linear algebra algorithms such as the Cholesky or LU decompositions [65] provide polynomial time algorithms to compute the inverse and determinant of \(\mathcal{L}(\mathbf{x})\) and therefore values of \(\mathcal{U}(\mathbf{x})\) and \(\mathcal{F}(\mathbf{x})\) (see, e.g., [15, Sec. 7.1]). In fact, the linear algebra problems on _graph Laplacian_ matrices that need to be solved to compute \(\mathcal{U}(\mathbf{x})\) and \(\mathcal{F}(\mathbf{x})\) fall into a class of problems for which _nearly linear runtime_ algorithms are available [66].
Explicit formulas for the \(\mathcal{V}\) derivativesWe need explicit formulas for the derivatives of \(\mathcal{V}\). These formulas provide fast evaluation methods for \(\mathbf{X}\) and the Jacobian \(\mathcal{J}_{\lambda}(\mathbf{x})\).
Consider the \((|V|-1)\times(|V|-1)\) matrix \(\mathcal{M}(\mathbf{x})=\mathcal{L}^{-1}(\mathbf{x})\,\mathcal{P}\,\mathcal{L}^{-1}( \mathbf{x})\) with \(\mathcal{L}(\mathbf{x})\) and \(\mathcal{P}\) as defined in Section 2.1. For edges \(e\) and \(h\) that connect the vertices \(u_{e},v_{e}\) and \(u_{h},v_{h}\) respectively, we define
\[\begin{split}\mathcal{A}(\mathbf{x})_{e,h}&=\frac{1}{x _{e}x_{h}}\left(\mathcal{M}(\mathbf{x})_{u_{e},u_{h}}+\mathcal{M}(\mathbf{x})_{v_{e},v _{h}}-\mathcal{M}(\mathbf{x})_{u_{e},v_{h}}-\mathcal{M}(\mathbf{x})_{v_{e},u_{h}} \right)\\ \mathcal{B}(\mathbf{x})_{e,h}&=\frac{1}{x_{e}x_{h}} \left(\mathcal{L}^{-1}(\mathbf{x})_{u_{e},u_{h}}+\mathcal{L}^{-1}(\mathbf{x})_{v_{e}, v_{h}}-\mathcal{L}^{-1}(\mathbf{x})_{u_{e},v_{h}}-\mathcal{L}^{-1}(\mathbf{x})_{v_{e},u_{h}} \right),\end{split} \tag{33}\]
where we agree that \(\mathcal{L}^{-1}(\mathbf{x})_{u,v}=\mathcal{M}(\mathbf{x})_{u,v}=0\) if any of \(u\) or \(v\) is equal to \(v_{0}\), the arbitrary vertex that has been removed in the initial expression of the Feynman integral (1). It follows from (4) and the matrix differentiation rule \(\frac{\partial}{\partial x_{e}}\mathcal{L}^{-1}(\mathbf{x})_{u,v}=\left(-\mathcal{ L}^{-1}(\mathbf{x})\frac{\partial\mathcal{L}}{\partial x_{e}}(\mathbf{x})\mathcal{L}^{-1}( \mathbf{x})\right)_{u,v}\) that
\[\frac{\partial\mathcal{V}}{\partial x_{e}}(\mathbf{x})=-\mathcal{A}(\mathbf{x})_{e,e }+m_{e}^{2}\,,\qquad\quad\frac{\partial^{2}\mathcal{V}}{\partial x_{e} \partial x_{h}}(\mathbf{x})=2\delta_{e,h}\,\frac{\mathcal{A}(\mathbf{x})_{e,e}}{x_{e} }-2(\mathcal{A}(\mathbf{x})\circ\mathcal{B}(\mathbf{x}))_{e,h}\,, \tag{34}\]
where we use the _Hadamard_ or _element-wise matrix product_, \((\mathcal{A}(\mathbf{x})\circ\mathcal{B}(\mathbf{x}))_{e,h}=\mathcal{A}(\mathbf{x})_{e,h} \cdot\mathcal{B}(\mathbf{x})_{e,h}\).
Computation of the relevant factors in the integrands of (23)We summarize the necessary steps to compute all the factors in the deformed and \(\epsilon\)-expanded tropical Feynman integral representation (23).
1. Compute the graph Laplacian \(\mathcal{L}(\mathbf{x})\) as defined in Section 2.1.
2. Compute the inverse \(\mathcal{L}^{-1}(\mathbf{x})\) (e.g. by Cholesky decomposing \(\mathcal{L}(\mathbf{x})\)).
3. Use this to evaluate the derivatives of \(\mathcal{V}(\mathbf{x})\) via the formulas in (33) and (34).
4. Compute the values of the deformed \(\mathbf{X}\) parameters: \(X_{e}=x_{e}\exp\big{(}-i\lambda\frac{\partial\mathcal{V}}{\partial x_{e}}( \mathbf{x})\big{)}\).
5. Compute the Jacobian \(\mathcal{J}_{\lambda}(\mathbf{x})\) using the formula in (11).
6. Evaluate \(\det\mathcal{J}_{\lambda}(\mathbf{x})\) (e.g. by using a LU decomposition of \(\mathcal{J}_{\lambda}(\mathbf{x})\)).
7. Compute the deformed graph Laplacian \(\mathcal{L}(\mathbf{X})\).
8. Compute \(\mathcal{L}^{-1}(\mathbf{X})\) and \(\det\mathcal{L}(\mathbf{X})\) (e.g. by using a LU decomposition of \(\mathcal{L}(\mathbf{X})\) as a Cholesky decomposition is not possible, because \(\mathcal{L}(\mathbf{X})\) is not a hermitian matrix in contrast to \(\mathcal{L}(\mathbf{x})\)).
9. Use the formulas (4) to obtain values for \(\mathcal{U}(\mathbf{X})\), \(\mathcal{F}(\mathbf{X})\) and \(\mathcal{V}(\mathbf{X})=\mathcal{F}(\mathbf{X})/\mathcal{U}(\mathbf{X})\).
The computation obviously simplifies if we set \(\lambda=0\), in which case we have \(\mathbf{X}=\mathbf{x}\). We are allowed to set \(\lambda=0\) if we do not need the contour deformation. This is the case, for instance, in the Euclidean or the pseudo-Euclidean regimes. In our implementation we check if we are in these regimes and adjust the evaluation of the integrand accordingly.
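To make these steps concrete, here is a numpy sketch of steps 1-9 for a single sample point (ours, not feyntrop's implementation). The conventions that \(\mathcal{L}(\mathbf{x})\) is the weighted graph Laplacian with edge weights \(1/x_{e}\) and the row and column of \(v_{0}\) deleted, that \(\mathcal{U}=\det\mathcal{L}\prod_{e}x_{e}\) and that \(\mathcal{V}\) is expressed through \(\mathcal{L}^{-1}\) and \(\mathcal{P}\) are assumptions chosen to be consistent with (33) and (34); the Jacobian is obtained from the chain rule applied to (9) rather than from the formula (11).

```
import numpy as np

def deformed_integrand_factors(edges, P, msqr, x, lam):
    """Steps 1-9 for a single sample x: returns U(X), V(X), det J_lambda(x) and X.

    edges: list of vertex pairs (u_e, v_e); P: full |V| x |V| scalar-product
    matrix; msqr: squared masses m_e^2; lam: deformation parameter lambda.
    The last vertex plays the role of the removed vertex v_0.
    """
    x = np.asarray(x, dtype=float)
    msqr = np.asarray(msqr, dtype=float)
    nV, nE = P.shape[0], len(edges)
    # reduced incidence vectors b_e (components belonging to v_0 dropped)
    B = np.zeros((nE, nV - 1))
    for e, (u, v) in enumerate(edges):
        if u < nV - 1: B[e, u] += 1.0
        if v < nV - 1: B[e, v] -= 1.0
    Pred = np.asarray(P, dtype=float)[: nV - 1, : nV - 1]

    def laplacian(y):                                  # steps 1 and 7
        return B.T @ np.diag(1.0 / y) @ B

    Linv = np.linalg.inv(laplacian(x))                 # step 2
    M = Linv @ Pred @ Linv
    A = (B @ M @ B.T) / np.outer(x, x)                 # eq. (33)
    Bmat = (B @ Linv @ B.T) / np.outer(x, x)
    dV = -np.diag(A) + msqr                            # eq. (34), step 3
    d2V = 2.0 * np.diag(np.diag(A) / x) - 2.0 * A * Bmat

    X = x * np.exp(-1j * lam * dV)                     # step 4, eq. (9)
    # steps 5-6: Jacobian dX_e/dx_h from the chain rule applied to (9)
    Jac = np.diag(np.exp(-1j * lam * dV)) - 1j * lam * (X[:, None] * d2V)
    detJ = np.linalg.det(Jac)

    LX = laplacian(X)                                  # step 7
    LXinv = np.linalg.inv(LX)                          # step 8
    U_X = np.linalg.det(LX) * np.prod(X)               # step 9
    V_X = np.dot(msqr, X) - np.einsum('uv,uv->', Pred, LXinv)
    return U_X, V_X, detJ, X
```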
## 5 The program feyntrop
We have implemented the contour-deformed tropical integration algorithm, which we discussed in the previous sections, in a C++ module named feyntrop. This module is an upgrade to previous code developed by the first author in [15].
feyntrop has been checked against AMFlow[42] and pySecDec[36] for roughly 15 different diagrams with 1-3 loops and 2-5 legs at varying kinematic points, in both the Euclidean and Minkowski regimes, finding agreement in all cases within the given uncertainty bounds. In the Euclidean regime, the original algorithm has been checked against numerous analytic computations that have been obtained at higher loop order using conformal four-point integral and graphical function techniques [67].
Note that our prefactor convention, which we fixed in eqs. (1) and (2), differs from the one in AMFlow and pySecDec by a factor of \((-1)^{|\nu|}\), where \(|\nu|=\sum_{e=0}^{|E|-1}\nu_{e}\). In comparison to FIESTA[19], our convention differs by a factor of \((-1)^{|\nu|}\exp\left(-L\gamma_{\text{E}}\epsilon\right)\).
### Installation
The source code of feyntrop is available in the repository [https://github.com/michibo/feyntrop](https://github.com/michibo/feyntrop) on github. It can be downloaded and built by running the following sequence of commands
git clone --recursive git@github.com:michibo/feyntrop.git
cd feyntrop
make

in a Linux environment. feyntrop is interfaced with python[68] via the library pybind11[69]. Additionally, it uses the optimized linear algebra routines from the Eigen3 package [70] and the OpenMP C++ module [71] for the parallelization of the Monte Carlo sampling step.
feyntrop can be loaded in a python environment by importing the file py_feyntrop.py, located in the top directory of the package. To ensure that feyntrop has been built correctly, one may execute the python file /tests/test_suite.py. This script compares the output of feyntrop against pre-computed values. To do so, it will locally compute six examples with 1-2 loops and 2-4 legs, some in the Euclidean and others in the Minkowski regime.
The file py_feyntrop.py includes additional functionality for the python interface serving three purposes. Firstly, it simplifies the specification of vertices and edges of a Feynman diagram in comparison to the C++ interface of feyntrop. Secondly, it allows for self-chosen momentum variables given by a set of replacement rules, instead of having to manually specify the full scalar product matrix \(\mathcal{P}^{u,v}\) from (4). Lastly, the output of the \(\epsilon\) expansion can be printed in a readable format.
As already indicated in Section 2.1, we employ zero-indexing throughout. This means that edges and vertices are labelled as \(\{0,1,\ldots\}\). This facilitates seamless interoperability with the programming language features of python.
### Basic usage of feyntrop
In this section, we will illustrate the basic workflow of feyntrop with an example. The code for this example can be executed and inspected with jupyter[72] by calling
jupyter notebook tutorial_2L_3pt.ipynb
within the top directory of the feyntrop package.
We will integrate the following 2-loop 3-point graph in \(D=2-2\epsilon\) dimensional spacetime:
The dashed lines denote on-shell, massless particles with momenta \(p_{0}\) and \(p_{1}\) such that \(p_{0}^{2}=p_{1}^{2}=0\). The solid, internal lines each have mass \(m\). The double line is associated with an off-shell momentum \(p_{2}\) with \(p_{2}^{2}\neq 0\). For the convenience of the reader, both vertices and edges are labeled explicitly in this example. feyntrop requires us to label the external vertices (as defined in Section 2.2) _before_ the internal vertices. In the current example, the vertices are \(V=V^{\text{ext}}\sqcup V^{\text{int}}=\{0,1,2\}\sqcup\{3\}\).
The momentum space Feynman integral representation (1) with unit edge weights reads
\[\mathcal{I}=\pi^{-2+2\epsilon}\int\frac{\text{d}^{2-2\epsilon}k_{0}\,\text{d}^ {2-2\epsilon}k_{1}}{\big{(}q_{0}^{2}-m^{2}+i\varepsilon\big{)}\big{(}q_{1}^{2} -m^{2}+i\varepsilon\big{)}\big{(}q_{2}^{2}-m^{2}+i\varepsilon\big{)}\big{(}q_{ 3}^{2}-m^{2}+i\varepsilon\big{)}\big{(}q_{4}^{2}-m^{2}+i\varepsilon\big{)}}\,, \tag{35}\]
where we integrated out the \(\delta\) functions in eq. (1) by requiring that \(q_{0}=k_{0}\), \(q_{1}=k_{0}+p_{1}\), \(q_{2}=k_{0}+k_{1}+p_{1}\), \(q_{3}=p_{0}-k_{0}-k_{1}\) and \(q_{4}=k_{1}\). We choose the phase space point
\[m^{2}=0.2\,,\quad p_{0}^{2}=p_{1}^{2}=0\,,\quad p_{2}^{2}=1\,, \tag{36}\]
which is in the Minkowski regime because \(p_{2}^{2}>0\) - see Section 2.2. To begin this calculation, first open a python script or a jupyter notebook and import py_feyntrop:
from py_feyntrop import *

Here we are assuming that feyntrop.so and py_feyntrop.py are both in the working directory.
To define the graph, we provide a list of edges with edge weights \(\nu_{e}\) and squared masses \(m_{e}^{2}\):
\[\big{(}\big{(}u_{0},v_{0}\big{)}\,,\,\nu_{0}\,,\,m_{0}^{2}\big{)}\,,\,\ldots \,,\big{(}\big{(}u_{|E|-1},v_{|E|-1}\big{)}\,,\,\nu_{|E|-1}\,,\,m_{|E|-1}^{2} \big{)}\,. \tag{37}\]
The notation \((u_{e},v_{e})\) denotes an edge \(e\) incident to the vertices \(u_{e}\) and \(v_{e}\). We therefore write
edges = [((0,1), 1,'mm'), ((1,3), 1,'mm'), ((2,3), 1,'mm'), ((2,0), 1,'mm'), ((0,3), 1,'mm')]
in the code to input the graph which is depicted above. The ordering of vertices \(\left(u_{e},v_{e}\right)\) in an edge is insignificant. Here we set \(\nu_{e}=1\) for all \(e\). The chosen symbol for \(m^{2}\) is mm, which will be replaced by its value \(0.2\) later on. It is also allowed to input numerical values for masses already in the edges list, for instance by replacing the first element of the list by ((0,1), 1, '0.2').
Next we fix the momentum variables. Recall that the external vertices are required to be labeled \(\{0,1,\ldots,|V^{\mathrm{ext}}|-1\}\), so the external momenta are \(p_{0},\ldots,p_{|V^{\mathrm{ext}}|-1}\). Moreover, the last momentum is inferred automatically by feyntrop using momentum conservation, leaving \(p_{0},\ldots,p_{|V^{\mathrm{ext}}|-2}\) to be fixed by the user. A momentum configuration is then specified by the collection of scalar products,
\[p_{u}\cdot p_{v}\text{ for all }0\leq u\leq v\leq|V^{\mathrm{ext}}|-2\,. \tag{38}\]
In the code, we must provide replacement rules for these scalar products in terms of some variables of choice. For the example at hand, \(|V^{\mathrm{ext}}|=3\), so we must provide replacement rules for \(p_{0}^{2}\), \(p_{1}^{2}\) and \(p_{0}\cdot p_{1}\). In the syntax of feyntrop we thus write
replacement_rules = [(sp[0,0], '0'), (sp[1,1], '0'), (sp[0,1], 'pp2/2')]
where sp[u,v] stands for \(p_{u}\cdot p_{v}\), the scalar product of \(p_{u}\) and \(p_{v}\). We have immediately set \(p_{0}^{2}=p_{1}^{2}=0\) and also defined a variable pp2 which stands for \(p_{2}^{2}\), as, by momentum conservation,
\[p_{2}^{2}=2p_{0}\cdot p_{1}\,. \tag{39}\]
Finally, we fix numerical values for the two auxiliary variables pp2 and mm. This is done via

phase_space_point = [('mm', 0.2), ('pp2', 1)]

which fixes \(m^{2}=0.2\) and \(p_{2}^{2}=1\). It is possible to obtain the \(\mathcal{P}^{u,v}\) matrix (as defined in Section 2.1) and a list of all the propagator masses, which are computed from the previously provided data, by

P_uv_matrix, m_sqr_list = prepare_kinematic_data(edges, replacement_rules, phase_space_point)
The final pieces of data that need to be provided are
D0 = 2
eps_order = 5
Lambda = 7.6
N = int(1e7)

D0 is the integer part of the spacetime dimension \(D=D_{0}-2\epsilon\). We expand up to, but not including, eps_order. Lambda denotes the deformation tuning parameter from (9). N is the number of Monte Carlo sampling points.
Tropical Monte Carlo integration of the Feynman integral, with the kinematic configuration chosen above, is now performed by running the command
trop_res, Itr = tropical_integration( N, D0, Lambda, eps_order, edges, replacement_rules, phase_space_point)
If the program runs correctly (i.e. no error is printed), trop_res will contain the \(\epsilon\)-expansion (16) _without_ the prefactor \(\Gamma(\omega)/(\Gamma(\nu_{1})\cdots\Gamma(\nu_{|E|}))=\Gamma(2\epsilon+3)\). Itr is the value of the normalization factor in (22). Running this code on a laptop, we get, after a couple of seconds, the output
Prefactor: gamma(2*eps + 3).
(Effective) kinematic regime: Minkowski (generic).
Generalized permutahedron property: fulfilled.
Analytic continuation: activated.
Lambda = 7.6
Started integrating using 8 threads and N = 1e+07 points.
Finished in 6.00369 seconds = 0.00166769 hours.
-- eps^0: [-46.59 +/- 0.13] + i * [ 87.19 +/- 0.12]
-- eps^1: [-274.46 +/- 0.55] + i * [111.26 +/- 0.55]
-- eps^2: [-435.06 +/- 1.30] + i * [-174.47 +/- 1.33]
-- eps^3: [-191.72 +/- 2.15] + i * [-494.69 +/- 2.14]
-- eps^4: [219.15 +/- 2.68] + i * [-431.96 +/- 2.67]

These printed values for the \(\epsilon\) expansion are contained in the list trop_res in the following format:
\[\left[\left(\left(\mathrm{re}_{0},\,\sigma_{0}^{\mathrm{re}}\right),\,\left( \mathrm{im}_{0},\sigma_{0}^{\mathrm{im}}\right)\right),\,\ldots,\left(\left( \mathrm{re}_{4},\sigma_{4}^{\mathrm{re}}\right),\,\left(\mathrm{im}_{4},\sigma _{4}^{\mathrm{im}}\right)\right)\right],\]
where \(\mathrm{re}_{0}\pm\sigma_{0}^{\mathrm{re}}\) is the real part of the 0th order term, and so forth.
The \(\epsilon\)-expansion, with prefactor included, can finally be output via
eps_expansion(trop_res, edges, D0)
giving
174.3842115*i - 93.17486662 + eps*(-720.8731714 + 544.3677186*i) + eps**2*(-2115.45025 + 496.490128*i) + eps**3*(-3571.990969 - 677.5254794*i) + eps**4*(-3872.475723 - 2726.965026*i) + O(eps**5)
If the tropical_integration command fails, for instance because a subdivergence of the input graph is detected, it prints an error message. The command also prints a warning if the kinematic point is too exceptional and convergence cannot be guaranteed due to the \(\mathcal{F}\) polynomial lacking the generalized permutahedron property (see Section 3.3).
### Deformation tuning parameter
The uncertainties on the integrated result may vary greatly with the value of the deformation parameter \(\lambda\) from (9) (what was called Lambda above). Moreover, the optimal value of \(\lambda\) might change depending on the phase space point. It is up to the user to pick a suitable value by trial and error, for instance by integrating several times with a low number of sampling points \(N\). It would be beneficial to automate this procedure, possibly by minimizing the sampling variance with respect to \(\lambda\), for instance by solving \(\partial_{\lambda}\sigma_{f}=0\) with \(\sigma_{f}\) defined in (31), or by tightening the bounds in Assumption 3.2, as discussed in the paragraph following that assumption.
Note that \(\lambda\) has mass dimension \(1/\mathrm{mass}^{2}\). Heuristically, this implies that the value of \(\lambda\) should be of order \(\mathcal{O}(1/\Lambda^{2})\), where \(\Lambda\) is the maximum physical scale in the given computation.
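Until such an automated choice is available, a coarse scan at low statistics is a practical substitute. The helper below is our own sketch built on the tropical_integration interface described above; only its return format, as shown in the tutorial, is assumed.

```
# Pick lambda by a coarse scan at low statistics, then integrate with high N.
def scan_lambda(candidates, N_low, D0, eps_order,
                edges, replacement_rules, phase_space_point):
    best_lam, best_err = None, None
    for lam in candidates:
        res, _ = tropical_integration(N_low, D0, lam, eps_order,
                                      edges, replacement_rules, phase_space_point)
        # total uncertainty: sum of the real and imaginary errors of all orders
        err = sum(re_err + im_err for (_, re_err), (_, im_err) in res)
        if best_err is None or err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# e.g.: lam = scan_lambda([0.1, 0.5, 1.0, 2.0, 5.0], int(1e5), D0, eps_order,
#                         edges, replacement_rules, phase_space_point)
```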
## 6 Examples of Feynman integral evaluations
In this section, we use feyntrop to numerically evaluate certain Feynman integrals of interest. The first two examples, 6.1 and 6.2, show that feyntrop is capable of computing Feynman integrals at high loop-orders involving many physical scales. The three examples that follow, 6.4, 6.3 and 6.5, show the potential for computing phenomenologically relevant diagrams. The final example, 6.6, is an invitation to study conformal integrals with our code, as they are important for, e.g., \(\mathcal{N}=4\) SYM and the cosmological bootstrap.
We have chosen phase space points which are not close to thresholds to ensure good numerical convergence, and expand up to and including \(\epsilon^{2L}\) in all but the last example.
Every example was computed on a single AMD EPYC 7H12 64-core processor using all cores.
The code for each example can be found on the github repository in the folder examples.
### A 5-loop 2-point zigzag diagram
We evaluate the following 5-loop 2-point function with all masses different in \(D=3-2\epsilon\) dimensions
corresponding to the edge set
edges = [((0,6), 1, '1'), ((0,5), 1, '2'), ((5,6), 1, '3'), ((6,4), 1, '4'), ((5,3), 1, '5'), ((5,4), 1, '6'), ((4,3), 1, '7'), ((4,2), 1, '8'), ((3,2), 1, '9'), ((3,1), 1, '10'), ((2,1), 1, '11')]

Here we already input the chosen values for masses, namely \(m_{e}^{2}=e+1\) for \(e=0,\ldots,10\).
There is only a single independent external momentum \(p_{0}\), whose square we set equal to 100 via
replacement_rules = [(sp[0,0], 'pp0')]
phase_space_point = [('pp0', 100)]

The value \(\lambda=0.02\), which is of order \(\mathcal{O}(1/p_{0}^{2})\) in accordance with the comment at the end of the previous section, turns out to give small errors. Using \(N=10^{12}\) Monte Carlo sampling points, feyntrop's tropical_integration command gives
Prefactor: gamma(5*eps + 7/2).
(Effective) kinematic regime: Minkowski (generic).
Finished in 20 hours.
-- eps^0: [0.000196885 +/- 0.000000032] + i * [0.000140824 +/- 0.000000034]
-- eps^1: [-0.00493791 +/- 0.00000040 ] + i * [-0.00079691 +/- 0.00000038 ]
-- eps^2: [ 0.0491933 +/- 0.0000025 ] + i * [-0.0154647 +/- 0.0000025 ]
-- eps^3: [ -0.253458 +/- 0.000012 ] + i * [ 0.246827 +/- 0.000012 ]
-- eps^4: [ 0.587258 +/- 0.000046 ] + i * [ -1.720213 +/- 0.000046 ]
-- eps^5: [ 1.05452 +/- 0.00015 ] + i * [ 7.38725 +/- 0.00015 ]
-- eps^6: [ -14.66144 +/- 0.00047 ] + i * [ -20.86779 +/- 0.00046 ]
-- eps^7: [ 65.8924 +/- 0.0013 ] + i * [ 35.0793 +/- 0.0013 ]
-- eps^8: [ -190.9702 +/- 0.0036 ] + i * [ -4.4620 +/- 0.0034 ]
-- eps^9: [ 393.2522 +/- 0.0092 ] + i * [ -183.7431 +/- 0.0087 ]
-- eps^10: [ -558.202 +/- 0.023 ] + i * [ 688.556 +/- 0.021 ]
### A 3-loop 4-point envelope diagram
Here, we evaluate a \(D=4-2\epsilon\) dimensional, non-planar, 3-loop 4-point, envelope diagram:
The dots on the crossed lines represent squared propagators, i.e. edge weights equal to 2, rather than vertices. The weighted edge set with corresponding mass variables is thus
edges = [((0,1), 1,'mm0'), ((1,2), 1,'mm1'), ((2,3), 1,'mm2'), ((3,0), 1,'mm3'), ((0,2), 2,'mm4'), ((1,3), 2,'mm5')]
Let us define the two-index Mandelstam variables \(s_{ij}=(p_{i}+p_{j})^{2}\), which are put into feyntrop's replacement rules in the form (sp[i,j], '(sij - ppi - ppj)/2') for \(0\leq i<j\leq 2\). The chosen phase space point is
\[p_{0}^{2}=1.1\,,\quad p_{1}^{2}=1.2\,,\quad p_{2}^{2}=1.3\,,\quad s _{01}=2.1\,,\quad s_{02}=2.2\,,\quad s_{12}=2.3\,, \tag{40}\] \[m_{0}^{2}=0.05\,,\quad m_{1}^{2}=0.06\,,\quad m_{2}^{2}=0.07\,, \quad m_{3}^{2}=0.08\,,\quad m_{4}^{2}=0.09\,,\quad m_{5}^{2}=0.1\,.\]
With additional settings \(\lambda=1.24\) and \(N=10^{12}\), we find
Prefactor: gamma(3*eps + 2).
(Effective) kinematic regime: Minkowski (generic).
Finished in 10 hours.
-- eps^0: [-10.828047 +/- 0.000086] + i * [-12.723347 +/- 0.000083]
-- eps^1: [ 48.03624 +/- 0.00059 ] + i * [-105.05842 +/- 0.00060 ]
-- eps^2: [ 413.2390 +/- 0.0023 ] + i * [ 7.4881 +/- 0.0024 ]
-- eps^3: [ 372.1192 +/- 0.0066 ] + i * [ 948.6804 +/- 0.0065 ]
-- eps^4: [-1413.553 +/- 0.015 ] + i * [ 1327.629 +/- 0.015 ]
-- eps^5: [-2730.317 +/- 0.027 ] + i * [-1293.375 +/- 0.027 ]
-- eps^6: [ 279.016 +/- 0.043 ] + i * [-3983.291 +/- 0.043 ]
### A 2-loop 4-point \(\mu e\)-scattering diagram
We evaluate a non-planar, 2-loop 4-point diagram appearing in muon-electron scattering [73], which is finite in \(D=6-2\epsilon\) dimensions. It was previously evaluated for vanishing electron mass in [74].
The dashed lines represent photons, the solid lines are electrons with mass \(m\), and the double lines are muons with mass \(M\) (which is approximately 200 times larger than \(m\)). The edge set is
edges = [((0,1), 1, '0'), ((0,4), 1, 'MM'), ((1,5), 1,'mm'), ((5,2), 1,'mm'), ((5,3), 1, '0'), ((4,3), 1, 'MM'), ((4,2), 1, '0')]
where MM and mm stand for \(M^{2}\) and \(m^{2}\) respectively. With a phase space point similar to that of [74, Sec. 4.1.2]
\[p_{0}^{2} =M^{2}=1\,,\quad p_{1}^{2}=p_{2}^{2}=m^{2}=1/200\,,\quad s_{01}=-1/7\,, \tag{41}\] \[s_{12} =-1/3\,,\quad s_{02}=2M^{2}+2m^{2}-s_{01}-s_{12}=2.49\]
and settings \(\lambda=1.29\,,\,N=10^{11}\,,\) the result becomes
Prefactor: gamma(2*eps + 1).
(Effective) kinematic regime: Minkowski (exceptional).
Finished in 1.4 hours.
-- eps^0: [1.164498 +/- 0.000025] + i * [0.241199 +/- 0.000020]
-- eps^1: [5.52941 +/- 0.00024 ] + i * [2.27705 +/- 0.00021 ]
-- eps^2: [15.1155 +/- 0.0015 ] + i * [10.0344 +/- 0.0014 ]
-- eps^3: [27.8505 +/- 0.0071 ] + i * [27.9538 +/- 0.0068 ]
-- eps^4: [37.970 +/- 0.029 ] + i * [ 56.310 +/- 0.028 ]

Note that the momentum configuration is exceptional, so we cannot be sure that the generalized permutahedron property holds - see section 3.3. Nevertheless, we are able to confirm this result by comparing with AMFlow.
The leading order term differs from [74, eq. (4.20)] by roughly 10% due to our inclusion of the electron mass. We do, however, reproduce the result of the former citation when setting \(m=0\) in feyntrop.
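As a quick cross-check of the kinematic input in eq. (41), momentum conservation for the four external legs (two muon legs with \(p^2=M^2\) and two electron legs with \(p^2=m^2\)) implies \(s_{01}+s_{02}+s_{12}=p_0^2+p_1^2+p_2^2+p_3^2\); a few lines reproduce the quoted value of \(s_{02}\):

```python
# Cross-check of s02 in eq. (41): for four external momenta summing to zero,
# s01 + s02 + s12 = p0^2 + p1^2 + p2^2 + p3^2, with p0^2 = p3^2 = M^2 and
# p1^2 = p2^2 = m^2 for this diagram.
from fractions import Fraction as F

MM, mm   = F(1), F(1, 200)     # M^2 and m^2
s01, s12 = F(-1, 7), F(-1, 3)

s02 = 2*MM + 2*mm - s01 - s12
print(float(s02))              # 2.486..., i.e. the 2.49 of eq. (41)
```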
### A QCD-like, 2-loop 5-point diagram
This example is a QCD-like, \(D=6-2\epsilon\) dimensional, 2-loop 5-point diagram:
The dashed lines represent gluons, the solid lines are quarks each with mass \(m\), and the double line is some off-shell momentum \(p_{4}^{2}\neq 0\) fixed by conservation. The edge data are
edges = [((0,1), 1, '0'), ((1,2), 1,'mm'), ((2,6), 1, '0'), ((6,3), 1,'mm'), ((3,4), 1, '0'), ((4,5), 1,'mm'), ((5,0), 1, '0'), ((5,6), 1,'mm')]
where mm stands for \(m^{2}\). Let us choose the phase space point
\[p_{0}^{2} =0\,,\quad p_{1}^{2}=p_{2}^{2}=p_{3}^{2}=m^{2}=1/2\,,\quad s_{01}=2.2\,,\quad s_{02}=2.3\,, \tag{42}\] \[s_{03} =2.4\,,\quad s_{12}=2.5\,,\quad s_{13}=2.6\,,\quad s_{23}=2.7\,,\]
where again \(s_{ij}=(p_{i}+p_{j})^{2}\). Finally, setting \(\lambda=0.28\,,\,N=10^{10}\,,\) we obtain
Prefactor: gamma(2*eps + 2).
(Effective) kinematic regime: Minkowski (exceptional).
Finished in 0.18 hours.
-- eps^0: [0.064389 +/- 0.000051] + i * [-0.082620 +/- 0.000063]
-- eps^1: [0.40382 +/- 0.00038 ] + i * [ 0.31900 +/- 0.00045 ]
-- eps^2: [-0.7790 +/- 0.0017 ] + i * [ 0.9359 +/- 0.0016 ]
-- eps^3: [-1.3253 +/- 0.0053 ] + i * [ -1.2198 +/- 0.0035 ]
-- eps^4: [ 1.378 +/- 0.012 ] + i * [ -1.2276 +/- 0.0071 ]
Observe that the kinematic configuration is once again exceptional. We have not been able to verify this example against existing numerical codes. Nevertheless, we are confident in the result above, given that every other exceptional example that we have been able to check has turned out correct.
### A QED-like, 4-loop vacuum diagram
Next we evaluate a QED-like, 4-loop vacuum diagram in \(D=4-2\epsilon\) dimensions:
The dashed lines represent photons, and the solid lines are electrons of mass \(m\). No analytic continuation is required in this case since there are no external momenta - the final result should hence be purely real. We specify
replacement_rules = [] in the code to indicate that all scalar products are zero.
The collection of edges is
edges = [((0,1), 1,'mm'), ((1,2), 1,'mm'), ((2,0), 1,'mm'), ((0,5), 1, '0' ), ((1,4), 1, '0' ), ((2,3), 1, '0' ), ((3,4), 1,'mm'), ((4,5), 1,'mm'), ((5,3), 1,'mm')]

where mm stands for \(m^{2}\). Choosing
phase_space_point = [('mm', 1)]

and setting \(\lambda=0\,,\,N=10^{11}\,,\) we then find
Prefactor: gamma(4*eps + 1).
(Effective) kinematic regime: Euclidean (generic).
Finished in 0.70 hours.
-- eps^0: [3.019421 +/- 0.000015] + i * [0.0 +/- 0.0]
-- eps^1: [-7.063568 +/- 0.000067] + i * [0.0 +/- 0.0]
-- eps^2: [20.53898 +/- 0.00023 ] + i * [0.0 +/- 0.0]
-- eps^3: [-27.85523 +/- 0.00075 ] + i * [0.0 +/- 0.0]
-- eps^4: [62.0516 +/- 0.0023 ] + i * [0.0 +/- 0.0]
-- eps^5: [-59.2937 +/- 0.0075 ] + i * [0.0 +/- 0.0]
-- eps^6: [155.268 +/- 0.025 ] + i * [0.0 +/- 0.0]
-- eps^7: [ -90.452 +/- 0.083 ] + i * [0.0 +/- 0.0]
-- eps^8: [404.61 +/- 0.28 ] + i * [0.0 +/- 0.0]
### An elliptic, conformal, 4-point integral
The final example is a 1-loop 4-point conformal integral with edge weights \(\nu_{1,\ldots,4}=1/2\) in \(D=2\) dimensions, the result of which was computed in terms of elliptic \(K\) functions in [75, Sec. 7.2]:
\[\frac{4}{\sqrt{-p_{2}^{2}}}\left[K(z)K(1-\bar{z})+K(\bar{z})K(1-z)\right] \tag{43}\]
The denominator above differs from [75, eq. (7.6)] because we have used conformal symmetry to send \(x_{3}\to\infty\), thereby reducing the kinematic space to that of a 3-point integral. After identifying dual momentum variables \(x_{i}\) in terms of ordinary momenta as \(p_{i}=x_{i}-x_{i+1}\), the conformal cross ratios, with the usual single-valued complex parameterization in terms of \(z\) and \(\bar{z}\), read
\[z\bar{z}=\frac{p_{0}^{2}}{p_{2}^{2}}\,,\quad(1-z)(1-\bar{z})=\frac{p_{1}^{2}}{ p_{2}^{2}}\,. \tag{44}\]
In feyntrop we specify the associated 1-loop 3-point momentum space integral as
edges = [((0,1), 1/2, '0'), ((1,2), 1/2, '0'), ((2,0), 1/2, '0')]
where all internal masses are zero and edge weights are set to \(1/2\).
We choose a momentum configuration in the Euclidean regime:
\[p_{0}^{2}=-2\,,\quad p_{1}^{2}=-3\,,\quad p_{2}^{2}=-5\,. \tag{45}\]
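For this kinematic point, the relations (44) determine \(z\) and \(\bar{z}\) as the two roots of a quadratic; a short computation gives the complex-conjugate pair entering eq. (43):

```python
# Solve eq. (44) at the Euclidean point of eq. (45):
#   z*zbar = p0^2/p2^2 = 2/5 and (1-z)*(1-zbar) = p1^2/p2^2 = 3/5,
# hence z + zbar = 1 + z*zbar - (1-z)*(1-zbar), and z, zbar solve
#   t^2 - (z+zbar)*t + z*zbar = 0.
import numpy as np

prod = 2 / 5                    # z * zbar
tot  = 1 + prod - 3 / 5         # z + zbar
z, zbar = np.roots([1, -tot, prod])
print(z, zbar)                  # 0.4 +/- 0.4899j, a complex-conjugate pair
```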
Although feyntrop can compute integrals with rational edge weights in the Minkowski regime, it is most natural to study conformal integrals in the Euclidean regime.
With \(\lambda=0\) and \(N=10^{9}\), we then obtain
(Effective) kinematic regime: Euclidean (generic).
Finished in 11.6 seconds.
-- eps^0: [9.971685 +/- 0.000084] + i * [0.0 +/- 0.0]
The result agrees with the analytic expression (43). This example also illustrates the high efficiency of feyntrop in the Euclidean regime, where very high accuracies can be obtained quickly.
## 7 Conclusions and outlook
With this article we introduced feyntrop, a general tool to numerically evaluate quasi-finite Feynman integrals in the physical regime with sufficiently general kinematics. To do so, we gave a detailed classification of different kinematic regimes that are relevant for numerical integration. Moreover,
we presented a completely projective integral expression for concretely \(i\varepsilon\)-deformed Feynman integrals and their dimensionally regularized expansions. We used tropical sampling for the numerical integration, which we briefly reviewed, and we discussed the relevant issues on facet presentations of the Newton polytopes of Symanzik polynomials in detail. To be able to perform the numerical integration efficiently, we gave formulas and algorithms for the fast evaluation of Feynman integrals. To give a concise usage manual for feyntrop and to illustrate its capabilities, we gave numerous, detailed examples of evaluated Feynman integrals.
The most important restrictions of feyntrop are 1) it is not capable of dealing with Feynman integrals that have subdivergences (i.e. non-quasi-finite integrals) and 2) it is not capable of dealing with certain highly exceptional kinematic configurations.
The first restriction can be lifted by implementing an analytic continuation of the integrand in the spirit of [24, 25, 48] into feyntrop. Naively, preprocessing input integrals with such a procedure increases the number of Feynman integrals and thereby also the necessary computer time immensely. However, this proliferation of terms comes from the expansion of the derivatives of the \(\mathcal{U}\) and \(\mathcal{F}\) polynomials as numerators. This expansion can be avoided, because also the derivatives of \(\mathcal{U}\) and \(\mathcal{F}\) (mostly) have the generalized permutahedron property, and because we have fast algorithms to evaluate such derivatives. For instance, we derived a fast algorithm to evaluate the first and second derivatives of \(\mathcal{F}\) in Section 4.2. We postpone the elaboration and implementation of this approach to future work.
A promising approach to lift the second restriction is to try to understand the general shape of the \(\mathcal{F}\) polynomial's Newton polytope. Outside of the Euclidean and generic kinematic regimes, this polytope is not always a generalized permutahedron. In these exceptional kinematic situations, it can have new facets that cannot be explained by known facet presentations. It might be possible to explain these new facets with the help of the Coleman-Norton picture of infrared divergences [76] (see, e.g., [77] where explicit per-diagram factorization of Feynman integrals was observed in a position space based framework). An alternative approach to fix the issue is to implement the tropical sampling approach that requires a full triangulation of the respective Newton polytopes (see [15, Sec. 5]).
Besides this there are numerous, desirable, gradual improvements of feyntrop that we also postpone to future works. The most important such improvement would be to use the algorithm in conjunction with a _quasi-Monte Carlo_ approach. The runtime to obtain the value of an integral up to accuracy \(\delta\) currently scales as \(\delta^{-2}\), as is standard for a Monte Carlo method. Changing to a quasi-Monte Carlo based procedure would improve this scaling to \(\delta^{-1}\).
Another improvement would be to find an entirely _canonical_ deformation prescription. Currently, our deformation still relies on an external parameter that has to be fine-tuned to the respective integral. A canonical deformation prescription that does not depend on a free parameter would lift the burden of this fine-tuning from the user and would likely also produce better rates of convergence.
A more technical update of feyntrop would involve an implementation of the tropical sampling algorithm on GPUs or on distributed cluster systems. The current implementation of feyntrop is parallelized and can make use of all cores of a single computer. Running feyntrop on multiple computers in parallel is not implemented, but there are no technical obstacles to write such an implementation, which we postpone to a future research project.
## Acknowledgements
We thank Aaron Hillman, Sebastian Mizera and Erik Panzer for helpful exchanges on facet presentations of Newton polytopes of Symanzik polynomials, and Pierpaolo Mastrolia for stimulating discussions on applications to phenomenology. FT thanks Georgios Papathanasiou for continued support. HJM and FT thank the Institute for Theoretical Studies at the ETH Zurich for hosting the workshop 'Tropical and Convex Geometry and Feynman integrals', which was beneficial for the completion of this work. MB was supported by Dr. Max Rossler, the Walter Haefner Foundation and the ETH Zurich Foundation. Some of our calculations were carried out on the ETH Euler cluster.
|
2303.09962 | Adversarial Counterfactual Visual Explanations | Counterfactual explanations and adversarial attacks have a related goal:
flipping output labels with minimal perturbations regardless of their
characteristics. Yet, adversarial attacks cannot be used directly in a
counterfactual explanation perspective, as such perturbations are perceived as
noise and not as actionable and understandable image modifications. Building on
the robust learning literature, this paper proposes an elegant method to turn
adversarial attacks into semantically meaningful perturbations, without
modifying the classifiers to explain. The proposed approach hypothesizes that
Denoising Diffusion Probabilistic Models are excellent regularizers for
avoiding high-frequency and out-of-distribution perturbations when generating
adversarial attacks. The paper's key idea is to build attacks through a
diffusion model to polish them. This allows studying the target model
regardless of its robustification level. Extensive experimentation shows the
advantages of our counterfactual explanation approach over current
State-of-the-Art in multiple testbeds. | Guillaume Jeanneret, Loïc Simon, Frédéric Jurie | 2023-03-17T13:34:38Z | http://arxiv.org/abs/2303.09962v1 | # Adversarial Counterfactual Visual Explanations
###### Abstract
Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations regardless of their characteristics. Yet, adversarial attacks cannot be used directly in a counterfactual explanation perspective, as such perturbations are perceived as noise and not as actionable and understandable image modifications. Building on the robust learning literature, this paper proposes an elegant method to turn adversarial attacks into semantically meaningful perturbations, without modifying the classifiers to explain. The proposed approach hypothesizes that Denoising Diffusion Probabilistic Models are excellent regularizers for avoiding high-frequency and out-of-distribution perturbations when generating adversarial attacks. The paper's key idea is to build attacks through a diffusion model to polish them. This allows studying the target model regardless of its robustification level. Extensive experimentation shows the advantages of our counterfactual explanation approach over current State-of-the-Art in multiple testbeds.
## 1 Introduction
The research branch of explainable artificial intelligence has yielded remarkable results, gradually opening the machine learning black boxes. The production of counterfactual explanations (CE) has become one of the promising pipelines for explainability, especially in computer vision [25, 28, 49, 54]. As a matter of fact, CE are an intuitive way to expose how an input instance can be minimally modified to steer the desired change in the model's output. More precisely, CE answers the following: _what does \(X\) have to change to alter the prediction from \(Y\) to \(Y^{\prime}\)?_ From a user perspective, these explanations are easy to understand since they are concise and illustrated by examples. Henceforth, companies have adopted CE as an interpretation methodology to legally justify the decision-making of machine learning models [60]. To better appreciate the potential of CE, one may consider the following scenario: a client goes to a photo booth to take some ID photos, and the system claims the photos are invalid for such usage. Instead of performing random attempts to abide by the administration criteria, an approach based on CE could provide visual indications of what the client should fix.
The main objective of CE is to add minimalistic semantic changes in the image to flip the original model's prediction. Yet, these generated explanations must accomplish several objectives [28, 49, 60]. A CE must be _valid_, meaning that the CE has to change the prediction of the model. Secondly, the modifications have to be _sparse and proximal_ to the input data, targeting to provide simple and concise explanations. In addition, the CE method should be able to generate _diverse_ explanations. If a trait is the most important for a certain class among other features, diverse explanations should change this attribute most frequently. Finally, the semantic changes must be _realistic_. When the CE method inserts out-of-distribution artifacts in the input image, it is difficult to interpret whether the flipping decision was because of the inserted object or because of the shifting of the distribution, making the explanation unclear.
Adversarial attacks share a common goal with CE: flipping the classifier's prediction. For traditional and non-robust visual classifiers, generating these attacks on input instances creates imperceptible noise. Even though it has been shown that it contains meaningful changes [24] and that adversarial noise and counterfactual perturbations are related [13, 23], adversarial attacks have lesser value. Indeed, the modifications present in the adversaries are unnoticeable by the user and leave him with no real feedback.
Contrary to the previous observations, many papers (_e.g._, [47]) evidenced that adversarial attacks toward _robust_ classifiers generate semantic changes in the input images. This has led works [51, 70] to explore robust models to produce data using adversarial attacks. In the context of counterfactual explanations, this is advantageous [52, 5] because the optimization will produce semantic changes to induce the flipping of the label.
Then two challenges arise when employing adversarial attacks for counterfactual explanations. On the one hand, when studying a classifier, we must be able to explain its behavior regardless of its characteristics. So, a naive
application of adversarial attacks is impractical for non-robust models. On the other hand, according to [57], robustifying the classifier yields an implicit trade-off by lowering the _clean accuracy_, as it is referred to by the adversarial robustness community [10], a particularly crucial trait for high-stakes areas such as the medical field [40].
The previous remarks motivate our endeavor to mix the best of both worlds. Hence, in this paper, we propose robustifying brittle classifiers _without_ modifying their weights to generate CE. This robustification, obtained through a filtering preprocessing leveraging diffusion models [19], allows us to keep the performance of the classifier untouched and unlocks the production of CE through adversarial attacks.
We summarize the novelty of our paper as follows: (i) We propose Adversarial Counterfactual Explanations, ACE in short, a novel methodology based on adversarial attacks to generate semantically coherent counterfactual explanations. (ii) ACE performs competitively with respect to the other methods, beating previous state-of-the-art methods in multiple measurements across multiple datasets. (iii) We point out some defects of current evaluation metrics and propose ways to remedy their shortcomings. (iv) Finally, to show a use case of ACE, we study its meaningful and plausible explanations to understand the mechanisms of classifiers, and we show that its findings produce actionable modifications in real-world scenarios that flip the classifier decision.
Our code and models are available on GitHub.
## 2 Related Work
**Explainable AI.** The main dividing line between the different branches of explainable artificial intelligence stands between _Ad-Hoc_ and _Post-Hoc_ methods. The former promotes architectures that are interpretable by design [3, 4, 50, 21] while the latter considers analyzing existing models as they are. Since our setup lies among the Post-Hoc explainability methods, we spotlight that this branch splits into global and local explanations. The former explains the general behavior of the classifier, as opposed to a single instance for the latter. This work belongs to the latter. There are multiple local explanations methods, from which we highlight saliency maps [68, 61, 32, 35, 8, 26], concept attribution [33, 15, 31] and model distillation [55, 14]. Concisely, these explanations try to shed light on _how_ a model took a specific decision. In contrast, we focus on the on-growing branch of counterfactual explanations, which tackles the question: _what_ does the model uses for a forecast? We point out that some novel methods [63, 62, 17, 59] call themselves counterfactual approaches. Yet, these systems highlight regions between a pair of images without producing any modification.
**Counterfactual Explanations.** CE have taken momentum in recent years to explain model decisions. Some methods rely on prototypes [37] or deep inversion [56], while other works explore the benefits of other classification models for CE, such as Invertible CNNs [22] and Robust Networks [52, 5]. A common practice is using generative tools as they give multiple benefits when producing CE. In fact, using generation techniques is helpful to generate data in the image manifold. There are two modalities to produce CE using generative approaches. Many methods use conditional generation techniques [58, 37, 54] to fit what a classification model learns or how to control the perturbations. Conversely, unconditional approaches [67, 44, 30, 28, 49, 53] optimize the latent space vectors.
We'd like to draw attention to Jeanneret _et al_. [28]'s counterfactual approach, which uses a modified version of the guided diffusion algorithm to steer image generation towards a desired label. This modification affects the DDPM generation algorithm itself. In contrast, while we also use DDPM, we use it primarily as a regularizer before the classifier. Instead of controlling the generation process, we generate semantic changes using adversarial attacks directly on the image space, and then post-process the image using a standard diffusion model. Furthermore, we use a refinement stage to perform targeted edits only in regions of interest.
**Adversarial Attacks and their relationship with CE.** Adversarial attacks share the same main objective as counterfactual explanations: flipping the forecast of a target architecture. On the one hand, _white-box_ attacks [7, 10, 16, 27, 39, 42] leverage the gradients of the input image with respect to a loss function to construct the adversary. In addition, universal noises [41] are adversarial perturbations created for fooling many different instances. On the other hand, _black-box_ attacks [69, 48, 2] restrain their attack by checking merely the output of the model. Finally, Nie _et al_. [45] study DDPMs from a robustness perspective, disregarding the benefits of counterfactual explanations.
In the context of CE for visual models, the produced noises are indistinguishable for humans when the network does not have any defense mechanism, making them useless. This led works [1, 23, 46] to approach the relationship between these two research fields. Compared to previous approaches, we manage to leverage adversarial attacks to create semantic changes in undefended models and thus to explore their semantic weaknesses perceptually in the images, a difficult task due to the nature of the data.
## 3 Adversarial Counterfactual Explanations
The key contribution of this paper is our novel Adversarial Counterfactual Explanations (ACE) method. ACE produces counterfactual images in two steps, as seen in Figure 1. We briefly introduce these two steps here and detail them in the following sections.
_Step 1. Producing pre-explanation images (§3.1)._ Let \(L_{class}(x;y)\) be a function measuring the agreement
between the sample \(x\) and class \(y\). This function is typically the cross-entropy loss of the classifier we are studying with respect to \(y\). With ACE, generating the pre-explanation image of \((x,y)\) for the target class \(y^{\prime}\neq y\) consists in finding \(x^{\prime}\) minimizing \(L_{class}(F(x^{\prime});y^{\prime})\) using the adversarial attack as the optimizer. Here, \(F(x^{\prime})\) is a filtering function that constrains the attack to stay in the manifold of the training images. In a nutshell, the filtering process \(F\) robustifies the fragile classifier under examination to generate semantic changes _without_ modifying its weights.
_Step 2. Bringing the pre-explanations closer to the input images (§3.2)._ The pre-explanation generation constrains only those pixels in the image that are useful in switching the output label from \(y\) to \(y^{\prime}\). The rest of the pixels are only implicitly constrained by the design of \(F\). Accordingly, the purpose of this second step is to keep these non-explicitly constrained pixels identical to those of the input image.
### Pre-explanation generation with DDPMs
To avoid generating adversarial noise and producing useful semantics, the previously introduced function \(F\) should have two key properties. (i) Removing high-frequency information that traditional adversarial attacks generate. Indeed, these perturbations could change the classifier's decision without being actionable or understandable by a human. (ii) Producing in-distribution images without distorting the input image. This property seeks to maintain the image structures not involved in the decision-making process as similar as possible while avoiding giving misleading information to the user.
Denoising Diffusion Probabilistic Models [19], commonly referred to as DDPM or diffusion models, achieve these properties if used properly. On the one hand, each inference through the DDPM is a denoising process; in particular, it removes high-frequency signals. On the other hand, DDPMs generate in-distribution images.
As a reminder, DDPMs rely on two Markov chains, one inverse to the other. The forward chain _adds_ noise from a state \(t\) into \(t+1\) while the reverse chain _removes_ it from \(t+1\) to \(t\). Noting \(x_{t}\) the instance at time step \(t\), the forward chain is directly simulated from a clean instance \(x_{0}\) through
\[x_{t}=\sqrt{\bar{\alpha}_{t}}\,x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon,\ \epsilon\sim\mathcal{N}(0,I), \tag{1}\]
where \(\bar{\alpha}_{t}\) is a time-dependent constant. At inference, the DDPM produces a mean \(\mu_{t}(x_{t})\) and a deviation matrix \(\Sigma_{t}(x_{t})\). Using these variables, the next less noisy image is sampled from
\[x_{t-1}=\mu_{t}(x_{t})+\Sigma_{t}(x_{t})\,\epsilon,\ \epsilon\sim\mathcal{N}(0,I). \tag{2}\]
Thus, the DDPM denoising algorithm iterates the previous step until \(t=0\) arriving at an image without noise. Please refer to previous works [12, 19] for a thorough understanding of diffusion models.
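For concreteness, a minimal PyTorch-style sketch of the two operations above reads as follows; the denoiser `ddpm` returning the pair \((\mu_t,\Sigma_t)\) and the schedule `alpha_bar` are assumed to be given.

```python
import torch

def forward_noise(x0, t, alpha_bar):
    """Eq. (1): jump directly from the clean image x0 to noise level t."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps

def reverse_step(xt, t, ddpm):
    """Eq. (2): one reverse step t -> t-1, given a network that predicts the
    mean mu_t(x_t) and the (diagonal) deviation Sigma_t(x_t)."""
    mu, sigma = ddpm(xt, t)        # assumed interface of the denoiser
    eps = torch.randn_like(xt)
    return mu + sigma * eps
```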
**ACE pre-explanation generation.** Starting from a query image \(x\), we can obtain a filtered version by applying the forward DDPM process up to level \(\tau\) (Eq. 1) and then denoising it recursively thanks to the iterative DDPM denoising steps (Eq. 2) starting from level \(t=\tau\). In this case, to highlight the use of this intermediate step \(\tau\), we denote the diffusion filtering process as \(F=F_{\tau}\) (Figure 1(a)). Thus, we optimize the image through the DDPM filtering process, \(F_{\tau}\), before computing the classification loss. Hence,
Figure 1: **Pre-explanation Construction and Refinement** ACE generates the counterfactual explanation in a two-step sequence. Initially, (a) To generate semantic updates in the input image, the DDPM processes the instance before computing the loss function \(L_{class}(F_{\tau}(x^{\prime});y^{\prime})\), where \(y^{\prime}\) is the target label. To simplify the process, we omit the distance loss between the perturbed image \(x^{\prime}\) and the input image \(x\). Then, we compute the gradients with respect to \(x^{\prime}\) and update it using the adversarial attack. Finally, (b) we generate a binary mask using the magnitude’s difference between the explanation and input image to refine the pre-explanation using RePaint’s inpainting method.
we obtain the pre-explanations by optimizing
\[\operatorname*{argmin}_{x^{\prime}}L_{class}(F_{\tau}(x^{\prime});y^{\prime})+ \lambda_{d}\,d(x^{\prime},x) \tag{3}\]
using the adversarial attack of choice. Here, \(\lambda_{d}\) is a regularization constant and \(d\) a distance function.
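A minimal sketch of this optimization, reusing `forward_noise` and `reverse_step` from the previous snippet; the signed-gradient update, the \(\ell_2\) distance and all hyper-parameter values below are illustrative choices, not the exact configuration of the paper.

```python
import torch

def pre_explanation(x, y_target, classifier, ddpm, alpha_bar,
                    tau=5, steps=50, step_size=2/255, lambda_d=1.0):
    """Eq. (3): minimise L_class(F_tau(x'); y') + lambda_d * d(x', x) with an
    unbounded signed-gradient (PGD-like) attack; y_target holds the class
    indices of the counterfactual label."""
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # F_tau(x'): noise the current iterate up to level tau, then denoise back
        xt = forward_noise(x_adv, tau, alpha_bar)
        for t in range(tau, 0, -1):
            xt = reverse_step(xt, t, ddpm)
        loss = torch.nn.functional.cross_entropy(classifier(xt), y_target) \
               + lambda_d * (x_adv - x).norm(p=2)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step_size * grad.sign()).detach().clamp(0, 1)  # [0,1] images assumed
    return x_adv
```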
### Bringing the pre-explanations closer to the input images
By limiting the value of \(\tau\), the DDPM will not go far enough to generate a normal distribution, and the reconstruction will somehow preserve the overall structure of the image. However, we noted that a post-processing phase could help keep irrelevant parts of the image untouched. For example, in the case of faces, the denoising process may change the hairstyle while targeting the smile attribute. Since hairstyle is presumably uncorrelated with the smile feature, the post-process should neutralize those unnecessary alterations.
To this end, we first compute a binary mask \(m\) delineating regions that qualify for modifications. To do so, we consider the magnitude difference between the pre-explanation and the original image; we then dilate this gray-scale image and threshold it, yielding the desired mask. This matter being settled, we need to fuse the CE inside the mask along with the input outside the mask to accomplish our objective.
With that aim, a natural strategy is using inpainting methods. So, we leverage RePaint's recent technique [38], originally designed for image completion, and adapt it to our _picture-in-picture_ problem (Figure 1(b)). This adaptation is straightforward and integrates very well with the rest of our framework. It starts from the noisy pre-explanations \(x_{\tau}\) and iterates the following altered denoising steps:
\[x_{t-1}=\mu_{t}(x^{\prime}_{t})+\Sigma_{t}(x^{\prime}_{t})\,\epsilon,\; \epsilon\sim\mathcal{N}(0,I), \tag{4}\]
where \(x^{\prime}_{t}=x_{t}\cdot m+x^{i}_{t}\cdot(1-m)\) is the raw collage of the current noisy reconstruction \(x_{t}\) and the noisy version of the initial instance \(x^{i}_{t}\) at the same noise level \(t\), obtained with Eq. 1. The final image, \(x_{0}\), will be our counterfactual explanation - identical to the input sample outside the mask, and very similar to the pre-explanation within the mask. In the supplementary material, we added an overview of ACE.
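A sketch of the complete refinement stage (mask construction plus the collage denoising of Eq. (4)), again reusing `forward_noise` from above; the threshold and the dilation size are illustrative values, not the ones used in the paper.

```python
import torch

def refine(x, pre_exp, ddpm, alpha_bar, tau=5, thresh=0.15, dilate=3):
    """RePaint-style refinement: build the binary mask m from the magnitude
    difference, then iterate the altered denoising step of Eq. (4)."""
    diff = (pre_exp - x).abs().mean(dim=1, keepdim=True)          # gray-scale difference
    diff = torch.nn.functional.max_pool2d(diff, dilate, stride=1,
                                          padding=dilate // 2)    # crude dilation
    m = (diff > thresh).float()

    x_t = forward_noise(pre_exp, tau, alpha_bar)                   # noisy pre-explanation x_tau
    for t in range(tau, 0, -1):
        x_in_t = forward_noise(x, t, alpha_bar)                    # noisy input at level t, Eq. (1)
        x_prime = x_t * m + x_in_t * (1 - m)                       # collage x'_t of Eq. (4)
        mu, sigma = ddpm(x_prime, t)
        x_t = mu + sigma * torch.randn_like(x_t)                   # sample x_{t-1}
    return x_t                                                     # counterfactual explanation x_0
```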
## 4 Experimentation
### Evaluation Protocols and Datasets
**Datasets.** In line with the recent literature on counterfactual images [28, 49, 54, 29], first, we evaluate ACE on CelebA [36], with images of size \(128\times 128\) and a DenseNet121 classifier [20], for the 'smile' and 'age' attributes. Following Jacob _et al_. [25], we experimented on CelebA HQ [34] and BDD100k [65]. CelebA HQ has a higher image resolution of \(256\times 256\). BDD100k contains complex traffic scenes as \(512\times 256\) images; the targeted attribute is 'forward' vs 'slow down'. The decision model is also a DenseNet121, trained on the BDD-IOA [64] extension dataset. Regarding the classifiers for which we want to generate counterfactuals, we took the pre-trained weights from the DiME [28] source for CelebA and from STEEX [25] for CelebA HQ and BDD100k, for fair comparisons.
**Evaluation criteria for quantitative evaluation.**
_Validity of the explanations_ is commonly measured with the Flip Rate (FR), i.e., how often the CE is classified as the targeted label.
_Diversity_ is measured by extending the diversity assessment from Mothilal _et al_. [43]. As suggested by Jeanneret _et al_. [28], the diversity is measured as the average LPIPS [66] distance between pairs of counterfactuals (\(\sigma_{L}\)).
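In code, this amounts to averaging a perceptual distance over all pairs of counterfactuals produced for the same input; `lpips_distance` below is a placeholder for any LPIPS implementation.

```python
import itertools

def diversity_score(counterfactuals, lpips_distance):
    """sigma_L sketch: mean LPIPS distance over all pairs of counterfactuals
    generated for one input; lpips_distance is a placeholder function."""
    pairs = list(itertools.combinations(counterfactuals, 2))
    return sum(lpips_distance(a, b) for a, b in pairs) / len(pairs)
```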
_Sparsity or proximity_ has been previously evaluated with several different metrics [49, 54], in the case of face images and face attributes. On the one hand, the mean number of attributes changed (MNAC) measures the smallest amount of traits changed between the input-explanation pair. Similarly, this metric leverages an oracle network pretrained on VGGFace2 [6] and then fine-tuned on the dataset. Further, Jeanneret _et al_. [28] showed the limitations of the MNAC evaluation and proposed the CD metric to account for the MNAC's limitations. On the other hand, to measure whether an explanation changed the identity of the input, the assessment protocol uses face verification accuracy [6] (FVA). To this end, the evaluation uses a face verification network. However, FVA has 2 main limitations: i) it can be applied to face related problems only, ii) it works at the level of classifier decisions which turns out to be too rough when comparing an image to its CE, as it involves only a minimal perturbation. For face problems, we suggest skipping the thresholding and consider the mean cosine distance between the encoding of image-counterfactual pairs, what we refer to as Face Similarity (FS). To tackle non-face images, we propose to extend FS by relying on self-supervised learning to encode image pairs. To this end, we adopted SimSiam [9] as an encoding network to measure the cosine similarity. We refer to this extension as SimSiam Similarity (\(\underline{S^{3}}\)). Finally, also for classifiers that are not related to faces, Khorram _et al_. [30] proposed COUT to measure the transition probabilities between the input and the counterfactual.
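Both FS and \(S^{3}\) reduce to a cosine similarity between embeddings of the input and its counterfactual; a sketch follows, where `encoder` stands for the face-verification network (FS) or a SimSiam backbone (\(S^{3}\)).

```python
import torch

def embedding_similarity(encoder, x, x_cf):
    """FS / S^3 sketch: mean cosine similarity between the encodings of
    input-counterfactual pairs."""
    with torch.no_grad():
        z, z_cf = encoder(x), encoder(x_cf)
    return torch.nn.functional.cosine_similarity(z, z_cf, dim=1).mean().item()
```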
_Realism of counterfactual images_[54] is usually evaluated by the research community with the FID[18] between the original set and the valid associated counterfactuals. We believe there is a strong bias as most of the pixels of counterfactuals are untouched and will dominate the measurement, as observed in our ablation studies (Sec. 4.6). To remove this bias, we split the dataset into two sets, generating the CE for one set and measuring the FID between the generated explanations and the other set, iterating this process ten times and taking the mean. We call this metric sFID.
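The sFID protocol can be summarized in a few lines; `compute_fid` is a placeholder for any standard FID implementation and `make_counterfactual` for the CE method under evaluation.

```python
import random

def sfid(images, make_counterfactual, compute_fid, repeats=10, seed=0):
    """sFID sketch: split the data in two halves, generate counterfactuals for
    one half, compare them via FID to the other half, and average over splits."""
    rng, scores = random.Random(seed), []
    for _ in range(repeats):
        shuffled = images[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        counterfactuals = [make_counterfactual(x) for x in shuffled[:half]]
        scores.append(compute_fid(counterfactuals, shuffled[half:]))
    return sum(scores) / len(scores)
```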
**Implementation details.** One of the main obstacles of diffusion models is transferring the gradients through all the iterations of the iterative denoising process. Fortunately, diffusion models enjoy a time-step re-spacing mechanism, allowing us to reduce the number of steps at the cost of a quality reduction. So, we drastically decreased the number of sampling steps to construct the pre-explanation. For CelebA [36], we instantiate the DDPM [12] model using DiME's [28] weights. In practice, we set \(\tau=5\) out of 50 steps. For CelebA HQ [34], we fixed the same \(\tau\), but re-spaced the time steps to 25. For BDD100k [65], we follow the same settings as STEEX [25]: we trained our diffusion model on the 10,000-image subset of BDD100k. To generate the explanations, we used 5 steps out of 100. Additionally, all our methods achieve a success ratio of 95% at minimum. We detail all instructions for each model on every dataset in the supplementary material. We adopted an \(\ell_{1}\) or \(\ell_{2}\) distance for the distance function. Finally, for the attack optimization, we chose PGD [39] without any bound and with 50 optimization steps.
### Comparison Against the State-of-the-Art
In this section, we quantitatively compare ACE against previous State-of-the-Art methods. To this end, we show the results for CelebA [36] and CelebA HQ [34] datasets in Table 1 and Table 2, respectively. Additionally, we experimented on the BDD100k [65] dataset (Table 3). To extend the study of BDD, we further evaluated our proposed approach on the BDD-IOA [64] validation set, also presented in Table 3. Since DiME [28] showed superior performance over the literature [29, 54, 49], we compare only to DiME.
DiME experimented originally on CelebA only. Hence, they did not tune their parameters for CelebA HQ and BDD100k. By running their default parameters, DiME achieves a flip rate of 41% in CelebA HQ. We fix this by augmenting the scale hyperparameter for their loss function. DiME's new success rate is 97% for CelebA HQ. For BDD100k, our results showed that using fewer steps improves the quality. Hence, we used 45 steps out of their re-spaced 200 steps. Unfortunately, we only managed to increase their success ratio to 90.5%.
These experiments show that the proposed methodology beats the previous literature on most metrics for all datasets. For instance, ACE, whatever the chosen distance, outmatches DiME on all metrics in CelebA. For the CelebA HQ, we noticed that DiME outperforms ACE only for the COUT and CD metrics. Yet, our proposed method remains comparable to theirs. For BDD100k, we remark that our method consistently outperforms DiME and STEEX.
Two additional phenomena stand out within these results. On the one hand, we observed that the benefit of favoring \(\ell_{1}\) over \(\ell_{2}\) depends on the characteristics of the target attribute. We noticed that the former generates sparser modifications, while the latter tends to generate broader editing. This makes us emphasize that different attributes require distinct modifications. On the other hand, these results validate the extensions for the FVA and FID metrics. Indeed, the difference between the FVA values on CelebA is small (from 98.3 to 99.9), yet the FS shows a major increase. Further, for the Age attribute on CelebA HQ, ACE \(\ell_{2}\) shows a better performance than DiME for the FID metric. The situation is reversed with sFID as DiME is slightly superior.
To complement our extensive experimentation, we tested ACE on a small subset of classes on ImageNet [11] with a ResNet50. We selected three pairs of categories for the assessment, and the task is to generate the CE targeting the contrary class. For the FID computation, we used only the instances from both categories but not external data since we are evaluating the in-class distribution.
We show the results in Table 4. Unlike the previous benchmarks, ImageNet is extremely complex and the classifier needs multiple factors for the decision-making process. Our results reflect this aspect. We believe that current advancements in CE still need an appropriate testbed to validate the methods in complex datasets such as ImageNet. For instance, the model uses the image's context for forecasting. So, choosing the target class without any previous information is unsound.
\begin{table}
\begin{tabular}{c|c c c c c c|c c c c c c} \hline \hline & \multicolumn{6}{c|}{**Smile**} & \multicolumn{6}{c}{**Age**} \\ \hline Method & FID & sFID & FVA & FS & MNAC & CD & COUT & FID & sFID & FVA & FS & MNAC & CD & COUT \\ \hline DiVE & 29.4 & - & 97.3 & - & - & - & 33.8 & - & 98.2 & - & 4.58 & - & - \\ DiVE\({}^{100}\) & 36.8 & - & 73.4 & - & 4.63 & 2.34 & - & 39.9 & - & 52.2 & - & 4.27 & - & - \\ STEEX & 10.2 & - & 96.9 & - & 4.11 & - & - & 11.8 & - & 97.5 & - & 3.44 & - & - \\ DiME & 3.17 & 4.89 & 98.3 & 0.729 & 3.72 & 2.30 & 0.5259 & 4.15 & 5.89 & 95.3 & 0.6714 & _3.13_ & 3.27 & 0.4442 \\ \hline ACE \(\ell_{1}\) & **1.27** & **3.97** & **99.9** & **0.874** & _2.94_ & _1.73_ & **0.7828** & **1.45** & **4.12** & **99.6** & _0.7817_ & 3.20 & _2.94_ & **0.7176** \\ ACE \(\ell_{2}\) & _1.90_ & _4.56_ & **99.9** & _0.867_ & **2.77** & **1.56** & _0.6235_ & _2.08_ & _4.62_ & **99.6** & **0.7971** & **2.94** & **2.82** & _0.5641_ \\ \hline \hline \end{tabular}
\end{table}
Table 1: **CelebA Assessment.** Main results for CelebA dataset. We extracted the results from DiME and STEEX papers. In **bold** and _italic_ we show the best and second-best performances. ACE outperforms all methods in every assessment protocol.
### Diversity Assessment
In this section, we explore ACE's ability to generate diverse explanations. Diffusion models are, by design, capable of generating distributions of images. Like [28], we take advantage of the stochastic mechanism to generate perceptually different explanations by merely changing the noise for each CE version. Additionally, for a fair comparison, we do not use the RePaint's strategy here because DiME does not have any local constraints and can, as well, change useless structures, like the background. To validate our approach, we follow [28] assessment protocol. Numerically, we obtain a diversity score of \(\sigma_{L}=0.110\) while DiME reports 0.213. Since DiME corrupts the image much more than ACE, the diffusion model has more opportunities to generate distinct instances. In contrast, we do not go deep into the forward noising chain to avoid changing the original class when performing the filtering.
To circumvent the relative lack of diversity, we vary the re-spacing at the refinement stage and the sampled noise. Note that later in the text, we show that using all steps without any re-spacing harms the success ratio. So, we set the new re-spacing such that it respects the accuracy of counterfactuals and adjusted the noise level \(\tau\) accordingly to maintain the ratio between \(\tau\) and the re-spaced number of sampling steps (\(5/50\) in this case). Our diversity score is then 0.1436. Nevertheless, DiME is better than ACE in terms of diversity, but this is at the expense of the other criteria, because its diversity comes, in part, from regions of the images that should not be modified (for example, the background).
### Qualitative Results
We show some qualitative results in Figure 2 for all datasets, including some ImageNet examples. From an attribute perspective, some have sparser or coarser characteristics. For instance, age characteristics cover a wider section of the face, while the smile attribute is mostly located in small regions of the image. Our qualitative results show that different distance losses impose different types of explanations. In this case, the \(\ell_{1}\) loss yields the most local and concrete explanations. On the other hand, the \(\ell_{2}\) loss generates coarser editing. This feature is desired for certain classes, but it is user-defined. Additionally, we note that the generated mask is useful to spot the location of the changes. This is advantageous as it exemplifies which changes were needed and where they were added. Most methods do not indicate the localization of the changes, making them hard to understand. In the supplementary material, we included more qualitative results.
### Actionability
Counterfactual explanations are expected to teach the user plausible modifications to change the classifier's prediction. In this section, we study a batch of counterfactual
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & FID & sFID & S\({}^{3}\) & COUT & FR \\ \hline \multicolumn{5}{|c|}{**Zebra – Sorrel**} \\ \hline ACE \(\ell_{1}\) & 84.5 & 122.7 & 0.9151 & -0.4462 & 47.0 \\ ACE \(\ell_{2}\) & 67.7 & 98.4 & 0.9037 & -0.2525 & 81.0 \\ \hline \multicolumn{5}{|c|}{**Cheeta – Cougar**} \\ \hline ACE \(\ell_{1}\) & 70.2 & 100.5 & 0.9085 & 0.0173 & 77.0 \\ ACE \(\ell_{2}\) & 74.1 & 102.5 & 0.8785 & 0.1203 & 95.0 \\ \hline \multicolumn{5}{|c|}{**Egyptian Cat – Persian Cat**} \\ \hline ACE \(\ell_{1}\) & 93.6 & 156.7 & 0.8467 & 0.2491 & 85.0 \\ ACE \(\ell_{2}\) & 107.3 & 160.4 & 0.7810 & 0.3430 & 97.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **ImageNet Assessment.** We test our model in ImageNet. We generated the explanations for three sets of classes. Producing CE for these classes remains a challenge.
\begin{table}
\begin{tabular}{c|c c c c c c|c c c c c c} \hline \hline & \multicolumn{8}{c|}{**Smile**} & \multicolumn{8}{c}{**Age**} \\ \hline Method & FID & sFID & FVA & FS & MNAC & CD & COUT & FID & sFID & FVA & FS & MNAC & CD & COUT \\ \hline DiVE & 107.0 & - & 35.7 & - & 7.41 & - & - & 107.5 & - & 32.3 & - & 6.76 & - & - \\ STEEX & 21.9 & - & _97.6_ & - & 5.27 & - & - & 26.8 & - & 96.0 & - & 5.63 & - & - \\ DiME & 18.1 & 27.7 & 96.7 & 0.6729 & 2.63 & **1.82** & **0.6495** & 18.7 & _27.8_ & _95.0_ & 0.6597 & 2.10 & _4.29_ & **0.5615** \\ \hline ACE \(\ell_{1}\) & **3.21** & **20.2** & **100.0** & **0.8941** & **1.56** & 2.61 & 0.5496 & **5.31** & **21.7** & **99.6** & **0.8085** & **1.53** & 5.4 & 0.3984 \\ ACE \(\ell_{2}\) & _6.93_ & _22.0_ & **100.0** & _0.8440_ & _1.87_ & _2.21_ & _0.5946_ & _16.4_ & 28.2 & **99.6** & _0.7743_ & _1.92_ & **4.21** & _0.5303_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: **CelebAHQ Assessment.** Main results for CelebA HQ dataset. We extracted the results from STEEX’s paper. In **bold** and _italic_ we show the best and second-best performances, respectively. ACE outperforms most methods in many assessment protocols.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & FID & sFID & S\({}^{3}\) & COUT & FR \\ \hline \multicolumn{5}{|c|}{**BDD-OIA**} \\ \hline DiME & 13.70 & 26.06 & 0.9340 & 0.3188 & 91.68 \\ ACE \(\ell_{1}\) & **2.09** & **22.13** & **0.9980** & _0.7404_ & 99.91 \\ ACE \(\ell_{2}\) & _3.3_ & _22.75_ & _0.9949_ & **0.7840** & 100.0 \\ \hline \multicolumn{5}{|c|}{**BDD100k**} \\ \hline STEEX & 58.8 & - & - & - & 99.5 \\ DiME & 7.94 & 11.40 & 0.9463 & 0.2435 & 90.5 \\ ACE \(\ell_{1}\) & **1.02** & **6.25** & **0.9970** & _0.7451_ & 99.9 \\ ACE \(\ell_{2}\) & _1.56_ & _6.53_ & _0.9946_ & **0.7875** & 99.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **BDD Assessment.** Main results for BDDOIA and BDD100k datasets. We extracted STEEX’s results from their paper. In **bold** and _italic_ we show the best and second-best performances, respectively.
input tuples generated with our method. If ACE is capable of creating useful counterfactual explanations, we should be able to understand some weaknesses or some behaviors of our classifier. Additionally, we should be able to fool the classifier by creating the necessary changes in real life. To this end, we studied the CelebA HQ classifier for the age and smile attributes.
After surveying some images and their explanations, we identified two interesting results (Figure 3). Many of the counterfactual explanations changing from 'young' to 'old' evidence that frowning could change the prediction of the classifier. So, we tested this hypothesis in real life. We took a photo of one individual before and after frowning, without changing the scenery. We were successful and managed to change the prediction of the classifier. For smile, we identified a spurious correlation. Our counterfactuals show that the classifier uses the morphological trait of high cheekbones, as well as red cheeks, to classify someone as smiling. So, we tested whether the classification model wrongly predicts as smiling someone with high cheekbones even when this person is not smiling. We also tested whether we can enhance it with some red makeup on the cheeks. Effectively, our results show that having high cheekbones is a realistic adversarial feature toward the smiling attribute for the classifier. Also, the classifier confidence (probability) can be strengthened by adding some red makeup on the cheeks. These examples demonstrate the applicability of ACE in real scenarios.
### Ablation Studies
In this section, we scrutinize the differences between the pre-explanation and the refined explanations. Then we explore the effects of using other types of adversarial attacks. Finally, we show that the \(S^{3}\) metric gives similar results to the FS metric, as a sanity check.
**Pre-Explanation _vs_ Counterfactual Explanations.** We explore here, quantitatively and qualitatively, the effects of the pre-explanations (Pre-CE). Also, we apply the diffusion model for the explanations with and without the re-spacing method, using no inpainting strategy, referred as FR-CE and F-CE, respectively. Finally, we compare them against the complete model (ACE). To quantitatively compare all versions, we conducted this ablation study on the CelebA dataset for both'smile' and 'age' attributes. We assessed the components using the FID, sFID, MNAC, CD, and FR metrics. We did not include the FVA or FS metrics, as these values did not vary much and do not provide insightful information; the FVA is \(\sim\)99.9 and FS \(\sim\)0.87 for all versions.
We show the results in Table 5. We observe that pre-explanations have a low FID. Nonetheless, their sFID is
Figure 3: **Actionability.** From browsing our counterfactuals, we found two weaknesses of the scrutinized classifier. Row 1: We tested if a frown could change the classification from young to old. Row 2: we checked if having high cheekbones flipped is enough to classify someone as smiling. Both experiments were successful.
Figure 2: **Qualitative Results.** ACE create sparse but realistic changes in the input image. Further, ACE enjoys from the generate mask, which helps in understanding which and where semantic editing were added. The first row displays the input images, the second one the counterfactual explanations and the third the corresponding mask.
worse than the F-CE version. As said before, we noticed that including both input and counterfactual in the FID assessment introduces a bias in the final measurement, and this experiment confirms this phenomenon. Additionally, one can check that the MNAC metric between the pre-explanation and the FR-CE version does not vary much, yet, the CD metric for the FR-CE is much better. This evidences that the generative model can capture the dependencies between the attributes. Also, we notice that the flip rate (FR) is much lower when using all diffusion steps instead of the re-spaced alternative. We expected this behavior, since we create the pre-explanation to change the classifier's prediction with re-spaced time steps within the DDPM.
Qualitatively, we point to Figure 4, where we exemplify the various stages of ACE. For instance, we see that the pre-explanation contains out-of-distribution artifacts and how the refinement sends it back to the image distribution. Also, we highlight that the filtering modifies the hair, which is not an important trait for the classifier. The refinement is key to avoid editing these regions.
**Effect of Different Adversarial Attacks.** At the core of our optimization, we have the PGD attack. PGD is one of the most common attacks due to its strength. In this section, we explore the effect of incorporating other attacks. Thus, we tested C&W [7] and the standard gradient descent (GD). Note that the difference between PGD and GD is that GD does not apply the \(sign\) operation.
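In update form, the only difference is whether the sign of the gradient or the raw gradient is applied (the step size below is illustrative):

```python
def attack_step(x, grad, step_size=2/255, signed=True):
    """One unbounded update step: PGD-style uses the sign of the gradient,
    plain gradient descent (GD) uses the raw gradient."""
    return x - step_size * (grad.sign() if signed else grad)
```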
Our results show that these attacks are capable of generating semantic changes in the image. Although they are as successful as the PGD attack, we require optimizing the pre-explanation for twice as many iterations. Even though our model is 3.6 times faster than [28], we still require about 500 DDPM iterations to generate an explanation.
**Validity of the \(S^{3}\) Metric.** In this experiment, we show that the \(S^{3}\) and the FS metrics are equivalent when used in the same test bed, _i.e._, CelebA HQ. To this end, we assess whether the ordering between ACE, pre-explanation, and DiME are equal. To have a reference value, we evaluate the measurements when using a pair of random images. So, we show the values (ordering) for both metrics in Table 6 for the Age and Smiling attribute. As we expect, the ordering is similar between both metrics. Nevertheless, we stress that FS is adequate for faces since the network was trained for this task.
## 5 Conclusion
In this paper, we proposed ACE, an approach to generate counterfactual explanations using adversarial attacks. ACE relies on DDPMs to enhance the robustness of the target classifier, allowing it to create semantic changes through adversarial attacks, even when the classifier itself is not robust. ACE has multiple advantages over the previous literature, notably seen in the counterfactual metrics. Moreover, we highlight that our explanations are capable of revealing natural features that lead to sparse and actionable modifications in
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Metric & Random & ACE & Pre-CE & DiME \\ \hline \hline \multicolumn{5}{|c|}{**Smile**} \\ \hline FS & 0.2649 (4) & 0.8941 (2) & 0.9200 (1) & 0.6729 (3) \\ \(S^{3}\) & 0.4337 (4) & 0.9876 (2) & 0.9927 (1) & 0.9396 (3) \\ \hline \multicolumn{5}{|c|}{**Age**} \\ \hline FS & 0.2649 (4) & 0.7743 (2) & 0.8300 (1) & 0.6597 (3) \\ \(S^{3}\) & 0.4337 (4) & 0.9417 (2) & 0.9870 (1) & 0.9379 (3) \\ \hline \hline \end{tabular}
\end{table}
Table 6: \(S^{3}\) **equivalence to FS.** The \(S^{3}\) metric and the FS are equivalent in a similar context. We show the metric and the order (in parentheses) and observe that both orderings are equal.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline \multicolumn{5}{|c|}{**Smile**} \\ \hline Method & FID & sFID & MNAC & CD & FR \\ \hline Pre-CE & 1.87 & 4.63 & 3.48 & 3.05 & 99.82 \\ FR-CE & 8.31 & 10.30 & 3.43 & 1.68 & 99.97 \\ F-CE & 2.64 & 4.61 & 3.16 & 1.56 & 93.37 \\ \hline ACE & 1.27 & 3.97 & 2.94 & 1.73 & 99.86 \\ \hline \multicolumn{5}{|c|}{**Age**} \\ \hline Pre-CE & 3.93 & 6.71 & 3.76 & 3.17 & 99.55 \\ FR-CE & 7.10 & 9.09 & 3.13 & 2.66 & 99.77 \\ F-CE & 4.23 & 6.20 & 3.53 & 3.04 & 93.50 \\ \hline ACE & 2.08 & 4.62 & 2.94 & 2.82 & 99.35 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Refinement Ablation.** We show the importance of each component from ACE. FR stands for flip rate.
Figure 4: **Refinement Ablation.** We observe that pre-explanations can have out-of-distribution artifacts. After filtering them, the diffusion process creates in-distribution data, but there are unnecessary changes such as the background. ACE is capable of changing the key features while avoiding modifying unwanted structures.
real life, a characteristic not presented before. For instance, we were able to fool the classifier with real-world changes, as well as to find natural adversarial examples.
**Acknowledgements** Research reported in this publication was supported by the Agence Nationale pour la Recherche (ANR) under award number ANR-19-CHIA-0017.
|
2304.02890 | Precise probing and discrimination of third-generation scalar
leptoquarks | We explore the pair production of third-generation scalar leptoquark at the
Large Hadron Collider to next-to-leading order accuracy in QCD, matched to
parton shower for a precise probing of the stemming model. We propose to tag
two boosted top-like fatjets produced from the decay of heavy leptoquarks in
association with notably large missing transverse momentum and consider them as
the potential signal. Such a signal demonstrates the capability of a robust
discovery prospect in the multivariate analysis with different high-level
observables, including jet substructure variables. Various scalar leptoquark
models predict different chirality of the top quark appearing from the decay of
the leptoquark carrying same electromagnetic charge. We make use of the
polarization variables sensitive to the top quark polarization in order to
identify the underlying theory. | Anupam Ghosh, Partha Konar, Debashis Saha, Satyajit Seth | 2023-04-06T06:46:13Z | http://arxiv.org/abs/2304.02890v2 | # Precise probing and discrimination of third-generation scalar leptoquarks
###### Abstract
We explore the pair production of third-generation scalar leptoquark at the Large Hadron Collider to next-to-leading order accuracy in QCD, matched to parton shower for a precise probing of the stemming model. We propose to tag two boosted top-like fatjets produced from the decay of heavy leptoquarks in association with notably large missing transverse momentum and consider them as the potential signal. Such a signal demonstrates the capability of a robust discovery prospect in the multivariate analysis with different high-level observables, including jet substructure variables. Various scalar leptoquark models predict different chirality of the top quark appearing from the decay of the leptoquark carrying same electromagnetic charge. We make use of the polarization variables sensitive to the top quark polarization in order to identify the underlying theory.
Keywords: Leptoquark, QCD corrections, Boosted top, Jet substructure, Top polarization
###### Contents

* 1 Introduction
* 2 The models
* 3 Pair production at NLO+PS accuracy
* 4 Collider Analysis
  * 4.1 Background simulation
  * 4.2 Construction of Jet Substructure Variables
  * 4.3 Event Selection
  * 4.4 Multivariate Analysis
* 5 Distinguishing two models
  * 5.1 Polarization Variables
    * 5.1.1 Angular variable in the rest frame of (anti-)top
    * 5.1.2 Energy variables in the Lab frame
    * 5.1.3 Distributions of polarization variables
  * 5.2 Log-likelihood ratio test
* 6 Conclusions
* A Appendix
  * A.1 Distribution of daughter of top quark
  * A.2 Relation between \(\cos\theta^{\prime}_{\rm b}\) and z

## 1 Introduction
Leptoquark (LQ) is a hypothetical particle that couples to quark and lepton together. It carries both baryon number and lepton number and provides a means to unify quarks and leptons. It can appear in many interesting scenarios beyond the Standard Model, for example, Pati-Salam model [1; 2], Grand unified theory [3; 4], Composite model [5]_etc._, and therefore it remains as a very active area in experimental searches. In some of these models, the baryon number gets violated and that allows protons to decay. But the strong constraints from the non-observation of proton decay so far have pushed the masses of the leptoquark to a very high scale, typically around \(10^{16}\) GeV. However, imposing baryon number or lepton number conservation one gets a set of leptoquarks in the Buchmuller-Ruckl-Wyler (BRW) framework [6], which allows leptoquark masses to be in a range accessible to the collider searches. Also, such leptoquarks are favourable to explain anomalies observed in the B-meson decays in BaBar [7], Belle [8; 9; 10] and LHCb experiments [11; 12]. Note that recent results from the LHCb with 9 fb\({}^{-1}\) of data have made the anomaly disappear in the measurement of neutral current observables, _i.e._\(R_{K}\) and \(R_{K^{*}}\)1[13; 14]. Although these new results imply no lepton flavour universality violation in the flavour-changing neutral current (FCNC) from the decay of B-meson, they do not exclude the possibility of the existence of a TeV scale leptoquark. Such results only indicate that, if a TeV-scale leptoquark exists, either it may not couple to the bottom and strange quarks at the same time together with a lepton, or the couplings are such that they get cancelled in the ratio of the decay widths. Further implications of the LHCb results on the parameter space of different leptoquark models in various production channels have been discussed in [15].
Footnote 1: \(R_{K}=\frac{\Gamma(B\to K\mu^{+}\mu^{-})}{\Gamma(B\to Ke^{+}e^{-})}\) and \(R_{K^{*}}=\frac{\Gamma(B\to K^{*}\mu^{+}\mu^{-})}{\Gamma(B\to K^{*}e^{+}e^{-})}\)
In low-energy experiments, leptoquarks may be probed indirectly because they appear as off-shell states. Here optimal ratios are formed to reduce the uncertainty due to hadronic activities. However, the number of theory parameters that enter through such ratios also increases. Instead, in the present and future high-energy collider experiments, leptoquarks can be searched for directly and indirectly by looking into a specific production channel. In order to explain the observed anomalies, the leptoquark needs to couple to fermions of different generations, in addition to a quark and a lepton of the same generation. Although in most of the previous direct searches leptoquark couplings were considered generation-wise, recent experimental studies have been extended to include cross-generation fermion couplings [16; 17]. Bounds on the first and second generation scalar leptoquarks are obtained at the LHC considering the production of either two charged light leptons of the same flavour or one charged light lepton with sizeable missing transverse energy, together with a pair of jets. The limits on the parameters of the different components of the leptoquark model are obtained assuming different branching fractions. At 95% CL, the ATLAS collaboration has constrained the masses of the first two generations of scalar leptoquarks up to 1400 GeV, assuming 100% branching into certain decay modes with 36.1 fb\({}^{-1}\) data [18]. The CMS collaboration has also excluded masses below 1430 GeV and 1530 GeV for the first and second generation
respectively, with 35.9 fb\({}^{-1}\) data using the same branching fraction [19; 20]. For the recent bounds on the vector leptoquark from collider searches see Ref. [21; 22; 23].
In this work, we focus on the third-generation scalar leptoquarks. Such leptoquarks are searched by the ATLAS and CMS collaborations using different decay channels [21; 22; 24; 25]. In a recent analysis [26], the ATLAS collaboration extracted the limit for the up-type third-generation scalar leptoquark by reinterpreting the direct pair production of top squarks. Assuming LQ decaying into a top quark and neutrino for an integrated luminosity of 139 fb\({}^{-1}\) at the 13 TeV LHC, a lower limit of 1240 GeV is put on the LQ mass at 95% CL. This paper presents an alternative search strategy considering two top-like fatjets plus significant missing energy in the final state and we employ jet substructure variables and multivariate analysis therein. Given the already constrained parameter space, a relatively heavy leptoquark would naturally produce top quark at the boosted region once produced from its decay. Thus, it is prudent to identify such top quarks as a top-like fatjet from its hadronic decay. Note that the corresponding leptonic decay mode not only suffers from branching ratio suppression, but also identifying such leptons inside a jetty signature is a challenging task and therefore it affects the efficiency significantly. We observe that our result is consistent with the existing search and find that the third-generation LQ can be discovered with a significance of \(\geq 5\sigma\) for masses below 1380 GeV with 3000 fb\({}^{-1}\) data at the High-Luminosity LHC (HL-LHC).
Further, we put a limit on the LQ mass up to which the HL-LHC can exclude such LQ models at 95% confidence level. For the third generation leptoquark, the \(t\bar{t}\) plus missing energy channel was also used in Ref. [27], where the authors found that \(Z\)+jets is the main background, while \(t\bar{t}\)+jets is a negligible one. However, the mono-boson background can be controlled substantially by enforcing at least one b-tag inside the leading or sub-leading top-like fatjet. Such a demand in our analysis brings the mono-boson background on a similar footing as \(t\bar{t}\)+jets, thereby improving the result significantly.
We also discuss possible probes for distinguishing different scalar leptoquark models based on the same final state signature at the LHC. One proposal has been made to determine different leptoquark types of the same spin and different electric charge by measuring jet charge [28]. In this study, we show that in the context of third-generation up-type leptoquark measuring the polarization of top quark resulting from the leptoquark decay can be an efficient way to distinguish scalar leptoquark models of same electric charge.
As the top quark decays before it hadronizes, its spin information can be obtained from its decay products2[29]. Top quark polarization has been studied for more than last thirty years [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. Determination of the polarization of boosted top-quark is studied in [43]. The possibility of distinguishing two models in the \(t\bar{t}\tau\bar{\tau}\) channel was explored before for scalar leptoquark in Ref. [44] without signal-to-background study. The prospect of distinguishing a scalar leptoquark from the background based on polarization variables at the LHC was shown to be small [45]. The potential of discriminating two specific BSM scenarios in mono-top search at the LHC using top polarization has been explored in [46].
We set our probe strategy based on two chosen leptoquark models, namely \(S_{3}\) and \(R_{2}\), that produce the same pair production cross-section, but for which the top quark is produced as left and right chiral, respectively. We find that the difference in the kinematic distributions of these two models due to the different chirality has a minimal effect on separating the signal from the background. It leads to almost identical mass limits for the exclusion and discovery potential of these two models. However, one can use polarization variables like the ratio of the b-jet energy to the reconstructed top jet energy to distinguish the two models at the 14 TeV LHC and a futuristic 27 TeV collider (HE-LHC).
We study the signal events at the next-to-leading order (NLO) in QCD matched to parton shower (PS) for reduced scale uncertainties and realistic results. By matching the fixed-order (FO) NLO correction with the parton shower (PS), we get more accurate results for different kinematic distributions as it resums large leading logarithms in the collinear region. We show the effect of NLO+PS calculations on different kinematic distributions of the leptoquarks.
The rest of the paper is organized as follows. In Sec. 2, we describe the third-generation scalar leptoquark models. In Sec. 3, we show the effect of NLO calculations. We study the impact of parton shower over the fixed-order (FO) NLO calculation, k-factor variation in differential distributions, and reduction of scale uncertainties at the NLO+PS accuracy. In Sec. 4, we describe our search strategy and provide details on multivariate analysis used to discriminate the signal and the background. In Sec. 5, we discuss how the polarization observables can be instrumental in distinguishing two above mentioned models. Finally, we summarize and conclude in Sec. 6.
## 2 The models
Under the Standard Model (SM) gauge group \(SU(3)_{c}\otimes SU(2)_{L}\otimes U(1)_{Y}\), there are in total six species of scalar leptoquarks, namely \(S_{3}\), \(R_{2}\), \(\tilde{R}_{2}\), \(\tilde{S}_{1}\), \(S_{1}\) and \(\bar{S}_{1}\). Since a quark transforms as a triplet of \(SU(3)_{c}\), a leptoquark should also transform as the same multiplet of \(SU(3)_{c}\) in order to form gauge invariant interaction terms. In Tab. 1, we show the SM quantum numbers of all scalar leptoquarks. The subscripts on the model name denote their \(SU(2)_{L}\) quantum numbers. If two or more models have the same \(SU(2)_{L}\) quantum number but different hypercharges, a tilde or bar is used to identify them. The component fields of the electroweak multiplets are written in the third column of the table with superscripts denoting their electric charges. In this article we are interested in studying the third generation scalar leptoquarks only. Various decay channels of the component fields for third generation leptoquarks are written inside parentheses.
It is interesting to notice that in only two models fields transform as \(\mathbf{3}\) and for the rest of the models they transform as \(\mathbf{\bar{3}}\) under \(SU(3)_{c}\). Let us, for example, consider the \(S_{3}\) model in which fields transform as \(\mathbf{\bar{3}}\). The reason behind this transformation is that the fields in \(S_{3}\) should couple to a quark doublet Q and lepton doublet L, as it transforms as \(\mathbf{3}\) under \(SU(2)_{L}\)3. But it can couple to only \(\bar{Q^{C}}L\), not with \(\bar{Q}L\), since the latter is zero. As
\(\bar{Q}^{C}\) transforms as \({\bf 3}\), \(S_{3}\) would transform as \(\bf\bar{3}\)4. Obviously the conjugate of \(S_{3}\) transforms as \({\bf 3}\); however, in that case the lepton doublet precedes the quark doublet in the Lagrangian. Here we label a leptoquark as a field (as opposed to the conjugate field) if, in the interaction term, the quark precedes the lepton. Transformation properties of the other leptoquarks under \(SU(3)_{c}\) can be understood in a similar way.
Footnote 4: For \(R_{2}\) model, fields transform as \({\bf 2}\) under \(SU(2)_{L}\) and therefore one fermion needs to be doublet, while the other one needs to be singlet. The doublet and singlet are left and right handed respectively. Hence, in this case, interaction with charge conjugated quark field vanishes, while the one without charge conjugation survives. This explains why \(R_{2}\) transforms as \({\bf 3}\) under \(SU(3)_{c}\).
One might be interested to probe third generation up-type scalar leptoquark component fields which have \(\frac{2}{3}e\) electric charge. There are four such component fields, namely \(S_{3}^{-\frac{2}{3}}\), \(R_{2}^{\frac{2}{3}}\), \(\tilde{R}_{2}^{\frac{2}{3}}\), and \(\bar{S}_{1}^{-\frac{2}{3}}\). At the LHC, the first two fields can give two top fatjets plus missing energy as the signature, whereas the last two, depending on the right handed heavy neutrino decay mechanisms, will give more complicated and model dependent signatures. In this paper, we are interested to study the phenomenology of the \(S_{3}^{-\frac{2}{3}}\) and \(R_{2}^{\frac{2}{3}}\) fields only.
The kinetic term for the generic scalar Leptoquark (S) can be written as,
\[{\cal L}_{\rm kin}=(D_{\mu}S)^{\dagger}(D^{\mu}S)-M_{S}^{2}S^{ \dagger}S\,. \tag{1}\]
Here the covariant derivative \(D_{\mu}\) is given as,
\[D_{\mu}=\partial_{\mu}-ig_{s}\lambda^{a}G_{\mu}^{a}\,, \tag{2}\]
where \(g_{s}\) is the strong coupling, \(\lambda^{a}\) and \(G^{a}\) (\(a=1,...,8\)) denote the Gell-Mann matrices and gluon fields respectively. The above Lagrangian gives rise to the following two vertices: (_i_) gluon-LQ-LQ, (_ii_) gluon-gluon-LQ-LQ. The Feynman rules for these vertices are independent of the type of leptoquarks.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Models & \({}_{(SU(3)_{c},\,SU(2)_{L},\,U(1)_{Y})}\) & Components \& Decay \\ \hline \(S_{3}\) & \((\bar{3},3,\frac{1}{3})\) & \(S_{3}^{\frac{4}{3}}(\tilde{b},\tau^{+})\), \(S_{3}^{\frac{1}{3}}((\tilde{t},\tau^{+}),(\tilde{b},\tilde{\nu}_{\tau}))\), \(S_{3}^{-\frac{2}{3}}(\tilde{t},\tilde{\nu}_{\tau})\) \\ \hline \(R_{2}\) & \((3,2,\frac{7}{6})\) & \(R_{2}^{\frac{5}{3}}(t,\tau^{+})\), \(R_{2}^{\frac{2}{3}}((t,\tilde{\nu}_{\tau}),(b,\tau^{+}))\) \\ \hline \(\tilde{R}_{2}\) & \((3,2,\frac{1}{6})\) & \(\tilde{R}_{2}^{\frac{2}{3}}((t,\tilde{N}_{\tau}),(b,\tau^{+}))\), \(\tilde{R}_{2}^{-\frac{1}{3}}((b,\tilde{\nu}_{\tau}),(b,\tilde{N}_{\tau}))\) \\ \hline \(\tilde{S}_{1}\) & \((\bar{3},1,\frac{4}{3})\) & \(\tilde{S}_{1}^{\frac{4}{3}}(\tilde{b},\tau^{+})\) \\ \hline \(S_{1}\) & \((\bar{3},1,\frac{1}{3})\) & \(S_{1}^{\frac{1}{3}}((\tilde{t},\tau^{+}),(\tilde{b},\tilde{\nu}_{\tau}),(\tilde {b},\tilde{N}_{\tau}))\) \\ \hline \(\bar{S}_{1}\) & \((\bar{3},1,-\frac{2}{3})\) & \(\bar{S}_{1}^{-\frac{2}{3}}(\tilde{t},\tilde{N}_{\tau})\) \\ \hline \end{tabular}
\end{table}
Table 1: All the possible scalar leptoquark models which give gauge invariant terms in the Lagrangian under the SM gauge group transformations. To learn about the naming convention used for the models, see the text.
The quantum numbers of the leptoquark \(S_{3}\) are such that it can allow a diquark coupling. However, without baryon or lepton number conservation, for a TeV-scale leptoquark this coupling has to be extremely tiny, as otherwise it would lead to proton decay 5. As this coupling is so strongly constrained, it does not play any role in our analysis6. The interaction terms for the third generation scalar leptoquarks \(S_{3}\) and \(R_{2}\) of charge \(\frac{2}{3}e\), with a quark and a lepton, are given by [47],
Footnote 5: The models \(R_{2}\) and \(\tilde{R}_{2}\) which do not allow any diquark coupling are called genuine leptoquark. The rest four scalar leptoquark models allow diquark couplings.
Footnote 6: The diquark coupling can also be forbidden by demanding either baryon number or lepton number conservation.
\[\mathcal{L}_{\text{Int}}^{S_{3}^{\frac{2}{3}}} =y_{S_{LL}}*\bar{t_{L}^{C}}\ v_{\tau}\ S_{3}^{-\frac{2}{3}}+h.c., \tag{3}\] \[\mathcal{L}_{\text{Int}}^{R_{2}^{\frac{2}{3}}} =y_{R_{RL}}*\bar{t_{R}}v_{\tau}\ R_{2}^{\frac{2}{3}}+y_{R_{LR}}* \bar{b_{L}}\tau_{R}\ R_{2}^{\frac{2}{3}}+h.c., \tag{4}\]
where "RL" in \(y_{R_{RL}}\) signifies that the chiralities of the quark and lepton are right-handed and left-handed, respectively. Other subscripts also carry the same convention. As \(S_{3}^{\frac{2}{3}}\) has only one decay channel (_i.e.,_\(S_{3}^{\frac{2}{3}}\to t_{L}\nu_{\tau}\)), it has 100% branching fraction for it. Although \(R_{2}^{\frac{2}{3}}\) has two decay channels, in our present analysis we shall assume 100% branching fraction to its \(t_{R}\tilde{\nu}_{\tau}\) decay mode, which can easily be scaled to other values as required7.
Footnote 7: For other branching fraction, the production cross-section of leptoquark pair will also depend on \(y_{R_{LR}}\) in the five flavor scheme, since a t-channel production diagram will appear when \(y_{R_{LR}}\) is non-zero. However, in Ref [48] it has been shown that the dependence of the cross-section on this parameter is quite small.
## 3 Pair production at NLO+PS accuracy
We consider signal events at the NLO in QCD matched to parton shower. The production of events at the NLO(FO) QCD accuracy requires calculating amplitudes of LO, virtual and real-emission Feynman diagrams. We show all the LO and a few virtual Feynman diagrams for the pair production of scalar leptoquarks in Fig. 1. The real-emission diagrams are not shown which are tree level diagrams with an extra gluon or light quark. The diagrams are drawn using _JaxoDraw_ package [49]. The events are produced using _MadGraph5_aMC@NLO_[50]. For the signal, we first write the model in _FeynRules_[51] and use _NLOCT_[52] package8 to produce the UFO model [54]. This UFO model is then used in _MadGraph5_aMC@NLO_ to generate events at the NLO(FO) accuracy. To account for the infrared divergence in real emission processes, _MadGraph5_aMC@NLO_ uses the FKS subtraction method [55; 56].
Footnote 8: The NLOCT package calculates the UV and R2 terms of the OPP method[53].
In Tab. 2, we show the cross-sections for the production of pair of scalar leptoquarks of mass \(\text{M}_{\text{LQ}}=1300\) GeV at 14 TeV LHC at LO and NLO(FO). The cross-sections for both the models \(S_{3}^{\frac{2}{3}}\) and \(R_{2}^{\frac{2}{3}}\) are same up to Monte Carlo uncertainty, as expected from the discussion in the previous section. Corrections due to the NLO QCD effects are around 10%.
We have used _NNPDF23_lo_as_0119_qed_ and _NNPDF23_nlo_as_0119_qed_ parton distribution functions, respectively, for the LO and NLO calculations. The partonic center of mass energy is used as the central choice for the renormalisation and factorization scales. For the scale variation study, we vary the renormalization and factorization scales up and down by a factor of two, resulting in a total of nine points including the central choice. The upper and lower envelopes of the variations of the cross-section due to these different choices of scales are shown as the percentage change from the central cross-section in the superscript and subscript, respectively. From the table, we see the NLO QCD correction here reduces the scale uncertainty by around a factor of two.
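As an illustration of how these envelopes are extracted, the following minimal Python sketch compares the nine cross-section values with the central one; only the central number is taken from Tab. 2, while the remaining eight entries are placeholders for the scale-varied results.

```python
import numpy as np

# Cross-sections (in pb) for the nine (mu_R, mu_F) choices obtained by varying each
# scale up and down by a factor of two around the central choice.  Only the first
# (central) value is taken from Tab. 2; the other eight numbers are placeholders.
sigma = np.array([0.7229, 0.731, 0.702, 0.778, 0.668, 0.745, 0.690, 0.801, 0.617])

central = sigma[0]
up = 100.0 * (sigma.max() - central) / central    # upper envelope, in percent
down = 100.0 * (central - sigma.min()) / central  # lower envelope, in percent
print(f"sigma = {central:.4f} pb  +{up:.1f}% / -{down:.1f}%")
```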
The NLO(FO) results discussed in the above two paragraphs can give distributions of different kinematic variables using weighted events, but unweighting of these events cannot be done as the matrix elements are not bounded in this case [50]. Also in this case, the result is not physical in the low \(p_{T}\) region. However, unweighted events can be produced when the calculation is matched to the parton shower, making use of the MC@NLO formalism [57]. Results at the NLO+PS accuracy give a correct description of the low \(p_{T}\) region. For showering of events, we use _Pythia8_[58]. In Fig. 2, we see that the NLO+PS calculation, compared to the fixed-order one, reduces the cross-section at the lower transverse momentum region of the leptoquark pair system \(\rm p_{T}(S_{3}^{+\frac{2}{3}}S_{3}^{-\frac{2}{3}})\) due to the Sudakov suppression.
\begin{table}
\begin{tabular}{|c|c|c|} \hline model order & LO & NLO(FO) \\ \hline \(S_{3}^{\frac{2}{3}}\) & \(0.6621^{+37.8\%}_{-25.8\%}\) & \(0.7229^{+14.5\%}_{-14.7\%}\) \\ \hline \(R_{2}^{\frac{2}{3}}\) & \(0.6631^{+37.8\%}_{-25.8\%}\) & \(0.7163^{+14.9\%}_{-14.8\%}\) \\ \hline \end{tabular}
\end{table}
Table 2: Cross-sections for the pair production of scalar leptoquarks of mass \(\rm M_{\rm LQ}=1300\) GeV at the 14 TeV LHC. The scale variations are shown in the subscript and superscript.
Figure 1: In the upper row, all possible prototype Born diagrams are shown. In the lower row, only a few prototype virtual diagrams are shown.
In Fig. 3, we show LO+PS and NLO+PS normalized distributions of MET and \(\rm{Log}_{10}\) [\(\rm{p_{T}(S_{3}^{+\frac{2}{3}}S_{3}^{-\frac{2}{3}})}\)] on the upper panels of two subfigures. The shapes of the MET distributions for LO+PS and NLO+PS are identical and they peak around 700 GeV. For the \(\rm{p_{T}(S_{3}^{+\frac{2}{3}}S_{3}^{-\frac{2}{3}})}\) distribution in the right figure, the peak for NLO+PS is slightly shifted towards the left of the LO+PS one and they peak in the range of 100-300 GeV. On the lower panels, we show the \(k\)-factor for the differential distribution, i.e. the ratio of the differential NLO+PS cross-section to the LO+PS one. In the left figure for MET, we see that for the shown range the k-factor at different bins stays nearly the same and takes a value around 1.1. In the right figure, the differential k-factor is not flat for \(\rm{Log}_{10}\) [\(\rm{p_{T}(S_{3}^{+\frac{2}{3}}S_{3}^{-\frac{2}{3}})}\)] and therefore scaling the leading order events by a constant k-factor would not give precise results.
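A bin-wise differential k-factor of this kind is just the ratio of the two binned, weighted distributions. A schematic numpy sketch on toy weighted events (the exponential MET spectra and total rates below are placeholders, not the actual LO+PS and NLO+PS samples):

```python
import numpy as np

# Toy weighted events: MET values and per-event weights for LO+PS and NLO+PS samples;
# the weights are chosen so that each sample integrates to an illustrative total rate.
rng = np.random.default_rng(1)
met_lo,  w_lo  = rng.exponential(700.0, 100000), np.full(100000, 0.662 / 100000)
met_nlo, w_nlo = rng.exponential(700.0, 100000), np.full(100000, 0.723 / 100000)

bins = np.linspace(0.0, 2000.0, 21)
h_lo,  _ = np.histogram(met_lo,  bins=bins, weights=w_lo)   # binned d(sigma), LO+PS
h_nlo, _ = np.histogram(met_nlo, bins=bins, weights=w_nlo)  # binned d(sigma), NLO+PS

# Bin-wise differential k-factor; empty LO bins are masked to avoid division by zero.
k = np.divide(h_nlo, h_lo, out=np.zeros_like(h_nlo), where=h_lo > 0)
print(np.round(k, 2))
```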
On the upper panel of Fig. 4, we show differential distribution of cross-section with respect to the top transverse momentum at the LO+PS and NLO+PS level for the central
scale choice. We see that the NLO+PS corrections lead to increased cross-section at every bin. In the lower panel, the effect of scale variation is shown as red and blue bands, where a band is drawn between the upper and lower envelopes of different results for different scale choices. It can be seen that the scale variation of NLO+PS result is significantly smaller compared to the LO+PS one, confirming that the NLO QCD correction leads to more accurate result in addition to the enhancement in the cross-section.
## 4 Collider Analysis
We consider pair production of \(\frac{2}{3}e\)-charged third-generation scalar leptoquarks (\(S_{3}^{2/3}\) and \(R_{2}^{2/3}\)) and try to probe them at the 14 TeV LHC with two top-like fatjets plus large missing transverse momentum. Third-generation scalar leptoquark pair production is possible only through gluon fusion and \(q\bar{q}\) annihilation, and hence the cross-section is independent of any model-dependent coupling and depends only on the leptoquark mass. We consider NLO QCD corrections matched to parton shower of the LQ pair production channel and few representative diagrams are already shown in Fig. 1. Equations 3 and 4 show the decay modes of \(S_{3}^{2/3}\)and \(R_{2}^{2/3}\), respectively. We consider decay of \(R_{2}^{2/3}\) fully into a top quark and a neutrino. Since the current ATLAS study [26] excludes the third-generation LQ of mass lower than 1.24 TeV, the top quark originating from the decay of heavy LQ will have a high boost. The top quark will decay further, and all the decay components will start collimated resulting into a boosted large-radius jet, called top fatjet (\(J_{t}\)). We consider the hadronic decay of the top quarks. So, in the final state, we have two boosted top-like fatjets and a significant missing transverse momentum. We use jet substructure variables, missing energy, and other high-level observables to distinguish the signal from
Figure 4: In the upper panel, we show the distribution of \((p_{T})_{\rm top}\) at LO+PS and NLO+PS accuracies. The bands in the lower panel show the scale variation of the distribution with respect to the central value. The bands are drawn between the envelopes of the different distributions arising from the different scale choices.
the SM background. The signal topology is given below,
\[pp\to S_{3}^{2/3}S_{3}^{-2/3}\;[\text{QCD}]\to(t\nu_{\tau})(\bar{t} \bar{\nu}_{\tau})j\Rightarrow 2J_{t}+\not{E}_{T}+X\,, \tag{4.1}\] \[pp\to R_{2}^{2/3}R_{2}^{-2/3}\;[\text{QCD}]\to(t\bar{\nu}_{\tau})( \bar{t}\nu_{\tau})j\Rightarrow 2J_{t}+\not{E}_{T}+X\,,\]
where the top quarks coming from the \(S_{3}^{2/3}\) and \(R_{2}^{2/3}\) decay are respectively left and right chiral.
### 4.1 Background simulation
All the background processes that can potentially mimic the signal are included in our analysis. Each background process is generated with two to four additional QCD jets and matched according to the MLM scheme [59; 60] with the virtuality-ordered Pythia shower. PDF sets, renormalization, and factorization scales that are used in our analysis remain the same as described in Sec. 3. The showered events are then passed through Delphes3[61] for detector simulation purposes, and we use the default CMS card provided there. Particle-flow towers and tracks are clustered to form anti-kT jets of radius parameter 0.5. Fatjets (\(J\) or \(J_{t}\)) of radius 1.5 are constructed with the Cambridge-Aachen (CA) algorithm [62] using Fastjet 3.2.2[63].
\(t\bar{t}+\) jets: One of the main backgrounds for our signal process is the pair production of top quarks when one of the top quarks decays hadronically and the other decay leptonically. The top quark that decays hadronically is reconstructed as top-fatjet. The neutrino from the leptonic decay of the other top quark and the lepton that escapes detection provide missing energy (MET or \(\not{E}_{T}\)), while another fatjet comes from the QCD radiation or b-jet. Hadronic decay of both top quarks can give two boosted top-fatjets; however, the requirement of significant missing energy reduces this background compared to the previous setup by a factor of 100, since the MET comes from the mis-measurement of the hadronic activities. This background is produced with two additional radiations and matched with the MLM matching scheme.
\(Z+\) jets: Another main background of our signal is the inclusive Z-boson production, where the \(Z\)-boson decays invisibly. This process is generated with four extra partons, and the MLM matching is used. Two fatjets essentially originate from the QCD jets.
\(W+\) jets: It contributes considerably but is smaller than \(Z+\) jets background. When the W boson decays leptonically, the missing energy comes from the neutrino and the lepton that escape detection. This background is also generated with four partons following MLM matching and here also the fatjets come from the extra radiations.
Since our analysis requires large missing energy, we generate \(Z+\) jets and \(W+\) jets backgrounds with a generation-level hard-cut \(\not{E}_{T}>100\) GeV for better statistics.
\(tW+\) jets: Single top quark production at the LHC in association with the \(W\) boson contributes considerably as a background, which is generated with two extra partons using
MLM matching. Top quark decays hadronically to give rise to a boosted top-like fatjet, while another fatjet comes from the QCD radiation. The neutrino with the missing lepton from \(W\) decay is the source of the missing energy.
\(VV\)+ jets: A small contribution can come from the diboson production, which can be classified into three different categories, \(WZ\), \(WW\), and \(ZZ\), where all of these are matched with two extra partons applying MLM matching scheme. \(WZ\) contributes the most among these three, where \(Z\) boson decays invisibly to produce missing energy and hadronic decay of the \(W\) boson gives one fatjet. Even though \(WW\) and \(ZZ\) contribute almost negligibly we keep these backgrounds in our analysis. In either case, one of them decays hadronically and the other one decays leptonically (\(W\)) or invisibly (\(Z\)). In all these three processes, another fatjet comes due to the QCD radiation.
\(t\bar{t}Z\): The cross-section of \(t\bar{t}Z\) is smaller than any of the above mentioned background processes, but we keep this too in our analysis. This process becomes signal like when \(Z\)-boson decays invisibly and two tops are reconstructed as top-like fatjets. This process gives almost negligible contribution compared to \(Z\)+ jets and \(t\bar{t}\)+ jets backgrounds. We omit \(t\bar{t}W\) background since its contribution is found to be even more suppressed.
QCD background: The di-jet production cross-section is vast at the LHC; even after constructing two fatjets, a huge number of events remains from this background. The requirement of large missing energy gives an additional suppression of order 100, since MET here can only occur due to the mis-measurement of hadronic activities. An additional suppression of order 50 comes from the requirement of a b-tagged fatjet. So, the QCD background is found to be negligible compared to the other backgrounds and therefore we do not include it in our analysis.
The background processes considered in our analysis are normalized with the available higher-order QCD corrected production cross-section, as presented in Tab. 3.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Background & Ref & \(\sigma\) (pb) \\ \hline \(t\bar{t}\)+ jets & [64] & 988.57 [\(N^{3}\)LO] \\ \hline \(tW\)+ jets & [65] & 83.1 [\(N^{2}\)LO] \\ \hline \(Z\)+ jets & [66, 67] & \(6.33\times 10^{4}\) [\(N^{2}\)LO] \\ \hline \(W\)+ jets & & \(1.95\times 10^{5}\) [NLO] \\ \hline \(ZZ\)+ jets & & 17.72 [NLO] \\ \hline \(WW\)+ jets & [68] & 124.31 [NLO] \\ \hline \(WZ\)+ jets & & 51.82 [NLO] \\ \hline \end{tabular}
\end{table}
Table 3: Higher-order QCD corrected production cross-sections of different background processes at the 14 TeV LHC used in our analysis, where the order of QCD correction is presented in brackets.
### 4.2 Construction of Jet Substructure Variables
Jet substructure variables provide good efficiencies when analyzing boosted topologies. The substructure variables that we use in our analysis are listed below.
Pruned Jet Mass: Jet mass is a good variable for separating a boosted top-like fatjet from the boosted \(W/Z\) boson or the QCD fatjets. Additional soft and wide angle radiations from the underlying QCD interactions can contribute to the fatjet mass. So for realistic predictions, one needs to remove those contributions. Pruning, filtering, and trimming [69; 70; 71; 72] are different jet grooming techniques and we use pruning in our analysis. The fatjet mass is defined as \(M_{J}=\sqrt{(\sum_{i\epsilon J}p_{i})^{2}}\), where the four-momentum of the \(i\)-th constituent is denoted as \(p_{i}\). After clustering a fatjet using the CA algorithm, we de-cluster its constituents in each recombination step and remove the soft and wide-angle radiations from the fatjet. The merging of the \(i\)-th and \(j\)-th proto-jets into the fatjet is vetoed, and the softer one is removed, if the following conditions are satisfied,
\[Z=\min(P_{Ti},P_{Tj})/(P_{Ti}+P_{Tj})<Z_{\rm cut},\ \ \text{and}\ \ \Delta R_{ij}>R_{\rm fact}. \tag{4.2}\]
The angular separation between two proto-jets is \(\Delta R_{ij}\), and we choose \(R_{\rm fact}=0.86\sim\frac{m_{\rm top}}{P_{T,\rm top}}\)[72]. \(Z\) and \(P_{Ti}\) are the softness parameter and the transverse momentum of the \(i\)-th proto-jet respectively. We set \(Z_{\rm cut}=0.1\)[71] in our analysis.
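The pruning veto itself is a simple condition on each recombination step. A minimal sketch of the decision of Eq. (4.2), with the \(Z_{\rm cut}\) and \(R_{\rm fact}\) values used here and hypothetical proto-jet kinematics:

```python
import numpy as np

Z_CUT, R_FACT = 0.1, 0.86   # pruning parameters quoted in the text

def delta_r(eta1, phi1, eta2, phi2):
    dphi = abs(phi1 - phi2)
    dphi = min(dphi, 2.0 * np.pi - dphi)          # wrap the azimuthal difference
    return np.hypot(eta1 - eta2, dphi)

def prune_merge(pt_i, eta_i, phi_i, pt_j, eta_j, phi_j):
    """Return True if the i-j recombination is kept, False if the softer
    proto-jet is discarded, following the veto condition of Eq. (4.2)."""
    z = min(pt_i, pt_j) / (pt_i + pt_j)
    dr = delta_r(eta_i, phi_i, eta_j, phi_j)
    return not (z < Z_CUT and dr > R_FACT)

# A soft, wide-angle proto-jet pair fails the test and would be pruned away,
# while a hard, collinear pair is merged (all kinematics are placeholders).
print(prune_merge(400.0, 0.1, 0.0, 20.0, 1.2, 1.5))   # False -> vetoed
print(prune_merge(400.0, 0.1, 0.0, 350.0, 0.3, 0.2))  # True  -> merged
```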
N-subjettiness ratio: N-subjettiness is a jet shape variable that measures how the energy of a fatjet is distributed around different subjet axes and is defined as follows [73; 74],
\[\tau_{N}=\frac{1}{\mathcal{N}_{0}}\sum_{i}P_{T,i}\min\{\Delta R_{i,1},\Delta R _{i,2},\cdots\Delta R_{i,N}\}. \tag{4.3}\]
The summation runs over all the constituent particles of the jet. \(\mathcal{N}_{0}\) is the normalization factor, defined as \(\mathcal{N}_{0}=\sum\limits_{i}P_{T,i}R\), where \(P_{T,i}\) is the transverse momentum of the \(i\)-th constituent of the jet of radius \(R\). \(\Delta R_{i,K}=\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}\) is the angular separation of the \(i\)-th constituent of the jet from its \(K^{\rm th}\)-subjet axis in the pseudorapidity-azimuthal angle, _i.e.,_\(\eta-\phi\) plane. Rather than \(\tau_{N}\), the ratio \(\frac{\tau_{N}}{\tau_{N-1}}\) is a more effective discriminating variable between N-prong fatjets and SM background [73]. Our analysis uses \(\tau_{32}=\frac{\tau_{3}}{\tau_{2}}\) and \(\tau_{31}=\frac{\tau_{3}}{\tau_{1}}\) to differentiate top-fatjets from the SM background.
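A direct implementation of Eq. (4.3) only needs the constituent kinematics and a set of candidate subjet axes. The sketch below uses toy constituents and axes (all numbers are placeholders); in a full analysis the axes would typically come from declustering the fatjet itself.

```python
import numpy as np

def tau_n(pt, eta, phi, axes_eta, axes_phi, R=1.5):
    """N-subjettiness of Eq. (4.3): pt, eta, phi are arrays of fatjet constituents,
    (axes_eta, axes_phi) hold the N candidate subjet axes."""
    dphi = np.abs(phi[:, None] - np.asarray(axes_phi)[None, :])
    dphi = np.minimum(dphi, 2.0 * np.pi - dphi)
    deta = eta[:, None] - np.asarray(axes_eta)[None, :]
    dr_min = np.sqrt(deta**2 + dphi**2).min(axis=1)   # distance to nearest subjet axis
    return np.sum(pt * dr_min) / (np.sum(pt) * R)     # normalised by N0 = sum(pt) * R

# Toy constituents and candidate axes (placeholder values, GeV and eta-phi units).
pt  = np.array([300.0, 220.0, 180.0, 15.0, 8.0])
eta = np.array([0.10, 0.45, -0.30, 0.90, -0.80])
phi = np.array([0.05, 0.60, -0.40, 1.10, -1.00])
t3 = tau_n(pt, eta, phi, [0.10, 0.45, -0.30], [0.05, 0.60, -0.40])
t2 = tau_n(pt, eta, phi, [0.10, 0.40], [0.05, 0.50])
print(f"tau_32 = {t3 / t2:.3f}")
```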
### 4.3 Event Selection
Baseline-Selection Criteria: We apply the following pre-selection cuts (C1) to select events for further analysis.
* The radius parameter of the top fatjet is \(R\sim\frac{2m_{t}}{P_{T}}\), where \(P_{T}\) and \(m_{t}\) are the transverse momenta and top quark's mass, respectively. For each event, we reconstruct at least two fatjets using CA algorithm of radius parameter 1.5 with minimum transverse momentum \(P_{T}(J_{0}),P_{T}(J_{1})>200\) GeV
* The missing energy of each event should be greater than 100 GeV
* Since lepton is not present in the final state of our signal, we veto the events which contain any lepton of transverse momentum \(P_{T}(l)>10\) GeV and pseudorapidity \(|\eta(l)|<2.4\)
* A minimal cut on the azimuthal separation between any fatjet and the missing momentum \(\Delta\phi(J_{i},\not{E}_{T})>0.2\) is applied to minimise the hadronic mis-measurement contribution
Final selection cuts: After the primary selection, we apply the following cuts before passing events for multivariate analysis (MVA).
* Missing energy cut is raised from 100 GeV to 150 GeV, which reduces the background sharply
* We do additional b-tagging inside the leading (\(J_{0}\)) or subleading (\(J_{1}\)) fatjets
* We demand pruned mass of both the leading \(M_{J_{0}}\) and subleading \(M_{J_{1}}\) fatjets to be greater than 120 GeV
Tab. 4 displays the cut flow along with the cut efficiencies, anticipated number of events (in fb, multiplying with the luminosity gives the expected event numbers) for the signal and the background processes for the 14 TeV LHC. One can see that the higher missing energy cut, b-tagging within a fatjet, and the pruned fatjet masses are very effective in significantly reducing backgrounds while maintaining good signal acceptance. The principal backgrounds \(Z+\) jets and \(W+\) jets are drastically reduced when a b-jet is tagged within the leading or subleading fatjet, and their effects are nearly identical to that of the \(t\bar{t}+jets\) background (see the rows up to C3 in table 4).
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Cuts} & \(S_{3}\) & \(R_{2}\) & \(Z\) & \(W\) & \(t\bar{t}\) & \(tW\) & \(WZ\) & \(WW\) & \(ZZ\) & \(t\bar{t}Z\) & tot \\ & & +jets & +jets & +jets & +jets & +jets & +jets & +jets & +jets & BG \\ & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) \\ \hline \hline \multirow{2}{*}{C1} & 0.2315 & 0.232 & 2517.99 & 1366.91 & 690.65 & 366.91 & 93.53 & 25.90 & 11.51 & 5.24 & 5078.64 \\ & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) \\ \hline \multirow{2}{*}{C2} & 0.2258 & 0.2262 & 1640.29 & 762.59 & 302.16 & 152.52 & 58.35 & 11.51 & 6.973 & 3.96 & 2938.36 \\ & \([97.54\%]\) & \([97.55\%]\) & \([65.14\%]\) & \([55.79\%]\) & \([43.75\%]\) & \([41.57\%]\) & \([62.39\%]\) & \([44.44\%]\) & \([60.58\%]\) & \([75.57\%]\) & \([57.86\%]\) \\ \hline \multirow{2}{*}{C3} & 0.1810 & 0.1801 & 241.73 & 117.99 & 230.94 & 114.39 & 10.79 & 2.45 & 1.92 & 3.28 & 723.48 \\ & \([78.19\%]\) & \([77.63\%]\) & \([9.60\%]\) & \([8.63\%]\) & \([33.44\%]\) & \([31.18\%]\) & \([11.54\%]\) & \([9.46\%]\) & \([16.69\%]\) & \([62.60\%]\) & \([14.25\%]\) \\ \hline \multirow{2}{*}{C4} & 0.1047 & 0.1033 & 25.38 & 17.33 & 64.23 & 27.45 & 1.24 & 0.33 & 0.2 & 1.474 & 137.634 \\ & \([45.23\%]\) & \([44.53\%]\) & \([1.01\%]\) & \([1.27\%]\) & \([9.30\%]\) & \([7.48\%]\) & \([1.33\%]\) & \([1.27\%]\) & \([1.74\%]\) & \([28.13\%]\) & \([2.71\%]\) \\ \hline \end{tabular}
\end{table}
Table 4: The expected number of events (in fb, multiplying with the luminosity gives the expected event numbers) and cut efficiency for the signal \(S_{3}\) and \(R_{2}\) (1.3 TeV mass of leptoquark for both models) and all the background processes that contribute to the fatjets \(+\not{E}_{T}\) final state after implementing the corresponding cuts at the 14 TeV LHC are shown. The effectiveness of different kinematic cuts can be followed from top to bottom after applying (C1) Preselection cuts, (C2) \(\not{E}_{T}>150\) GeV, (C3) requiring at least one b-tag within \(J_{0}\) or \(J_{1}\), and finally (C4) \(M_{J_{0}},M_{J_{1}}>120\) GeV. After applying C4 cut, the remaining events are passed for the multivariate analysis.
Figure 5: After imposing \(\not\!\!E_{T}>150\) GeV and b-tagging inside \(J_{0}\) or \(J_{1}\), together with preselection cuts as indicated in the text, the normalized distribution of kinematic variables of the signal \(S_{3}\) (solid red), \(R_{2}\) (dashed black), and bin-wise stacked histogram of all the background processes are shown.
The normalised distributions of various kinematic variables of the signal \(S_{3}\) and \(R_{2}\), as well as bin-wise stacked histograms of all background processes after imposing \(\not{E}_{T}>150\) GeV and b-tagging inside \(J_{0}\) or \(J_{1}\), together with preselection cuts, are shown in Fig. 5, where leptoquark mass is set at 1.3 TeV. The contributions of individual background processes are represented by different colours: blue, green, orange, olive, and magenta, for \(t\bar{t}+\)jets, \(Z+\)jets, \(W+\)jets, \(tW+\)jets, and \(t\bar{t}Z\), respectively. Each background process is weighted by its effective cross-section after applying the cuts listed and normalized to the total cross-section.
The distributions of the leading and subleading fatjets' pruned masses are depicted in Fig. 5a and Fig. 5b respectively. The \(Z+\) jets background shows no peak in the \(M_{J_{0}}\) and \(M_{J_{1}}\) distributions, either near the \(Z\)-boson mass or around the top mass, as expected, since the fatjets originate from QCD radiation. One of the tops in the \(t\bar{t}+\) jets background decays hadronically and is reconstructed as a top-fatjet, while the other fatjet comes from QCD radiation. As a result, the \(M_{J_{0}}\) distribution exhibits a peak near the top mass, but the \(M_{J_{1}}\) distribution does not.
It is also interesting to note that the fatjet mass distributions are slightly different for the two signals while other kinematic variables remain similar. This is a direct implication of the two different polarizations. The bottom quark and the \(W\)-boson travel in opposite directions in the top quark's rest frame to conserve linear momentum. As in the \(S_{3}\) model the top quark is left chiral, the majority of the \(b\)-quarks in the top quark's rest frame lie along the boost direction (this will be further discussed in the next section). This means that the majority of the \(W\) bosons emerge at an angle greater than 90 degrees to the boost. However, in the \(R_{2}\) model (top quark is right chiral), most of the \(b\)-quarks are found in the direction opposite to the boost in the top quark's rest frame. This suggests that most of the W bosons lie around the boost direction. As a result, in the lab frame, the quarks from the hadronic decay of the \(W\) boson are more collimated in the \(R_{2}\) model compared to the \(S_{3}\) model. When the \(W\) and the \(b\)-quark get combined to form a single large-radius three-prong fatjet, the \(S_{3}\) model produces fewer events than the \(R_{2}\). Because the \(W\) boson is heavier than the \(b\)-quark, \(S_{3}\) needs more boost to bring back all the \(W\) bosons along the boost direction compared to \(R_{2}\). As a result, the \(R_{2}\) model exhibits larger peaks in both the leading and subleading fatjet mass distributions around the top quark mass than the \(S_{3}\) model. Moreover, for the \(R_{2}\) model, we also observe a distinct peak at the \(W\)-boson mass in either of the fatjet mass distributions. This is because most \(R_{2}\) events carry \(W\) bosons along the boost direction in the top quark's rest frame, and in the lab frame the decay products of the W boson are more collimated compared to \(S_{3}\) events. However, we see more \(S_{3}\) events than \(R_{2}\) between the \(W\) boson and top quark mass because the overall cross-section is the same for both models.
Figures 5c and 5d respectively depict the transverse momenta of \(J_{0}\) and \(J_{1}\). From these distributions, we can observe that the signal is substantially harder than the background. Fig. 5e displays the \(M_{\rm eff}\) distribution, where \(M_{\rm eff}\) is the scalar sum of the transverse momenta of the visible jets plus MET.
\[M_{\rm eff}=\not{E}_{T}+\sum|\overrightarrow{P}_{iT}|\,, \tag{4.4}\]
where the summation runs over all the visible jets and \(\overrightarrow{P}_{iT}\) is the transverse momentum of the \(i\)-th jet. Global and inclusive quantities are used to define \(\sqrt{\hat{s}_{\rm min}}\)[75], the minimum partonic center-of-mass energy, and its distribution is shown in Fig. 5f. Neutrinos are the missing particles in our system, and the definition of \(\sqrt{\hat{s}_{\rm min}}\) is given by
\[\sqrt{\hat{s}_{\rm min}}=\sqrt{E^{2}-P_{Z}^{2}}+\not{E}_{T} \tag{4.5}\]
where \(E\) and \(P_{Z}\) are the total energy and longitudinal component of the total visible momentum in the event, respectively. Here visible means all the visible objects in the detector, e.g., jets, electrons, photons, and muons. The signal has a peak towards a larger value of \(\sqrt{\hat{s}_{\rm min}}\) compared to the background since the signal requires more partonic center-of-mass energy to produce two heavy LQs that subsequently decay into the top quark and neutrino.
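Both quantities are simple scalar combinations of the reconstructed objects. A minimal sketch of Eqs. (4.4) and (4.5) on placeholder event-level inputs:

```python
import numpy as np

def m_eff(jet_pts, met):
    """Scalar sum of visible jet pT plus missing transverse energy, Eq. (4.4)."""
    return met + np.sum(jet_pts)

def sqrt_s_min(vis_e, vis_pz, met):
    """Minimum partonic centre-of-mass energy of Eq. (4.5); vis_e and vis_pz are the
    total energy and longitudinal momentum of all visible objects."""
    return np.sqrt(vis_e**2 - vis_pz**2) + met

# Placeholder event: two hard fatjets, one extra jet, and 800 GeV of MET (all in GeV).
jet_pts = np.array([650.0, 520.0, 90.0])
print(m_eff(jet_pts, met=800.0))
print(sqrt_s_min(vis_e=1900.0, vis_pz=600.0, met=800.0))
```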
The N-subjettiness variables, \(\tau_{32}\), for both the leading and subleading fatjets are shown in figures 5g and 5h. \(\tau_{N}\) tries to quantify the number of subjets inside the fatjet. One would anticipate a smaller value of \(\tau_{32}\) for a boosted top-fatjet since the value of \(\tau_{3}\) for a three-prong fatjet is small and the value of \(\tau_{2}\) is large, therefore their ratio produces a smaller value. In contrast, backgrounds are mostly QCD dominated (1-prong) or coming from the weak bosons (2-prong), so the value of \(\tau_{2}\) is small for both QCD jets and fatjets originating from weak bosons, giving larger \(\tau_{32}\). The distributions show that the signal has considerably lower \(\tau_{32}\) values 9 than the backgrounds, indicating that the signal has a more three-prong structure than the background. Different chirality of the top quarks accounts for the slight difference in these distributions for \(S_{3}\) and \(R_{2}\) models. The distributions of \(\tau_{31}\) for \(J_{0}\) and \(J_{1}\) are shown in figures 5i and 5j. The distributions show that both the signal and the background peak at a lower value of \(\tau_{31}\), indicating that it is not as good as \(\tau_{32}\) for distinguishing the signal from the background.
Footnote 9: Although the signal peaks at a lower value of \(\tau_{32}\) than the background, the peak emerges at roughly 0.6, which is rather substantial. The three subjets of the top quark are highly collimated, therefore the \(\tau_{2}\) value is also small for the top fatjets, which causes the three-prong top fatjets peak to arise for the signal at a significantly large value of \(\tau_{32}\).
The distribution of missing transverse momentum is shown in Fig. 5k, where the background can be seen to drop sharply for large MET. In the case of signal, both the neutrinos from the decay of LQs, have equal access to the phase space, resulting in a nearly uniform distribution of the missing transverse momentum. Figures 5l, 5m, and 5n show, respectively, the distributions of the azimuthal separation of the leading and subleading fatjets from the \(\not{E}_{T}\) and the relative separation between the fatjets in the \(\eta-\phi\) plane. The distribution of \(M_{T2}\)[76; 77] is shown in Fig. 5o. \(M_{T2}\) is useful in measuring the mass of the parent particle, which is pair-produced at the collider, and subsequently decays into one visible object and one missing particle from the end-point of the distribution, and it is defined as follows
\[M_{T2}=\min_{\overrightarrow{p_{1T}}^{\rm invi}+\overrightarrow{p_{2T}}^{\rm invi}=\overrightarrow{\not{E}}_{T}}[\max\{M_{T}^{(1)},M_{T}^{(2)}\}]. \tag{4.6}\]
\(M_{T}^{(i)}\) (\(i=1,2\)) are the transverse masses of the LQ and anti-LQ as defined below,
\[(M_{T}^{(i)})^{2}=m_{i}^{2}+M_{\rm invi}^{2}+2(E_{iT}E_{iT}^{\rm invi}-\overrightarrow{P_{iT}}\cdot\overrightarrow{p_{iT}}^{\rm invi}),\hskip 28.452756pt\{i=1,2\}\,. \tag{4.7}\]
Since LQ decays into a top quark and massless neutrino, we set \(M_{\rm invi}^{2}=M_{\nu}^{2}=0\) and \(E_{iT}^{\rm invi}=|\overrightarrow{p_{iT}}^{\rm invi}|\), where \(\overrightarrow{p_{iT}}^{\rm invi}\) is the transverse momentum of an individual neutrino. \(\overrightarrow{p_{iT}}^{\rm invi}\) is constrained by the measured missing transverse momentum,
\[\overrightarrow{p_{1T}}^{\rm invi}+\overrightarrow{p_{2T}}^{\rm invi}=\overrightarrow{\not{E}}_{T}. \tag{4.8}\]
\(m_{i}\), and \(\overrightarrow{P_{iT}}\) (\(i=1,2\)) are the reconstructed mass and the transverse momentum of the (sub)leading top-fatjets, respectively. \(E_{iT}\) is the transverse energy of the fatjets defined as \(E_{iT}=\sqrt{m_{i}^{2}+\overrightarrow{P_{iT}}^{2}}\). One can observe from the distribution Fig. 5o that its end point correctly predicts the mass of the LQ (1.3 TeV). Since the SM particles have masses that are significantly less than the LQ mass, the background and signal distributions are quite well separated. So this variable not only predicts LQ mass, but also helps in background reduction.
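Since Eq. (4.6) involves a constrained minimisation, \(M_{T2}\) is usually evaluated numerically. A brute-force sketch that scans the splitting of the measured missing transverse momentum between the two neutrinos (the fatjet kinematics below are placeholders; dedicated libraries perform this minimisation far more efficiently):

```python
import numpy as np

def m_t2(m1, pt1, phi1, m2, pt2, phi2, metx, mety, n=200, scale=2000.0):
    """Brute-force M_T2 of Eq. (4.6) for massless invisibles: scan the splitting of the
    measured MET between the two neutrinos and minimise the larger transverse mass."""
    p1 = pt1 * np.array([np.cos(phi1), np.sin(phi1)])
    p2 = pt2 * np.array([np.cos(phi2), np.sin(phi2)])
    e1t, e2t = np.hypot(m1, pt1), np.hypot(m2, pt2)

    qx = np.linspace(-scale, scale, n)
    qy = np.linspace(-scale, scale, n)
    QX, QY = np.meshgrid(qx, qy)                  # invisible 1 transverse momentum
    RX, RY = metx - QX, mety - QY                 # invisible 2 takes the remainder

    q_t, r_t = np.hypot(QX, QY), np.hypot(RX, RY)
    mt1 = np.sqrt(np.maximum(m1**2 + 2.0 * (e1t * q_t - p1[0] * QX - p1[1] * QY), 0.0))
    mt2 = np.sqrt(np.maximum(m2**2 + 2.0 * (e2t * r_t - p2[0] * RX - p2[1] * RY), 0.0))
    return np.max(np.stack([mt1, mt2]), axis=0).min()

# Placeholder top-fatjet masses, pT, phi and MET components for one signal-like event.
print(m_t2(172.5, 600.0, 0.3, 172.5, 550.0, 3.0, metx=-300.0, mety=250.0))
```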
### 4.4 Multivariate Analysis
In the previous subsection, the distributions of several observables (without the C4 cut) that can be used as input variables for a sophisticated multivariate analysis using the gradient boosting technique were described. For the MVA input, we use loose cuts (up to C4), as mentioned in the preceding subsection. The last row of Tab. 4 shows the estimated amount of signal (in fb) from the two models, the contribution of different background processes, and the total background at the 14 TeV LHC after applying the MVA selection cut (C4). For the MVA, we use the adaptive Boosted Decision Tree (BDT) algorithm and construct two statistically independent signal and background event samples. The background is the weighted sum of the individual SM background processes. The MVA picks a subset of kinematic variables from a larger collection based on the linear correlation among the variables and their relative importance in distinguishing the signal from the background.
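The analysis itself uses the TMVA implementation of the adaptive BDT; purely as an illustration of the training step, an equivalent setup with scikit-learn's AdaBoostClassifier on toy signal-like and background-like inputs (the Gaussian spectra below are placeholders for the actual MVA variables) might look as follows.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 20000
# Toy stand-ins for three MVA inputs (MET, M_eff, DeltaR(J0, J1)); the signal is
# shifted to larger MET and M_eff, mimicking the separation seen in Fig. 5.
sig = np.column_stack([rng.normal(700, 200, n), rng.normal(2600, 500, n), rng.normal(3.0, 0.4, n)])
bkg = np.column_stack([rng.normal(300, 150, n), rng.normal(1500, 500, n), rng.normal(2.6, 0.6, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Statistically independent training and testing samples, as in the TMVA workflow.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=400, learning_rate=0.1)
bdt.fit(X_tr, y_tr)
print("test accuracy:", bdt.score(X_te, y_te))
```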
As expected from Eq. 4.4, we notice that \(P_{T}(J_{0})\) and \(P_{T}(J_{1})\) have large correlations with \(M_{\rm eff}\), and \(\sqrt{\hat{s}_{\rm min}}\) also exhibits high correlations with \(M_{\rm eff}\) due to their linear dependence on MET, as shown by Eqs. 4.4 and 4.5. We keep \(M_{\rm eff}\) because of its high relative importance compared to \(P_{T}(J_{0})\), \(P_{T}(J_{1})\), and \(\sqrt{\hat{s}_{\rm min}}\). A high correlation exists between \(M_{T2}\) and MET; however, we retain MET because it has the highest relative importance among all the variables in separating the signal from the background. Although \(M_{\rm eff}\) and MET exhibit a significant correlation in both the signal and background (as predicted by Eq. 4.4), we keep them both in our study since they have exceptionally high separation powers to distinguish the signal from the background. Fig. 6 exhibits the linear correlation coefficients between different variables for signal \(S_{3}\) (top left panel) and the corresponding background (top right panel). The bottom left and bottom right panels depict the signal \(R_{2}\) and its corresponding background. Positive and negative coefficients indicate whether two variables are correlated or anti-correlated. In the TMVA package [78], the linear correlation coefficient is calculated using the following formula,
\[\rho(x,y)=\frac{\text{cov}(x,y)}{\sigma_{x}\sigma_{y}}, \tag{4.9}\]
where the covariance between \(x\) and \(y\) is \(\text{cov}(x,y)=\langle xy\rangle-\langle x\rangle\langle y\rangle\) and \(\sigma_{x}\), \(\sigma_{y}\) are the standard deviation of these variables.
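A brief numerical check of Eq. (4.9) on toy data, mimicking the strong MET-\(M_{\rm eff}\) correlation discussed above (all numbers are placeholders):

```python
import numpy as np

def linear_correlation(x, y):
    """Linear correlation coefficient of Eq. (4.9), returned in percent."""
    cov = np.mean(x * y) - np.mean(x) * np.mean(y)
    return 100.0 * cov / (np.std(x) * np.std(y))

# Toy signal-like sample: M_eff is built from MET, so the two are strongly correlated.
rng = np.random.default_rng(3)
met = rng.normal(700.0, 200.0, 50000)
m_eff = met + rng.normal(1800.0, 400.0, 50000)   # MET plus visible jet activity
print(f"rho(MET, M_eff) = {linear_correlation(met, m_eff):.0f}%")
```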
The separation power of the different kinematic variables used in the MVA for the two models is presented in Tab. 5. This table shows that the most important variables for distinguishing
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Variable & \(\mathbf{E}_{T}\) & M\({}_{\rm eff}\) & \(\Delta R(J_{0},J_{1})\) & \(\Delta\phi(J_{1},\mathbf{E}_{T})\) & \(M(J_{0})\) & \(\tau_{32}(J_{1})\) & \(\tau_{32}(J_{0})\) & \(\Delta\phi(J_{0},\mathbf{E}_{T})\) & \(M(J_{1})\) & \(\tau_{31}(J_{1})\) & \(\tau_{31}(J_{0})\) \\ \hline \hline \(S_{3}\) & 59.98 & 49.43 & 23.44 & 21.42 & 5.99 & 4.15 & 3.99 & 3.83 & 2.33 & 0.99 & 0.95 \\ \hline \hline \(R_{2}\) & 59.33 & 50.63 & 21.97 & 20.87 & 6.42 & 5.85 & 4.82 & 4.36 & 2.49 & 1.49 & 1.11 \\ \hline \end{tabular}
\end{table}
Table 5: The method-unspecific relative importance (separation power) of the individual variables before employing the MVA.
Figure 6: Linear correlation coefficients (%) between different variables for signal \(S_{3}\) (top left panel) and corresponding background (top right panel); same for signal \(R_{2}\) (bottom left panel) and corresponding background (bottom right panel). Positive and negative coefficients show that two variables are correlated or anti-correlated, respectively. Missing entries indicate an insignificant correlation of less than one percent.
the leptoquark signal from the overwhelming background are the MET, \(M_{\text{eff}}\), the relative separation between the fatjets in the \(\eta-\phi\) plane, and the azimuthal separation between the subleading fatjet and MET. Due to improper selection of various (BDT-specific) parameters during training, the BDT method may result in overtraining. Overtraining can be prevented if the Kolmogorov-Smirnov probability is checked throughout training. We train the algorithm separately for the \(S_{3}\) and \(R_{2}\) models and ensure that there is no overtraining in our analysis. The top left panel of Fig. 7 shows the normalised distribution of the BDT output for the signal \(S_{3}\) (blue) and its background (red) for both training and testing samples, whereas the bottom left plot shows the same for the \(R_{2}\) model. We observe that for both models, signal and background are well separated. In the same figure, the top right plot illustrates the signal \(S_{3}\) (blue) and background (red) efficiencies, as well as the statistical significance (green) as a function of the cut applied to the BDT output, while the bottom right plot depicts the same for the \(R_{2}\) model.
The statistical significance of the two models at 3 ab\({}^{-1}\) integrated luminosity at the 14
Figure 7: The top-left plot depicts the distribution (normalized) of the BDT output for the training and testing samples for both the signal \(S_{3}\) (blue) and background (red) classes. The right plot depicts signal \(S_{3}\) (blue) and background (red) efficiencies, as well as statistical significance (\(\frac{N_{S}}{\sqrt{N_{S}+N_{R}}}\)) as a function of the cut applied to BDT output. The same for the \(R_{2}\) model is shown in the bottom left and bottom right plots.
TeV LHC and the signal-to-background ratio are shown in Tab. 6. There, \(N_{S}^{bc}\)10 and \(N_{SM}\) represent the total number of events for the signal and background before applying any cut to the BDT output, while \(N_{S}\) and \(N_{B}\) represent the same after applying an optimal cut BDT\({}_{\rm opt}\) to the BDT response. We observe that both models at the HL-LHC have discovery potential for the 1.3 TeV scalar leptoquark. The \(5\sigma\)-discovery and \(2\sigma\)-exclusion limits of these two models at the HL-LHC are presented in Tab. 7. There is a slight difference in the discovery potential of these two models because of their polarization. Note that explicitly the polarization variables have a negligible role compared to other variables such as MET, \(\Delta R(J_{0},J_{1})\), etc., as given in Tab. 5 for discovering the leptoquark signal at the LHC. However, we see significant differences in the \(J_{0}\) and \(J_{1}\) mass distributions and slight differences in the N-subjettiness distributions because of the different chirality of the top quarks of the two models. So, once the LQ signal is discovered at the LHC, we can use the polarization variables to distinguish these two models, which are described in detail in the next section. In our analysis, we find that a scalar LQ of mass 1270 GeV or smaller can be excluded at \(2\sigma\) with an integrated luminosity of 140 fb\({}^{-1}\), which is compatible with the existing ATLAS search and analysis. We also find that a luminosity around 1600 fb\({}^{-1}\) is required for the \(5\sigma\) discovery of a 1.3 TeV scalar LQ.
Footnote 10: Although we use full NLO events, if one uses LO events but normalizes with the total NLO cross-section, \(N_{S}^{bc}\) number for both models decreases by around 2%.
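The significance quoted in Tab. 6 follows from simple event counting. A minimal sketch using the post-BDT \(S_3\) yields of Tab. 6, together with the luminosity at which \(N_S/\sqrt{N_S+N_B}\) reaches \(5\sigma\) under this naive counting (which lands in the same ballpark as the luminosity quoted above):

```python
import numpy as np

def significance(ns, nb, lumi):
    """Statistical significance N_S / sqrt(N_S + N_B) for cross-sections in fb and
    an integrated luminosity in fb^-1."""
    s, b = ns * lumi, nb * lumi
    return s / np.sqrt(s + b)

# Post-BDT yields of Tab. 6 for the S3 model (in fb).
ns, nb = 0.03403, 0.04850
print(significance(ns, nb, 3000.0))        # roughly 6.5 at the HL-LHC, as in Tab. 6

# Smallest luminosity giving 5 sigma in this simple counting: solve s/sqrt(s+b) = 5.
lumi_5sigma = 25.0 * (ns + nb) / ns**2
print(lumi_5sigma)                         # O(1800) fb^-1 with these yields
```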
## 5 Distinguishing two models
If a leptoquark signature is observed at the collider in some particular final state, the next goal will be to distinguish different models in order to probe its genesis. In the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(N_{S}^{bc}\) (fb) & BDT\({}_{opt}\) & \(N_{S}\) (fb) & \(N_{B}\) (fb) & \(\frac{N_{S}}{\sqrt{N_{S}+N_{B}}}\) (\(\frac{N_{S}}{\sqrt{N_{B}}}\)), 3 ab\({}^{-1}\) & \(\frac{N_{S}}{N_{B}}\) \\ \hline \hline \(S_{3}\) & 0.1047 & 0.4080 & 0.03403 & 0.04850 & 6.5 (8.5) & 0.702 \\ \hline \(R_{2}\) & 0.1033 & 0.5303 & 0.04047 & 0.05677 & 7.1 (9.3) & 0.713 \\ \hline \(N_{SM}\) & 137.634 & & & & & \\ \hline \end{tabular}
\end{table}
Table 6: The table shows the effectiveness of the present search in terms of statistical significance for \(S_{3}\) and \(R_{2}\) models. Before applying any cuts to the BDT output, the total number of events for different models and the combined background are \(N_{S}^{bc}\) and \(N_{SM}\), respectively (as shown in Tab. 4). For the 14 TeV LHC, after employing an optimum cut (BDT\({}_{opt}\)) on the BDT response, the surviving number of signal and background events are provided by \(N_{S}\) and \(N_{B}\) (in fb), respectively. For quick access, the statistical significances corresponding to 3 ab\({}^{-1}\) luminosity are also shown.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\mathcal{L}=3ab^{-1}\) & \(S_{3}^{\frac{2}{3}}\) & \(R_{2}^{\frac{2}{3}}\) \\ \hline \(5\sigma\) discovery & 1380 GeV & 1370 GeV \\ \hline \(2\sigma\) exclusion & 1520 GeV & 1520 GeV \\ \hline \end{tabular}
\end{table}
Table 7: Discovery and exclusion reach at 14 TeV LHC for 3 \(ab^{-1}\) luminosity.
above section, we have seen that pair production of both \(S_{3}^{\frac{2}{3}}\) and \(R_{2}^{\frac{2}{3}}\) models can give two fatjets plus large missing energy signature. These two models decay to top quarks with different chiralities and this feature can help us in distinguishing them at the colliders. Top quark's polarization can be probed by studying the distribution of some particular kinematic variables of its decay products, which can in turn allow us to probe the type of the leptoquark. In the following subsection, we discuss some such polarization variables that can address the leptoquark identity.
### 5.1 Polarization Variables
There are different variables which can show top quark polarization dependence. In the following, we discuss a few of them.
#### 5.1.1 Angular variable in the rest frame of (anti-)top
In the rest frame of the top quark, if \(\theta_{i}\) is the angle between the decay particle \(i\) and the direction of the boost of the top quark, the differential distribution of the decay width \(\Gamma\) with respect to the angular variable \(\cos\theta_{\rm i}\) is given by,
\[\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_{i}}=\frac{1}{2}(1+{\rm P_{t}}\ {\rm k_{i}}\ \cos\theta_{\rm i}), \tag{5.1}\]
where \({\rm P_{t}}\) is the top quark polarization, which is \(+1\) for the right-handed top and \(-1\) for the left-handed top. \(k_{i}\) is the spin analyzing power of \(i\)-th decay particle. In Tab. 8, we show the spin analyzing power of different decay particles. In Appendix A.1, the spin analyzing power of bottom quark is derived. Similar distribution in the anti-top rest frame can be written as,
\[\frac{1}{\bar{\Gamma}}\frac{d\bar{\Gamma}}{d\cos\bar{\theta}_{\bar{i}}}=\frac{1}{2}(1+\bar{\rm P_{\bar{t}}}\ \bar{\rm k_{\bar{i}}}\ \cos\bar{\theta}_{\bar{\rm i}}), \tag{5.2}\]
where the entities with a bar are the corresponding quantities for the anti-top quark. Here as well, \(\bar{P}_{\bar{t}}\) is \(+1\) for a right-handed anti-top and \(-1\) for a left-handed anti-top. \(\bar{k}_{\bar{i}}\) is given by \(\bar{k}_{\bar{i}}=-k_{i}\). So it is evident that the distribution of the \(i\)-th decay particle coming from a right-handed top will be the same as the distribution of the \(\bar{i}\)-th decay product of a left-handed anti-top.
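The effect of Eq. (5.1) can be illustrated by sampling \(\cos\theta_{\rm b}\) for the two chiralities with \(k_b=-0.41\); the sketch below uses simple rejection sampling and checks the mean against the analytic expectation \(P_t k_b/3\) (the sample size is arbitrary):

```python
import numpy as np

def sample_cos_theta(p_t, k_i, n, rng):
    """Draw cos(theta_i) from (1/Gamma) dGamma/dcos(theta) = (1 + P_t k_i cos)/2
    of Eq. (5.1) by simple rejection sampling."""
    out = np.empty(0)
    while out.size < n:
        c = rng.uniform(-1.0, 1.0, 2 * n)
        u = rng.uniform(0.0, 1.0, 2 * n)
        keep = u < (1.0 + p_t * k_i * c) / (1.0 + abs(p_t * k_i))
        out = np.concatenate([out, c[keep]])
    return out[:n]

rng = np.random.default_rng(5)
k_b = -0.41   # spin analyzing power of the b quark from Tab. 8
for label, p_t in [("S3 (left-handed top, P_t = -1)", -1.0),
                   ("R2 (right-handed top, P_t = +1)", +1.0)]:
    c = sample_cos_theta(p_t, k_b, 200000, rng)
    print(f"{label}: <cos theta_b> = {c.mean():+.3f}  (expected {p_t * k_b / 3:+.3f})")
```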
The decay of the top quark gives rise to a mostly left handed (\(\lambda_{b}=-1\)) b-quark; the other component, _i.e._ the right handed one, is heavily suppressed because of the small mass of the b-quark11. It is known that the top quark decays 70% of the time to longitudinal
\begin{table}
\begin{tabular}{|c|c|c|} \hline Daughters & b & \(W^{+}\) \\ \hline \(k_{i}\) & -0.41 & +0.41 \\ \hline \end{tabular}
\end{table}
Table 8: Spin analyzing power of bottom quark and \(W^{+}\) coming from top decay.
(\(\lambda_{W^{+}}=0\)) and 30% of the time to one of the transverse (\(\lambda_{W^{+}}=-1\)) component of the W boson [79; 80]12. So for top quark, essentially only two decay configurations exist. In Fig. 8, we show these two configurations for decay of a right-handed top quark in its frame. To conserve the total spin in the decay process, the total spin of the b-quark and W boson system must be equal to \(\dfrac{1}{2}\). Moreover, we can write the spin state of the b-quark and W boson system in the basis of \(\left|+\right\rangle_{\hat{z}}\) and \(\left|-\right\rangle_{\hat{z}}\) states13 (with positive z-axis along the top boost direction). So to conserve third component of spin, only \(\left|+\right\rangle_{\hat{z}}\) component can contribute, as for the right-handed top quark spin is along the the boost direction. For the left diagram, the total spin of b-quark and W boson system makes an angle \((180-\theta)\) with the boost direction, whereas for the right diagram it makes angle \(\theta\). So the left diagram follows a \(\sin^{2}\frac{\theta}{2}\) distribution, whereas the right diagram follows a \(\cos^{2}\frac{\theta}{2}\) distribution14. Obviously, the weighted sum of these two distributions should lead to Eq. 5.115.
Footnote 12: The top quark decay to other transverse component (\(\lambda_{W^{+}}=+1\)) is almost negligible, as this requires right-handed b-quark (which is heavily suppressed) to conserve spin angular momentum.
Footnote 13: \(\left|+\right\rangle_{\hat{n}}=\cos\frac{\Theta}{2}\left|+\right\rangle_{\hat {z}}+\sin\frac{\Theta}{2}e^{i\Phi}\left|-\right\rangle_{\hat{z}}\), where \(\hat{n}\) is a unit vector along (\(\Theta\),\(\Phi\)) direction.
Footnote 14: \(\cos^{2}\frac{\Theta}{2}\big{|}_{\Theta=180-\theta}=\sin^{2}\frac{\theta}{2}\)
Footnote 15: \(0.7\)\(\sin^{2}\frac{\theta}{2}+0.3\)\(\cos^{2}\frac{\theta}{2}=0.5-0.2\)\(\cos\theta\)
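The weighted sum in footnote 15 can be cross-checked numerically. The following minimal sketch (generic NumPy code, not part of the original analysis) verifies that the 70/30 mixture of the two helicity configurations reproduces Eq. 5.1 for \(P_{t}=+1\) with \(k_{b}=-0.4\).

```python
import numpy as np

# decay angle of the b-quark in the top rest frame
theta = np.linspace(0.0, np.pi, 201)

# weighted sum of the two helicity configurations (footnote 15)
mixture = 0.7 * np.sin(theta / 2.0)**2 + 0.3 * np.cos(theta / 2.0)**2

# Eq. 5.1 with P_t = +1 (right-handed top) and k_b = -0.4
eq51 = 0.5 * (1.0 + 1.0 * (-0.4) * np.cos(theta))

assert np.allclose(mixture, eq51)
print("0.7 sin^2(t/2) + 0.3 cos^2(t/2) = 0.5 - 0.2 cos(t), consistent with Eq. 5.1")
```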
#### 5.1.2 Energy variables in the Lab frame
In the literature [44; 46; 36; 41], the two most discussed energy variables for the polarisation study are \(z=\frac{E_{b}}{E_{t}}\) and \(u=\frac{E_{l}}{(E_{l}+E_{b})}\). However, only the variable \(z=\frac{E_{b}}{E_{t}}\), the fraction of the top quark energy carried by the b quark in the lab frame, is relevant here, as the W boson originating from the top quark decays hadronically in our study. The variable "z" and \(\cos\theta_{\rm b}\) are fully correlated and they are related by the following relation [36] (see Appendix A.2),
\[\cos\theta_{\rm b}=\frac{1}{\beta_{\rm t}}\Big{(}\frac{2\rm m_{t}^{2}}{\rm m_ {t}^{2}-m_{W}^{2}}\rm z-1\Big{)}, \tag{5.3}\]
Figure 8: Decay diagrams of a right-handed top quark in its rest frame. The black dot represents the top quark. Thick colored arrows denote the spin of the particles. For the b-quark, essentially only the \(\lambda_{b}=-1\) component gets produced; the other component \(\lambda_{b}=+1\) is heavily suppressed because of its small mass. The top decays to the \(\lambda_{W^{+}}=0\) and \(\lambda_{W^{+}}=-1\) helicity components of \(W^{+}\) 70% and 30% of the time, respectively. As the other transverse component of the W boson, \(\lambda_{W^{+}}=+1\), requires a right-handed b-quark to conserve spin, it is also suppressed. So in effect only the two diagrams shown here contribute to the top quark decay.
where \(\beta_{t}\) represents the boost of the top quark in the lab frame. The distribution of the decay width with respect to z (using Eq. 5.1 and Eq. 5.3) can be given as [44],
\[\frac{1}{\Gamma}\frac{d\Gamma}{dz}=\frac{1}{\beta_{t}}\frac{m_{t}^{2}}{m_{t}^{2} -m_{W}^{2}}\,\left(1-P_{t}\ k_{f}\frac{1}{\beta_{t}}+P_{t}\ k_{f}\frac{1}{ \beta_{t}}\frac{2m_{t}^{2}}{m_{t}^{2}-m_{W}^{2}}z\right). \tag{5.4}\]
The same expression holds for the antitop quark with every quantity replaced by its corresponding barred counterpart.
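For illustration, the following sketch evaluates Eq. 5.4 for both top helicities and checks its normalization over the kinematically allowed z-range implied by Eq. 5.3; the mass values and the boost \(\beta_{t}\) are assumed here for the sake of the example and are not taken from the analysis.

```python
import numpy as np

mt, mW = 172.5, 80.4        # assumed top and W masses in GeV
beta, kb = 0.99, -0.4       # assumed top boost and b-quark spin analyzing power

a = (mt**2 - mW**2) / (2.0 * mt**2)            # Eq. 5.3: z = a * (1 + beta * cos(theta_b))
z = np.linspace(a * (1.0 - beta), a * (1.0 + beta), 4001)

def dGamma_dz(z, Pt):
    """Normalized z-distribution of Eq. 5.4 for top polarization Pt = +1 or -1."""
    pref = mt**2 / (beta * (mt**2 - mW**2))
    return pref * (1.0 - Pt * kb / beta + Pt * kb / beta * 2.0 * mt**2 / (mt**2 - mW**2) * z)

for Pt in (+1, -1):
    f = dGamma_dz(z, Pt)
    norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))   # trapezoidal rule by hand
    print(f"P_t = {Pt:+d}: normalization = {norm:.4f}, "
          f"f(z_min) = {f[0]:.2f}, f(z_max) = {f[-1]:.2f}")
```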
#### 5.1.3 Distributions of polarization variables
In Fig. 9, we show truth level normalized distributions of \(\cos\theta_{\rm b}\) and \(\frac{E_{b}}{E_{t}}\) at LO in the left and right subfigures, respectively. The distribution with respect to \(\cos\theta_{\rm b}\) can be understood from Eq. 5.1. In the \(S_{3}\) model, for most of the events, the b-quark in the rest frame of the top quark moves in the same direction as the boost of the top quark; obviously, the opposite happens for the \(R_{2}\) model. For the \(z=\frac{E_{b}}{E_{t}}\) variable, we see that for the \(S_{3}\) and \(R_{2}\) models the distributions peak near the right and left end of the plots, respectively. This can also be understood from the \(\cos\theta_{\rm b}\) distribution: for the \(R_{2}\) model, in the rest frame, the b-quarks of the majority of events move in the direction opposite to the boost, so their energy \(E_{b}\) is smaller and the distribution peaks towards the left. The reverse happens for the \(S_{3}\) model. Another interesting feature of the right figure is that the cross-section vanishes beyond \(z=0.8\). This happens because the b-quark alone cannot carry all of the top quark energy, as the W boson needs at least its rest mass energy, \(M_{W}\). In Fig. 10, we show these distributions after including the NLO calculation, showering effects, and applying the cuts up to C4 (discussed in Subsection 4.3) in the Delphes simulation. Here the distribution of the b-jet is found to differ from that of the b-quark because of showering effects and the formation of jets. Near the boost direction, i.e. near \(\cos\ \theta_{b}\sim 1\), the difference between the b-quark and b-jet distributions is striking, as there the b-jet gets contaminated with particles originating from the W boson because of the very large boost of the top quark.
Figure 9: The distributions of \(\cos\ \theta_{\rm b}\) and \(\frac{E_{b}}{E_{t}}\) at LO without parton shower. The mass of the leptoquark has been taken to be \(1300\ {\rm GeV}\).
### Log-likelihood ratio test
In this subsection, we study the prospect of distinguishing the two models if, in the future, a scalar leptoquark of mass 1300 GeV is observed. It will take around 1600 fb\({}^{-1}\) of data for a 5\(\sigma\) discovery. At this mass, for \(\mathcal{L}=3000\) fb\({}^{-1}\), with the optimized cuts chosen by the BDT, the numbers of signal and background events are found to be (102, 145) for the \(S_{3}\) model and (121, 170) for the \(R_{2}\) model 16. For these numbers of events we find the distribution of events with respect to \(\frac{E_{b}}{E_{t}}\), where 'b' means a b-jet and 't' indicates the reconstructed top fatjet. We use the log-likelihood ratio (LLR) hypothesis test for distinguishing the two models17. The likelihood function is given by the product of Poisson distribution functions over all bins. That is, for \(O_{i}\) being the observed data and \(E_{i}\) being the expected data, the likelihood function \(\mathcal{L}\) is given as,
Footnote 16: Multiplying the luminosity by the cross-sections given in Tab. 6 gives these event numbers.
Footnote 17: We have also checked with a \(\chi^{2}\) hypothesis test and obtained similar results.
\[\mathcal{L}(\text{E}|\text{O})=\prod_{i=1}^{n}e^{-E_{i}}E_{i}^{O_{i}}/\Gamma( O_{i}+1)\,. \tag{5.5}\]
The exclusion significance of a model M1, when another model M2 is observed, is given as
\[Z_{M1|M2}=\sqrt{-2ln\frac{\mathcal{L}(\text{M1}|\text{M2})}{ \mathcal{L}(\text{M2}|\text{M2})}}\,. \tag{5.6}\]
Figure 10: The distributions of \(\cos\ \theta_{\text{b}}\) and \(\frac{E_{b}}{E_{t}}\) after Delphes simulation and applying cuts up to C4 mentioned in Subsection. 4.3. The effect of radiation causes significant changes in the distribution compared to truth level results. For \(\cos\ \theta_{\text{b}}\sim 1\) and for z around 0.8 and more, the distributions are strikingly different from the truth level results because of the contamination in the b-jet from W decay products, owing to very large boost of top quark.
We have considered both scenarios, in which either of the models is observed and the other one is the predicted model for which we want to find the exclusion significance. To find the event-number distribution of the predicted model, its signal events are passed through the same BDT model used for the observed model. In Fig. 11, we show the \(\frac{E_{b}}{E_{t}}\) distributions of event numbers for the observed and predicted models at the 14 TeV LHC with 3 ab\({}^{-1}\) of data. For the analysis, we have taken the first 8 bins, starting from the left, of the \(\frac{E_{b}}{E_{t}}\) distribution18, given in Fig. 11. We obtain an exclusion significance (Z) of 0.98 \(\sigma\) when \(S3+B\) is taken as observed at the LHC and \(R2+B\) is considered as the predicted one. For the reverse case, we obtain a Z value of 1.01 \(\sigma\), see Tab. 9. As the exclusion significance is quite low, the two models cannot be distinguished well at the LHC. However, it is tempting to see whether these two models can be distinguished at a 27 TeV (HE-LHC) collider for the same mass of the leptoquark. For this study, we assume that the shapes of the signal and individual background distributions remain the same at the 27 TeV collider as at the 14 TeV collider. We then scale the distributions by overall factors after calculating their total cross-sections at these two center of mass energies. In Fig. 12, we show the exclusion significance vs. the required luminosity at the HE-LHC. We find that with a moderate amount of luminosity (around 1800 fb\({}^{-1}\)) at this collider, either of the models can be excluded at 5\(\sigma\) significance when the other one is observed. In the last column of Tab. 9, we show the exclusion significances for 3 ab\({}^{-1}\) of data at this collider.
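The numbers in Tab. 9 follow from Eqs. 5.5 and 5.6 applied to the binned \(\frac{E_{b}}{E_{t}}\) distributions. The short sketch below reproduces the computational steps with purely illustrative bin contents (the actual bin contents of Fig. 11 are not reproduced here); it shows how the exclusion significance of a predicted model is obtained when the other model is taken as observed.

```python
import numpy as np
from scipy.special import gammaln

def log_likelihood(expected, observed):
    """Logarithm of Eq. 5.5: product of Poisson terms over the bins."""
    E = np.asarray(expected, dtype=float)
    O = np.asarray(observed, dtype=float)
    return np.sum(-E + O * np.log(E) - gammaln(O + 1.0))

def exclusion_significance(predicted, observed):
    """Eq. 5.6: Z for excluding `predicted` when `observed` is the observed model."""
    llr = log_likelihood(predicted, observed) - log_likelihood(observed, observed)
    return np.sqrt(-2.0 * llr)

# illustrative signal+background bin contents in E_b/E_t (NOT the paper's numbers)
S3_plus_B = [40.0, 36.0, 31.0, 27.0, 24.0, 21.0, 18.0, 16.0]
R2_plus_B = [55.0, 44.0, 34.0, 26.0, 20.0, 15.0, 12.0, 9.0]

print("Z(R2+B predicted | S3+B observed) =", round(exclusion_significance(R2_plus_B, S3_plus_B), 2))
print("Z(S3+B predicted | R2+B observed) =", round(exclusion_significance(S3_plus_B, R2_plus_B), 2))
```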
Figure 11: The signal+background event distributions in \(\frac{E_{b}}{E_{t}}\) for the observed and predicted models after applying the optimal BDT cut (given in Tab. 6) with 3000 fb\({}^{-1}\). To find the events for the predicted model, its signal events are passed through the same BDT model used for finding the event numbers of the observed model.
## 6 Conclusions
TeV-scale leptoquarks that can emerge from various models are well-motivated and phenomenologically interesting targets for searches at high-energy collider experiments. The present work investigates the pair production of a third-generation \(\frac{2}{3}e\)-charged scalar leptoquark at the LHC with NLO QCD accuracy, matched to parton shower, for precise probing. Among the different potential scalar leptoquark models, the two of primary interest, \(S_{3}\) and \(R_{2}\), can be probed by looking at their decay into a top quark and a tau neutrino, thus producing a compelling signature of a pair of top-like fatjets along with substantial missing transverse energy. Here the tops, created from heavy leptoquarks, are naturally boosted, and therefore treating them as boosted jets is well justified.
With a precise understanding of jet physics, it is now possible to study the intrinsic substructure and properties of such jets, thereby pointing to the origin of these jets with a high degree of accuracy. Therefore the considered channel has excellent potential for separating the tiny signal from the overwhelming SM background. Parton shower effects are included in our study, and their usefulness in the low transverse momentum region is seen in Fig. 2. We also demonstrate that the factorization and renormalization scale uncertainties of the NLO+PS events are much lower than those of the LO+PS events (see Fig. 4 and Tab. 2).
For accurate prediction, we include all the relevant background processes with two to four extra QCD radiations and normalize them using the available higher-order QCD
Figure 12: The exclusion significance vs. required luminosity at 27 TeV collider by projecting the distributions at 14 TeV collider to 27 TeV collider. The mass of the leptoquark has been taken to be \(M_{LQ}=1300\) GeV.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\mathcal{L}\) & predicted & observed & Rejection Prob. (Z) & Rejection Prob. (Z) \\ & & & (14 TeV) & (27 TeV) \\ \hline \multirow{2}{*}{\(3ab^{-1}\)} & \(R_{2}+B\) & \(S_{3}+B\) & 0.98 \(\sigma\) & 6.45 \(\sigma\) \\ \cline{2-5} & \(S_{3}+B\) & \(R_{2}+B\) & 1.01 \(\sigma\) & 6.59 \(\sigma\) \\ \hline \end{tabular}
\end{table}
Table 9: Probability of excluding one model when the other model is the observed model at the 14 TeV LHC and the 27 TeV HE-LHC with \(\mathcal{L}=3\ ab^{-1}\).
corrected production cross-sections. Different high-level variables, such as MET, \(M_{\text{eff}}\), \(\Delta R(J_{0},J_{1})\), \(\Delta\phi(J_{i},\not{E}_{T})\), and the jet-substructure based pruned jet mass and N-subjettiness, prove efficacious in pinpointing the signal. Multivariate analysis is carried out individually for these two models, and we show that at the 14 TeV LHC with an integrated luminosity of 3000 \(fb^{-1}\), leptoquarks with masses up to 1380 GeV can be discovered (5\(\sigma\)), and masses up to 1520 GeV can be excluded (2\(\sigma\)).
Among the two scalar leptoquark models considered here, it is interesting to note that the top quarks resulting from the decay of the leptoquarks possess different chiralities. Most of the high-level variables utilized in the multivariate analysis are not sensitive to this polarization. Only the jet mass variables acquire a minor effect due to the modified distribution pattern in the decay process. However, these effects are small enough that both models end up with almost equivalent mass constraints.
We further construct different polarization-sensitive variables to distinguish these scalar leptoquark models of the same charge. We exhibit the effectiveness of such variables in terms of (_i_) an angular variable in the top quark's rest frame and (_ii_) the ratio of energy variables \(\frac{E_{b}}{E_{t}}\). Such effects are demonstrated at the truth level and after including parton shower and (fat)jet formation (see Figs. 9, 10). Significant distortion is noticeable after detector simulation and (fat)jet formation. This is primarily attributed to the contamination and poor measurement efficiency of the b-jet momenta within a highly collimated top-like fatjet. The log-likelihood-ratio (LLR) hypothesis test is used to distinguish the models in the presence of the combined background events. We find that the statistical exclusion significance remains low, at around the 1\(\sigma\) confidence level, at the LHC. However, it is shown that a 27 TeV collider can play a promising role, and it is estimated that the required luminosity would be around 300 \(fb^{-1}\) (1800 \(fb^{-1}\)) to distinguish these two models with 2\(\sigma\) (5\(\sigma\)) significance.
## Acknowledgements
We thank Rinku Maji and Saurabh K. Shukla for fruitful discussions. Computational works are performed using the HPC resources (Vikram-100 HPC) and the TDP project at PRL.
## Appendix A Appendix
### A.1 Distribution of daughter of top quark
Here we derive the differential distribution of the decay width of a right-handed top quark in its rest frame. The distribution for a left-handed top can be obtained similarly.
The matrix element can be written as
\[M=\bar{u}_{b}(p_{1})\frac{ig}{\sqrt{2}}\gamma_{\mu}P_{L}u_{t}(m_{t})\epsilon^{\mu *}_{W^{+}}(p_{2}) \tag{A.1}\]
So, \(|M|^{2}\) can be written as
\[|M|^{2} =\frac{g^{2}}{2}\bar{u}_{b}(p_{1})\gamma_{\mu}P_{L}u_{t}(m_{t}) \bar{u}_{t}(m_{t})P_{R}\gamma_{\nu}u_{b}(p_{1})\epsilon^{\mu*}_{W^{+}}(p_{2}) \epsilon^{\nu}_{W^{+}}(p_{2})\] \[\sum_{final\ spins}|M|^{2} =-\frac{g^{2}}{2}Tr[\gamma_{\mu}P_{L}u_{t}(m_{t})\bar{u}_{t}(m_{t} )P_{R}\gamma_{\nu}(\not{p}_{1}+m_{b})](g^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{M_{W}^{2}}) \tag{A.2}\]
In the Weyl basis, \(\gamma^{\mu}=\begin{pmatrix}0&\sigma^{\mu}\\ \bar{\sigma}^{\mu}&0\end{pmatrix}\) and \(\gamma^{5}=\begin{pmatrix}-I&0\\ 0&I\end{pmatrix}\), where \(\sigma^{\mu}=(I,\mathbf{\sigma})\) and \(\bar{\sigma}^{\mu}=(I,-\mathbf{\sigma})\). The spinor in the rest frame is given by
\[u_{t}^{s}(m_{t})=\sqrt{m_{t}}\begin{pmatrix}\xi^{s}\\ \xi^{s}\end{pmatrix}\]
Using the above expressions, Eq. A.2 can be written as
\[\sum_{final\ spins}|M|^{2} =-\frac{g^{2}m_{t}}{2}Tr[\begin{pmatrix}0&0\\ \bar{\sigma}_{\mu}&0\end{pmatrix}\begin{pmatrix}\xi^{s}\xi^{s\dagger}&\xi^{s} \xi^{s\dagger}\\ \xi^{s}\xi^{s\dagger}&\xi^{s}\xi^{s\dagger}\end{pmatrix}\begin{pmatrix}0&0\\ \bar{\sigma}_{\nu}&0\end{pmatrix}p_{1}^{\mu}](g^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{ M_{W}^{2}})\] \[=-\frac{g^{2}m_{t}}{2}Tr[\bar{\sigma}^{\mu}\xi^{s}\xi^{s\dagger} \bar{\sigma}_{\mu}\sigma\cdot p_{1}-\frac{\bar{\sigma}\cdot p_{2}\xi^{s}\xi^{ s\dagger}\bar{\sigma}\cdot p_{2}\sigma\cdot p_{1}}{M_{W}^{2}}] \tag{A.3}\]
For the spin-up top, \(\xi^{s}\xi^{s\dagger}\) can be written as \(\frac{I+\sigma^{3}}{2}\). So Eq. A.3 can be written as
\[\sum_{final\ spins}|M|^{2}=-\frac{g^{2}m_{t}}{4}Tr[\bar{\sigma}^{\mu}(I+ \sigma^{3})\bar{\sigma}_{\mu}\sigma\cdot p_{1}-\frac{\bar{\sigma}\cdot p_{2}( I+\sigma^{3})\bar{\sigma}\cdot p_{2}\sigma\cdot p_{1}}{M_{W}^{2}}] \tag{A.4}\]
Using \(\sigma^{i}\sigma^{j}+\sigma^{j}\sigma^{i}=2\delta^{ij}I\), the following can be proven:
\[\bar{\sigma}^{\mu}\bar{\sigma}_{\mu} =-2I \tag{A.5}\] \[\bar{\sigma}^{\mu}\sigma^{3}\bar{\sigma}_{\mu} =2\sigma^{3}\] (A.6) \[\bar{\sigma}\cdot p_{2}\bar{\sigma}\cdot p_{2} =(p_{2}^{0})^{2}I+\vec{p_{2}}^{2}I+2\sigma^{i}p_{2}^{i}p_{2}^{0}\] (A.7) \[\bar{\sigma}\cdot p_{2}\sigma^{3}\bar{\sigma}\cdot p_{2} =(p_{2}^{0})^{2}\sigma^{3}+2p_{2}^{3}p_{2}^{0}I-\sigma^{3}\vec{p_{ 2}}^{2}+2\sigma^{i}p_{2}^{3}p_{2}^{i} \tag{A.8}\]
Figure 13: The Feynman diagram for top decay.
Again the following relations can be proven easily
\[Tr(\sigma^{i}\sigma^{j}) =2\delta^{ij}\] (A.9) \[Tr(\sigma^{i}) =0\] (A.10) \[Tr(\sigma\cdot a) =2a^{0}\] (A.11)
The different parts of Eq. A.4 can be obtained using Eq. A.5-Eq. A.11. After using them, we have
\[Tr[\bar{\sigma}^{\mu}\bar{\sigma}_{\mu}\sigma\cdot p_{1}] =-4p_{1}^{0}\] (A.12) \[Tr[\bar{\sigma}^{\mu}\sigma^{3}\bar{\sigma}_{\mu}\sigma\cdot p_ {1}] =-4p_{1}^{3}\] (A.13) \[Tr[\bar{\sigma}\cdot p_{2}\bar{\sigma}\cdot p_{2}\sigma\cdot p_ {1}] =2p_{1}^{0}(p_{2}^{0})^{2}+2p_{1}^{0}\vec{p_{2}}^{2}-4\vec{p_{1}} \cdot\vec{p_{2}}p_{2}^{0}\] (A.14) \[Tr[\bar{\sigma}\cdot p_{2}\sigma^{3}\bar{\sigma}\cdot p_{2} \sigma\cdot p_{1}] =-2(p_{2}^{0})^{2}p_{1}^{3}+4p_{2}^{3}p_{2}^{0}p_{1}^{0}+2p_{1}^{3} \vec{p_{2}}^{2}-4p_{2}^{3}\vec{p_{1}}\cdot\vec{p_{2}}\] (A.15)
Using energy conservation in the top rest frame,
\[m_{t} =|\vec{p}_{1}|+\sqrt{\vec{p}_{1}^{2}+m_{W}^{2}}\] \[|\vec{p}_{1}| =\frac{m_{t}^{2}-m_{W}^{2}}{2m_{t}}\] (A.16)
Eq. A.14 can be written as
\[2p_{1}^{0}(p_{2}^{0})^{2}+2p_{1}^{0}\vec{p_{2}}^{2}-4\vec{p_{1}} \cdot\vec{p_{2}}p_{2}^{0}\] \[= 2p_{1}^{0}(m_{W}^{2}+2\vec{p_{1}}^{2})+4\vec{p_{1}}\cdot\vec{p_{ 1}}(m_{t}-p_{1}^{0})\] \[= 2|\vec{p}_{1}|(m_{W}^{2}+2|\vec{p}_{1}|m_{t})=2|\vec{p}_{1}|m_{t} ^{2}\] (A.17)
Eq. A.15 can be written as
\[-2(p_{2}^{0})^{2}p_{1}^{3}+4p_{2}^{3}p_{2}^{0}p_{1}^{0}+2p_{1}^{3 }\vec{p_{2}}^{2}-4p_{2}^{3}\vec{p_{1}}\cdot\vec{p_{2}}\] \[= -2p_{1}^{3}((p_{2}^{0})^{2}-\vec{p_{2}}^{2})+4p_{2}^{3}(p_{2}^{0 }p_{1}^{0}-\vec{p_{1}}\cdot\vec{p_{2}})\] \[= -2p_{1}^{3}(m_{W}^{2})-2p_{1}^{3}(m_{t}^{2}-m_{W}^{2})=-2p_{1}^{3 }m_{t}^{2}\] (A.18)
Using the above formulae in Eq. A.4, we have
\[\sum_{final~{}spins}|M|^{2} =-\frac{g^{2}m_{t}}{4}[-4p_{1}^{0}-4p_{1}^{3}-\frac{2|\vec{p}_{1 }|m_{t}^{2}-2p_{1}^{3}m_{t}^{2}}{M_{W}^{2}}]\] \[=g^{2}m_{t}|\vec{p}_{1}|[(1+\frac{m_{t}^{2}}{2M_{W}^{2}})+(1-\frac{m_{t}^{2}}{ 2M_{W}^{2}})\cos\theta_{b}]\] \[=g^{2}m_{t}|\vec{p}_{1}|(1+\frac{m_{t}^{2}}{2M_{W}^{2}})[1+\frac{2M_{W}^{2}-m_ {t}^{2}}{2M_{W}^{2}+m_{t}^{2}}\cos\theta_{b}]\] \[=g^{2}m_{t}|\vec{p}_{1}|(1+\frac{m_{t}^{2}}{2M_{W}^{2}})[1+k_{b}~{}\cos\theta_ {b}],\]
where \(k_{b}=\frac{2M_{W}^{2}-m_{t}^{2}}{2M_{W}^{2}+m_{t}^{2}}=-0.4\) is the spin analyzing power of the b-quark.
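Inserting representative mass values (assumed here: \(m_{t}=172.5\) GeV, \(M_{W}=80.4\) GeV, not quoted in the text) gives a quick numerical confirmation of the quoted value:

```python
mt, mW = 172.5, 80.4                                # assumed masses in GeV
kb = (2.0 * mW**2 - mt**2) / (2.0 * mW**2 + mt**2)  # spin analyzing power of the b-quark
p_b = (mt**2 - mW**2) / (2.0 * mt)                  # |p_1| in the top rest frame, Eq. A.16
print(f"k_b = {kb:.3f} (approximately -0.4), |p_b| = {p_b:.1f} GeV")
```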
The differential distribution of the decay width for the right-handed top quark is given by
\[\frac{d\Gamma}{d\cos\theta_{b}} =\frac{1}{2m_{t}^{2}}\frac{|\vec{p}_{1}|}{8\pi}\sum_{final~{}spins} |M|^{2}\] \[=\frac{1}{2m_{t}^{2}}\frac{1}{8\pi}\Big{(}\frac{m_{t}^{2}-m_{W}^{2}}{2m_{t}}\Big{)}^{2}g^{2}m_{t}(1+\frac{m_{t}^{2}}{2M_{W}^{2}})[1+k_{b}~{}\cos\theta_{b}]\] \[=\frac{g^{2}}{128\pi}\frac{(m_{t}^{2}-m_{W}^{2})^{2}(2M_{W}^{2}+m_{t}^ {2})}{m_{t}^{3}m_{W}^{2}}[1+k_{b}~{}\cos\theta_{b}]\]
For the spin-down top, \(\xi^{s}\xi^{s\dagger}\) can be written as \(\frac{I-\sigma^{3}}{2}\). It easily follows that in the above expression there will be a minus sign in front of \(k_{b}\) in this case.
### A.2 Relation between \(\cos~{}\theta_{\rm b}^{\prime}\) and z
In the following, quantities in the lab frame will be denoted by unprimed symbols, whereas in the top rest frame they will be denoted by primed symbols19. In the rest frame of the top quark, the angle between the bottom quark's direction of motion and the boost direction of the top quark is given by
Footnote 19: Note in the main text, we did not use any prime for the angle in the rest frame. So the \(\cos~{}\theta_{\rm b}^{\prime}\) here is same as \(\cos~{}\theta_{\rm b}\) in the main text.
\[\cos~{}\theta_{\rm b}^{\prime}=\frac{p_{b}^{z^{\prime}}}{|\vec{p}_{b}^{\prime }|},\] (A.19)
where z and \(z^{\prime}\) axes are along the direction of motion of the top quark in the lab frame.
Using Lorentz transformation between two frames with \(\beta=\frac{|\vec{p}_{t}|}{E_{t}}\) and \(\gamma=\frac{1}{\sqrt{1-\beta^{2}}}=\frac{E_{t}}{m_{t}}\)
\[p_{b}^{z^{\prime}}=-\gamma\beta E_{b}+\gamma p_{b}^{z}\] (A.20)
Using energy conservation in the lab frame,
\[E_{t} =E_{b}+\sqrt{(\vec{p}_{t}-\vec{p}_{b})^{2}+m_{W}^{2}}\] \[p_{b}^{z} =-\frac{m_{t}^{2}+m_{b}^{2}-m_{W}^{2}-2E_{t}E_{b}}{2|\vec{p}_{t}|}\] \[=\frac{E_{b}}{\beta}-\frac{m_{t}^{2}+m_{b}^{2}-m_{W}^{2}}{2\beta \gamma m_{t}}\] (A.21)
Using Eq. A.21 in Eq. A.20, we have
\[p_{b}^{z^{\prime}} =-\gamma\beta E_{b}+\gamma\frac{E_{b}}{\beta}-\frac{m_{t}^{2}+m_{b }^{2}-m_{W}^{2}}{2\beta m_{t}}\] \[=\frac{zE_{t}}{\gamma\beta}-\frac{m_{t}^{2}+m_{b}^{2}-m_{W}^{2}}{2 \beta m_{t}}\] \[=\frac{zm_{t}}{\beta}-\frac{m_{t}^{2}+m_{b}^{2}-m_{W}^{2}}{2\beta m _{t}}\]
Assuming \(m_{b}=0\),
\[p_{b}^{z^{\prime}}=\frac{1}{\beta}(zm_{t}-\frac{m_{t}^{2}-m_{W}^{2}}{2m_{t}}) \tag{A.22}\]
Using energy conservation in the top rest frame,
\[m_{t} =E_{b}^{\prime}+\sqrt{{\vec{p}_{b}}^{\prime}{}^{2}+m_{W}^{2}}\] \[|{\vec{p}_{b}}^{\prime}| =\frac{m_{t}^{2}-m_{W}^{2}}{2m_{t}} \tag{A.23}\]
Using Eq. 112 and Eq. 113, in Eq. 114, we have
\[\cos\theta_{\rm b}^{\prime}=\frac{1}{\beta}(\frac{2m_{t}^{2}}{m_{t}^{2}-m_{W}^{ 2}}z-1) \tag{A.24}\]
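The relation (A.24), i.e. Eq. 5.3, can also be checked directly by boosting an explicit rest-frame configuration to the lab frame. The sketch below (with assumed numerical values for the masses and the top energy, and a massless b quark) confirms that the reconstructed \(\cos\theta_{\rm b}^{\prime}\) agrees with the input rest-frame angle.

```python
import numpy as np

mt, mW, Et = 172.5, 80.4, 650.0            # assumed masses and lab-frame top energy in GeV
gamma = Et / mt
beta = np.sqrt(1.0 - 1.0 / gamma**2)

p_rest = (mt**2 - mW**2) / (2.0 * mt)      # |p_b'| in the top rest frame, Eq. A.23

for cos_tp in (-0.9, -0.3, 0.2, 0.8):      # rest-frame decay angles to test
    Eb_rest = p_rest                        # massless b quark: E_b' = |p_b'|
    pz_rest = p_rest * cos_tp
    Eb_lab = gamma * (Eb_rest + beta * pz_rest)   # boost along the top direction
    z = Eb_lab / Et
    cos_reco = (2.0 * mt**2 / (mt**2 - mW**2) * z - 1.0) / beta   # Eq. A.24
    assert np.isclose(cos_reco, cos_tp)

print("Eq. A.24 reproduces the rest-frame angle for all tested configurations")
```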
|
2305.04110 | Uniqueness of some space dependent coefficients in a wave equation of
nonlinear acoustics | In this paper we prove uniqueness for some parameter identification problems
for the JMGT equation, a third order in time quasilinear PDE in nonlinear
acoustics. The coefficients to be recovered are the space dependent
nonlinearity parameter, sound speed, and attenuation parameter, and the
observation available is a single time trace of the acoustic pressure on the
boundary. This is a setting relevant to several ultrasound based tomography
methods. Our approach relies on the Inverse Function Theorem, which requires to
prove that the forward operator is a differentiable isomorphism in
appropriately chosen topologies and with an appropriate choice of the
excitation. | Barbara Kaltenbacher | 2023-05-06T18:22:04Z | http://arxiv.org/abs/2305.04110v1 | # Uniqueness of some space dependent coefficients in a wave equation of nonlinear acoustics
###### Abstract
In this paper we prove uniqueness for some parameter identification problems for the JMGT equation, a third order in time quasilinear PDE in nonlinear acoustics. The coefficients to be recovered are the space dependent nonlinearity parameter, sound speed, and attenuation parameter, and the observation available is a single time trace of the acoustic pressure on the boundary. This is a setting relevant to several ultrasound based tomography methods. Our approach relies on the Inverse Function Theorem, which requires to prove that the forward operator is a differentiable isomorphism in appropriately chosen topologies and with an appropriate choice of the excitation.
Key words and phrases:nonlinearity parameter tomography, JMGT equation, nonlinear acoustics 2010 Mathematics Subject Classification: Primary: 35R30; Secondary: 35B30, 35Q99 This work was supported by the Austrian Science Fund FWF [http://dx.doi.org/10.13039/501100002428](http://dx.doi.org/10.13039/501100002428) under the grant P36318.
of the wave energy, though. We therefore employ weak damping in terms of the auxiliary quantity \(u^{\tau}\)
\[\tilde{D}[u^{\tau}]=b_{0}u_{t}^{\tau},\text{ that is, }D[u]=b_{0}(\tau u_{t}+u)_{t} \tag{3}\]
with \(b_{0}\geq 0\) in this paper, while alternative (e.g., fractional) damping terms [23] as relevant in ultrasonics might be studied in follow-up work.
This PDE contains several coefficients that are specific to the type of tissue. They thus have to be regarded as function of space and therefore provide a means of imaging. For example, in ultrasound tomography [2, 11, 12, 15, 28, 31], the sound speed \(c_{0}=c_{0}(x)\) is the imaged quantity; in nonlinearity parameter tomography, it is \(\kappa=\kappa(x)\)[4, 7, 8, 14, 34, 39, 42, 43]. Also the attenuation coefficient \(b_{0}=b_{0}(x)\) is known to contain tissue specific information [9, 27, 36]. Mathematically, recovery of these coefficients from the typical observations available in this context, namely measurements of the acoustic pressure at some receiver array outside the body, leads to coefficient identification problems in a PDE of the type (1) from boundary observations of the acoustic pressure \(u\).
The pressure data \(h\) taken at the receivers can be written as a Dirichlet trace on some manifold \(\Sigma\) immersed in the domain \(\Omega\) or attached to its boundary, \(\Sigma\subseteq\overline{\Omega}\),
\[h(t,x)=u(t,x),\quad(t,x)\in(0,T)\times\Sigma. \tag{4}\]
\(\Sigma\) models the transducer or hydrophone array and may as well just be a subset of discrete points on a manifold.
We assume (1) to hold in a smooth and bounded domain \(\Omega\subseteq\mathbb{R}^{d}\), \(d\in\{1,2,3\}\) and equip it with initial conditions \(u(t=0)=u_{0}\), \(u_{t}(t=0)=u_{1}\), \(u_{tt}(t=0)=u_{2}\), as well as homogeneous impedance boundary conditions
\[\gamma_{0}u+\gamma_{1}\partial_{\nu}u=0\text{ on }(0,T)\times\partial\Omega \tag{5}\]
with \(\gamma_{0},\gamma_{1}\in L^{\infty}(\partial\Omega)\), \(\gamma_{0},\gamma_{1}\geq 0\), \(\gamma_{0}+\gamma_{1}\geq\underline{\gamma}>0\); the case of vanishing \(\gamma_{1}\) or \(\gamma_{0}\) represents Dirichlet or Neumann conditions, respectively. The space- and time-dependent source term \(r\) in (1) models excitation, for example by a piezoelectric transducer array.
We refer to [1, 22, 23, 40] for results related to the identification of the nonlinearity coefficient \(\kappa\) alone in a classical model of nonlinear acoustics, the Westervelt equation (basically (1) with vanishing relaxation time \(\tau=0\)). In [1] its uniqueness from the whole Neumann-Dirichlet map (instead of the single measurement (4)) is shown; [40] provides a uniqueness and conditional stability result for the linearized problem of identifying \(\kappa\) in a higher order model of nonlinear acoustics in place of the Westervelt equation. In [22, 23] we have proven injectivity of the linearized forward operator mapping \(\kappa\) to \(h\) in the Westervelt equation with classical strong damping and also with some fractional damping models as relevant in ultrasonics.
A proof of injectivity of the linearized forward operator for the simultaneous recovery of \(\kappa(x)\) and \(c_{0}(x)\) in the Westervelt equation in dimension \(d\in\{1,2,3\}\) from measurements with two excitations can be found in [24], where it enables to apply a frozen Newton method and to show its convergence. For reconstruction results in 1-d we refer to [24] and in 2-d (based on a multiharmonic expansion for the Westervelt equation) to [26].
A similar linearized uniqueness proof for JMGT in place of Westervelt will serve as a basis for proving local uniqueness of \(\kappa\) by means of the Inverse Function Theorem here. A considerable part of this paper is devoted to establishing the required well-definedness and differentiability of the forward operator. Beyond this, the aim
of this paper is to study simultaneous identification of \(\kappa\) and \(c_{0}\) or \(b_{0}\) as space variable functions. Indeed, we will show that \(\kappa(x)\) is locally unique even for unknown \(c_{0}(x)\) and also provide results on simultaneous identifiability of \(\kappa(x)\) and \(c_{0}(x)\) or of \(\kappa(x)\) and \(b_{0}(x)\).
### The inverse problem
The pointwise observation setting (4) is an idealized one and will be extended to a more general observation operator \(\mathcal{C}_{\Sigma}\), allowing generalization to, e.g., locally averaging observations, cf. (7). Also the PDE model will be adapted to take into account the fact that for sufficiently small pressure amplitudes, nonlinearity can be neglected, cf. (6).
Therewith we consider identification of the space dependent coefficients \(\kappa(x)\), \(c_{0}(x)\), \(b_{0}(x)\), in the attenuated and switched JMGT equation in pressure form
\[\begin{split}\tau u_{ttt}+\big{(}(1-2\kappa\sigma u)u_{t}\big{)}_ {t}+c^{2}\mathcal{A}_{c}u+\tau c^{2}\mathcal{A}_{c}u_{t}+b_{0}(\tau u_{t}+u)_{ t}&=r\quad t>0\\ u(0)=u_{0},\quad u_{t}(0)=u_{1},\quad u_{tt}(0)&= u_{2},\end{split} \tag{6}\]
from observations
\[h(t)=\mathcal{C}_{\Sigma}u(t),\quad t\in(0,T). \tag{7}\]
with some linear and bounded observation operator \(\mathcal{C}_{\Sigma}:V\mapsto Y\), mapping from a space \(V\) of \(x\)-dependent functions to some data space \(Y\). Typical examples are
\[\begin{split}\mathcal{C}_{\Sigma}:H^{s}(\Omega)\to L^{2}( \Sigma)& v\mapsto\operatorname{tr}_{\Sigma}v\\ \mathcal{C}_{\Sigma}:C(\overline{\Omega})\to\mathbb{R}^{N}& v\mapsto\Big{(}v(x_{i})\Big{)}_{i\in\{1,\dots,N\}}\\ \mathcal{C}_{\Sigma}:L^{2}(\Omega)\to\mathbb{R}^{N}& v\mapsto\Big{(}\int_{\Omega}\eta_{i}(x)v(x)\,dx\Big{)}_{i\in\{1,\dots,N\}}\end{split}\]
with the trace operator \(\operatorname{tr}_{\Sigma}:H^{s}(\Omega)\to L^{2}(\Sigma)\), \(s>\frac{1}{2}\), measurement locations \(x_{i}\in\overline{\Omega}\) and \(\eta_{i}\in L^{2}(\Omega)\) locally supported around \(x_{i}\). In each of these examples, applicability of \(\mathcal{C}_{\Sigma}\) to \(u\) sets different spatial regularity requirements on the PDE solution.
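To make these examples concrete, the following sketch (generic illustrative code with assumed data, not taken from the paper) applies the point-evaluation and locally averaging variants of \(\mathcal{C}_{\Sigma}\) to a pressure snapshot sampled on a one-dimensional grid; the boundary-trace variant would simply read off the values on \(\Sigma\).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)                 # grid on Omega = (0, 1)
dx = x[1] - x[0]
u = np.sin(2.0 * np.pi * x) * np.exp(-x)       # an assumed pressure snapshot u(t, .)

receivers = np.array([0.25, 0.5, 0.75])        # measurement locations x_i

# point evaluations: (C_Sigma u)_i = u(x_i)
C_point = np.interp(receivers, x, u)

# local averages: (C_Sigma u)_i = int eta_i(x) u(x) dx with hat weights around x_i
def eta(xi, width=0.05):
    w = np.maximum(0.0, 1.0 - np.abs(x - xi) / width)
    return w / (np.sum(w) * dx)                # normalize so that int eta_i dx = 1

C_avg = np.array([np.sum(eta(xi) * u) * dx for xi in receivers])

print("point observations   :", np.round(C_point, 4))
print("averaged observations:", np.round(C_avg, 4))
```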
In equation (6), \(c>0\) is the constant mean wave speed, and we make use of the combined elliptic operator
\[\mathcal{A}_{c}=-\frac{c_{0}(x)^{2}}{c^{2}}\Delta\ \ \text{with}\ \ c_{0},\ \frac{1}{c_{0}}\ \in L^{\infty}(\Omega) \tag{8}\]
that contains the possibly spatially varying coefficient \(c_{0}(x)>0\) and is equipped with the boundary conditions (5). To achieve self-adjointness of \(\mathcal{A}_{c}\), we use the weighted \(L^{2}\) inner product with weight function \(c^{2}/c_{0}^{2}(x)\), that is, \(\|v\|^{2}_{L^{2}_{c^{2}/c_{0}^{2}}(\Omega)}=\int_{\Omega}\frac{c^{2}}{c_{0}^{2}(x) }\,v^{2}(x)\,dx\). We can think of \(c_{0}\) as being spatially variable and of \(\frac{c_{0}}{c}\sim 1\) as being normalized, while the magnitude of the wave speed is given by the constant \(c\). Due to the compactness of its inverse, the operator \(\mathcal{A}_{c}\) has an eigensystem \((\lambda_{j},(\varphi_{j}^{k})_{k\in K^{\lambda_{j}}})_{j\in\mathbb{N}}\) (where \(K^{\lambda_{j}}\) is an enumeration of the eigenspace corresponding to \(\lambda_{j}\)). This allows for a diagonalization of the operator as \((\mathcal{A}_{c}v)(x)=\sum_{j=1}^{\infty}\lambda_{j}\sum_{k\in K^{\lambda_{j}} }\langle v,\varphi_{j}^{k}\rangle\varphi_{j}^{k}(x)\).
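For intuition, the weighted eigendecomposition of \(\mathcal{A}_{c}\) can be sketched in one space dimension with a finite difference discretization. The code below (an illustration with an assumed sound speed profile, not part of the paper) computes eigenpairs of \(-\frac{c_{0}^{2}}{c^{2}}\partial_{xx}\) with Dirichlet boundary conditions as a generalized symmetric eigenvalue problem, so that the eigenvectors are orthonormal with respect to the discrete \(c^{2}/c_{0}^{2}\)-weighted inner product.

```python
import numpy as np
from scipy.linalg import eigh

n, L, c = 200, 1.0, 1.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)                   # interior grid points (Dirichlet BCs)
c0 = 1.0 + 0.2 * np.sin(np.pi * x)             # assumed spatially varying sound speed

# -d^2/dx^2 as a symmetric tridiagonal matrix
Lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
W = np.diag(c**2 / c0**2)                      # weight of the L^2_{c^2/c_0^2} inner product

# A_c v = lambda v   <=>   Lap v = lambda W v   (generalized symmetric problem)
lam, V = eigh(Lap, W)
print("smallest eigenvalues of A_c:", np.round(lam[:4], 2))
# eigh returns W-orthonormal eigenvectors: V^T W V = I (discrete weighted orthonormality)
print("V^T W V = I:", np.allclose(V.T @ W @ V, np.eye(n)))
```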
**Remark 1**.: Taking into account spatial variability of the mass density, (1) would need to be written as \(\frac{\tau}{\lambda(x)}u_{ttt}+\frac{1}{\lambda(x)}u_{tt}-\nabla\cdot(\frac{1 }{\varrho(x)}\nabla u)-\tau\nabla\cdot(\frac{1}{\varrho(x)}\nabla u_{t})+D[u]= \kappa(u^{2})_{tt}+r\) with \(u\) being the pressure, \(\lambda\) the bulk modulus, \(\varrho\) the mass density, and \(c_{0}=\sqrt{\lambda/\varrho}\) the sound speed (cf., e.g., [3, 28] for the linear case). Usually the \(x\)-dependence of \(\varrho\) is not taken into account; note that our general setting with a spatial differential operator \(\mathcal{A}_{c}\) would be able to cover this dependency as well.
While \(\kappa\), \(c_{0}\), \(b_{0}\), are positive, possibly space dependent coefficients, the relaxation time \(\tau\geq 0\) is constant and the switching factor \(\sigma\in[0,1]\) depends on the magnitude of the pressure and in particular vanishes for small \(u\), see (12) for its precise definition.
It was shown in, e.g., [22, 23], that essential information on the space-dependent coefficients is contained in the residues at the poles of the Laplace transformed observations \(\widehat{h}\). This, in its turn, via the elementary identity
\[\begin{split}\operatorname{Res}(\widehat{h},p)&= \lim_{z\to p}(z-p)\widehat{h}(z)=\lim_{z\to p}\widehat{h^{\prime}-ph}(z)+h(0) \\ &=\lim_{T\to\infty}\lim_{z\to p}\int_{0}^{T}e^{-zt}(h^{\prime}(t) -ph(t))\,dt+h(0)\\ &=\lim_{T\to\infty}\int_{0}^{T}\frac{d}{dt}\Big{(}e^{-pt}h(t) \Big{)}\,dt+h(0)=\lim_{T\to\infty}e^{-pT}h(T),\end{split} \tag{9}\]
(provided absolute convergence holds, which allows us to interchange the integrals), leads us to study the asymptotics of solutions as time tends to infinity.
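A minimal numerical illustration of (9): for \(h(t)=Ae^{pt}+Be^{qt}\) with \(\operatorname{Re}q<\operatorname{Re}p<0\), the Laplace transform \(\widehat{h}(z)=\frac{A}{z-p}+\frac{B}{z-q}\) has residue \(A\) at the pole \(p\), and \(e^{-pT}h(T)\to A\) as \(T\to\infty\). The values of \(p\), \(q\), \(A\), \(B\) below are assumed for the sake of the example.

```python
import numpy as np

p = -0.5 + 2.0j          # dominant pole, Re p < 0
q = -1.5 + 1.0j          # subdominant pole, Re q < Re p
A, B = 1.3 - 0.4j, 0.7 + 0.2j

def h(t):
    return A * np.exp(p * t) + B * np.exp(q * t)

for T in (5.0, 10.0, 20.0):
    approx = np.exp(-p * T) * h(T)          # right-hand side of (9)
    print("T =", T, ":  e^(-pT) h(T) =", np.round(approx, 8), "  (residue A =", A, ")")
```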
Correspondingly, in the uniqueness proofs of Section 3 we will make use of the forward operator \(\mathbb{F}\) that takes \(\kappa\) to the residues of the Laplace transform \(\widehat{\mathcal{C}_{\Sigma}u}\) at the poles of the Laplace transformed observation \(\widehat{h}\). In order to be able to take Laplace transforms, throughout Section 3 we will assume to have observations on the whole positive timeline, which in case of \(r\) being analytic with respect to time (for example, just vanishing) from a time instance \(\underline{T}\) on, follows by analytic continuation of the Fourier components of \(u\), which will be shown to satisfy linear ODEs from a time instance \(T_{*}\) on in Section 3.
Continuous Frechet differentiability of the forward operator is an essential ingredient of the Inverse Function Theorem, which we plan to employ in our uniqueness proof. Concerning well-posedness of the JMGT equation we can largely rely on existing results in the literature, see, e.g., [5, 6, 17, 18, 21, 29, 32, 33, 35, 37], in particular [19] for the undamped case, which is relevant for the weak (rather than strong) damping we are employing here. Indeed, a proof of Frechet differentiability would fail for the Westervelt equation in the absence of strong damping, due to the loss of regularity arising in that case, see, e.g., [10, 20]. While well-posedness in the above cited references is inherently local in time, the switching allows us to prove global in time well-posedness of (13) in Section 2, which is essential for the evaluation of residues according to (9).
Section 3 then contains several uniqueness results that rely on the previously shown differentiability and a proof of the linearized forward operator being an isomorphism in appropriately chosen topologies:
* local uniqueness of \(\kappa\);
* linearized uniqueness of \(\kappa\) and \(c_{0}\);
* linearized uniqueness of \(\kappa\) and \(b_{0}\);
from the single boundary observation (7).
### Notation
Below we will abbreviate \(H^{2}_{\diamondsuit}(\Omega)=H^{1}_{0}(\Omega)\cap H^{2}(\Omega)\) and make use of the spaces \(\dot{H}^{s}(\Omega)\) induced by the norm
\[\|v\|_{\dot{H}^{s}(\Omega)}=\Big{(}\sum_{j=1}^{\infty}\lambda_{j}^{s}\sum_{k\in K^{ \lambda_{j}}}|\langle v,\varphi_{j}^{k}\rangle|^{2}\Big{)}^{1/2} \tag{10}\]
with the eigensystem \((\lambda_{j},\varphi_{j})\) of the operator \(\mathcal{A}_{c}\).
Moreover, the Bochner-Sobolev spaces \(L^{p}(0,T;Z)\), \(H^{q}(0,T;Z)\) with \(Z\) some Lebesgue or Sobolev space and \(T\) a finite or infinite time horizon will be used. By \(L^{p}(X)\), \(W^{s,p}(X)\) we abbreviate the Bochner spaces over the whole positive real line \(L^{p}(0,\infty;X)\), \(W^{s,p}(0,\infty;X)\).
\(\mathcal{B}_{\rho}(x_{0})^{X}=\{x\in X\,:\,\|x\|_{X}\leq\rho\}\) denotes the closed ball with radius \(\rho>0\) and center \(x_{0}\) in the normed space \(X\).
We denote the Laplace transform of a function \(v\in L^{1}(0,\infty)\) by \(\widehat{v}(z)=\int_{0}^{\infty}e^{-zt}v(t)\,dt\) for all \(z\in\mathbb{C}\) such that this integral exists.
The ordinary \(L^{2}(\Omega)\) norm and inner product will be denoted by \(|\cdot|_{L^{2}(\Omega)}\) and \((\cdot,\cdot)_{L^{2}(\Omega)}\), while the weighted \(L^{2}\)-inner product with weight \(\frac{c^{2}}{c_{0}^{2}}\) (or more generally, the inner product in which \(\mathcal{A}_{c}\) is symmetric) will be abbreviated by \(\langle\cdot,\cdot\rangle\).
Generic constants will be denoted by \(C\) (possibly indicating in parentheses dependence on certain quantities). Some specific constants are \(C_{\mathrm{PF}}\) as appearing in the Poincare-Friedrichs inequality \(|v|_{L^{2}(\Omega)}\leq C_{\mathrm{PF}}|\nabla v|_{L^{2}(\Omega)}\), \(v\in H^{1}_{0}(\Omega)\), and the norm \(C^{\Omega}_{H^{s},L^{p}}\) of the embedding \(H^{s}(\Omega)\to L^{p}(\Omega)\).
### The forward problem
In this section we analyze the forward problem: first of all, in Section 2.1, the parameter-to-state map \(S:(\kappa,c_{0}^{2},b_{0})\mapsto u\); then, in Section 2.2 the total forward operator, which we define by mapping the state \(u\) obtained from \(S\) into the observations (7) and then further into the residues at the poles of \(\widehat{h}\). The latter, along with an appropriately defined topology, will be the right setting for proving uniqueness of \(\kappa\) by means of the Inverse Function Theorem.
### Well-definedness and differentiability of the parameter-to-state map
With a smooth cutoff function \(\chi:[0,\infty)\to[0,1]\) such that for some thresholds \(0<\underline{m}<\overline{m}<\infty\)
\[\chi([0,\underline{m}])=\{0\},\quad\chi([\bar{m},\infty))=\{1\},\quad\chi\in W ^{1,\infty}(0,\infty),\quad\chi^{\prime}\geq 0\text{ a.e.} \tag{11}\]
and
\[\sigma(t)=\chi(|u^{\tau}(t)|^{2}_{L^{2}(\Omega)}), \tag{12}\]
we consider (6), that is
\[\begin{split}\tau u_{ttt}+\big{(}(1-2\kappa\sigma u)u_{t}\big{)} _{t}+c^{2}\mathcal{A}_{c}u+\tau c^{2}\mathcal{A}_{c}u_{t}+b_{0}(\tau u_{t}+u)_ {t}&=r\quad t>0\\ u(0)=u_{0},\quad u_{t}(0)=u_{1},\quad u_{tt}(0)&=u _{2},\end{split} \tag{13}\]
where for simplicity of exposition the operator \(\mathcal{A}_{c}=-\frac{c_{0}^{2}}{c^{2}}\Delta\) is equipped with homogeneous Dirichlet boundary conditions (while the more general case (5) follows analogously). Moreover, in (13) we have positive space-dependent coefficients \(\kappa\), \(c_{0}\), \(b_{0}\), constants \(\tau>0\), \(c>0\), as well as the time dependent function \(\sigma\) as defined in (12). The wave energy
\[\mathcal{E}_{0}[u^{\tau}](t):=\frac{1}{2}\Big{(}|u_{t}^{\tau}(t)|^{2}_{L^{2}( \Omega)}+c^{2}|\nabla u^{\tau}(t)|^{2}_{L^{2}(\Omega)}\Big{)} \tag{14}\]
of \(u^{\tau}=\tau u_{t}+u\) and therewith \(|u^{\tau}(t)|^{2}_{L^{2}(\Omega)}\) will be shown to decay to zero as \(t\to\infty\). Thus, the nonlinearity will be switched off after a certain time instance \(T_{*}\), and the solution follows a linear weakly damped wave equation from that time instance on, so that we have full control over the pole locations and residues of the Laplace transform of its solution.
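A concrete admissible choice for the cutoff in (11) is, for instance, a monotone cubic ramp between the two thresholds; the sketch below (purely illustrative, with assumed threshold values) implements such a \(\chi\in W^{1,\infty}(0,\infty)\) and the resulting switching factor (12).

```python
import numpy as np

m_lo, m_hi = 0.01, 0.05       # assumed thresholds 0 < m_lo < m_hi

def chi(s):
    """Cutoff of (11): 0 on [0, m_lo], 1 on [m_hi, inf), nondecreasing, Lipschitz."""
    s = np.asarray(s, dtype=float)
    t = np.clip((s - m_lo) / (m_hi - m_lo), 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3        # C^1 ramp with chi' >= 0

def sigma(u_tau_l2norm_sq):
    """Switching factor (12), evaluated on |u^tau(t)|_{L^2(Omega)}^2."""
    return chi(u_tau_l2norm_sq)

print(sigma(0.005), sigma(0.03), sigma(0.2))   # -> 0.0, an intermediate value, 1.0
```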
An important observation here is that \(T_{*}\) can be steered by the magnitude of \(b_{0}\); more precisely, \(T_{*}\) can be made smaller by making \(b_{0}\) larger, while the local existence time \(T\) of (13) does not decrease with increasing \(b_{0}\). This allows us to combine the local in time nonlinear existence interval with the time span of linear wave propagation to obtain global in time well-posedness and differentiability results for (13).
We first of all state an adaption of existing local in time well-posedness results for (13) to the situation of spatially variable coefficients and weak damping, see subsections 2.1.1, 2.1.2.
Additionally, in subsection 2.1.3, we obtain exponential decay of the wave energy \(\mathcal{E}_{0}[u^{\tau}]\) due to the persistent weak damping, which then implies that nonlinearity is switched off after some time instance \(T_{*}\) and \(u\) follows a linear weakly damped wave equation. This is essential for controlling its residues that according to (9) are governed by the asymptotics as \(t\to\infty\).
Finally, in subsection 2.1.4, we prove continuous differentiability of the forward operator in a norm dictated by the uniqueness proof.
Throughout this section we assume \(\Omega\) to be a bounded \(C^{1,1}\) domain so that we can make use of elliptic regularity according to, e.g., [13, Theorem 2.4.2.5].
#### 2.1.1. Energy estimates and well-posedness of the linearization
We start by analyzing a linear version of (13)
\[\tau u_{ttt}+\big{(}(1-\bar{\sigma}\mathfrak{a})u_{t}\big{)}_{t}+ c^{2}\mathcal{A}_{c}u+\tau c^{2}\mathcal{A}_{c}u_{t}+b_{0}(\tau u_{t}+u)_{t}- \mu u_{t}-\eta u =\mathfrak{r}\quad t>0\] \[u(0)=u_{0},\quad u_{t}(0)=u_{1},\quad u_{tt}(0)=u_{2}, \tag{15}\]
with
\[\bar{\sigma}\in W^{1,\infty}(0,\infty),\quad\bar{\sigma}([0,\infty))\subseteq[ 0,1]. \tag{16}\]
Later on, in Theorem 2.1 we will use Banach's Contraction Principle to prove convergence of a fixed point iteration defined by \(u_{n+1}\) solving (15) with \(\bar{\sigma}=\chi(|u_{n}(t)|^{2}_{L^{2}(\Omega)})\), and \(\mathfrak{a}=2\kappa u_{n}\) towards the unique solution of (13).
By adapting the proof of [19, Proposition 3.1] we obtain
**Proposition 1**.: _Let \(T>0\), \(c_{0},\frac{1}{c_{0}},b_{0}\in L^{\infty}(\Omega)\cap W^{1,3}(\Omega)\), \(b_{0}\geq 0\), \(\mu\in L^{\infty}(0,T;H^{2}_{\diamondsuit}(\Omega))\), \(\eta\in L^{\infty}(0,T;H^{1}_{0}(\Omega))\), \(\bar{\sigma}\in W^{1,\infty}(0,T)\)._
_Then there exists \(\rho_{\mathfrak{a}}>0\) such that for any \(\mathfrak{a}\in L^{\infty}(0,T;L^{\infty}(\Omega))\cap L^{\infty}(0,T;W^{1,3} (\Omega))\cap W^{1,\infty}(L^{2}(\Omega))\) satisfying_
\[\|\mathfrak{a}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}+\|\nabla \mathfrak{a}\|_{L^{\infty}(0,T;L^{3}(\Omega))}+\|(\bar{\sigma}\mathfrak{a})_{ t}\|_{L^{\infty}(0,T;L^{2}(\Omega))} \tag{17}\] \[\quad+\|\nabla b_{0}\|_{L^{3}(\Omega)}+\|\mu\|_{L^{\infty}(0,T;H^{2}(\Omega))}+\|\nabla\eta\|_{L^{\infty}(0,T;L^{2}(\Omega))}\leq\rho_{\mathfrak{a}},\]
_and any \(\mathfrak{r}\in L^{2}(0,T;H^{1}_{0}(\Omega))\cap L^{\infty}(0,T;L^{2}(\Omega))\), \((u_{0},u_{1},u_{2})\in H^{2}_{\diamondsuit}(\Omega)\times H^{2}_{\diamondsuit} (\Omega)\times H^{1}_{0}(\Omega)\) the initial boundary-value problem (15) has a unique solution_
\[u\in U_{T}=W^{3,\infty}(0,T;L^{2}(\Omega))\cap W^{2,\infty}(0,T;H^{1}_{0}( \Omega))\cap W^{1,\infty}(0,T;H^{2}_{\diamondsuit}(\Omega)) \tag{18}\]
_on \((0,T)\). Furthermore, this solution satisfies_
\[\|u_{ttt}(t)\|^{2}_{L^{2}}+\|\nabla u_{tt}(t)\|^{2}_{L^{2}}+\| \Delta u_{t}(t)\|^{2}_{L^{2}}+\|\Delta u(t)\|^{2}_{L^{2}} \tag{19}\] \[\leq\tilde{C}(\tau)(1+T)\Big{(}\|u^{\tau}_{tt}(t)\|^{2}_{L^{2}}+ \|\nabla u^{\tau}_{t}(t)\|^{2}_{L^{2}}+\|\Delta u^{\tau}(t)\|^{2}_{L^{2}}\Big{)}\] \[\leq C(\tau)e^{K(\tau)(\rho_{a}^{2}+1)T}\Big{(}\|\nabla u_{2}\|^{ 2}_{L^{2}}+\|\Delta u_{1}\|^{2}_{L^{2}}+\|\Delta u_{0}\|^{2}_{L^{2}}+\|\nabla \mathfrak{r}\|^{2}_{L^{2}(L^{2})}\Big{)},\]
_for all \(t\in(0,T)\), where the constants \(C(\tau)\) and \(K(\tau)\) tend to \(+\infty\) as \(\tau\to 0^{+}\), but are independent of \(\rho_{\mathfrak{a}}\), \(b_{0}\), and the final time \(T\)._
Proof.: The proof relies on an energy estimate that is obtained by testing (15), that is, in terms of \(u^{\tau}=\tau u_{t}+u\),
\[\begin{split}& u_{tt}^{\tau}+c^{2}\mathcal{A}_{c}u^{\tau}+\tilde{b }_{0}u_{t}^{\tau}-\tilde{\mu}u^{\tau}-\tilde{\eta}u=\mathfrak{r},\\ & u^{\tau}(0)=\tau u_{1}+u_{0},\quad u_{t}^{\tau}(0)=\tau u_{2}+u_ {1},\\ &\text{where }\tilde{b}_{0}=b_{0}-\tfrac{1}{\tau}\bar{\sigma} \mathfrak{a},\,\tilde{\mu}=\tfrac{1}{\tau}(\mu+(\bar{\sigma}\mathfrak{a})_{t }-\tfrac{1}{\tau}\bar{\sigma}\mathfrak{a}),\,\tilde{\eta}=\eta-\tilde{\mu}, \end{split} \tag{20}\]
with \(-\Delta u_{t}^{\tau}=-\Delta(\tau u_{t}+u)_{t}\) and integrating by parts with respect to space in the first, third and fourth terms, which results in the spatial differentiability requirements \(\nabla\mathfrak{a}(t)\), \(\nabla b_{0}\), \(\nabla\mathfrak{a}_{t}(t)\in L^{3}(\Omega)\) with sufficiently small norm.
The additional term \(b_{0}(\tau u_{t}+u)_{t}\) as compared to [19, Proposition 3.1] then clearly yields a nonnegative contribution to the left hand side.
#### 2.1.2. Well-posedness of the switched JMGT equation
Analogously to [19, Theorem 4.1], Banach's Contraction Principle yields local in time well-posedness of the nonlinear problem.
In view of the requirements (17) with \(\mathfrak{a}=2\kappa u\), we first of all consider the coefficient space
\[X_{T}=(L^{\infty}(\Omega)\cap W^{1,3}(\Omega))^{3}\text{ for }(\kappa,c_{0}^{2},b_{0}). \tag{21}\]
**Theorem 2.1**.: _Assume that \(T>0\), \(c>0\), \(b>0\), \(\rho>0\)._
_Then there exists \(\rho_{0}>0\) such that for any \((\kappa,c_{0}^{2},b_{0})\in\mathcal{B}_{\rho}^{X_{T}}(0,c,b)\) and any excitation \(r\in L^{2}(0,T;H_{0}^{1}(\Omega))\cap L^{\infty}(0,T;L^{2}(\Omega))\) as well as initial data \((u_{0},u_{1},u_{2})\in H_{\Diamond}^{2}(\Omega)\times H_{\Diamond}^{2}(\Omega )\times H_{0}^{1}(\Omega)\) with_
\[\|r\|_{L^{2}(0,T;H_{0}^{1}(\Omega))\cap L^{\infty}(0,T;L^{2}(\Omega ))}^{2}+|\nabla u_{2}|_{L^{2}(\Omega)}^{2}+|\Delta u_{1}|_{L^{2}(\Omega)}^{2}+ |\Delta u_{0}|_{L^{2}(\Omega)}^{2}\leq\rho_{0}, \tag{22}\]
_there exists a unique solution \(u\in U_{T}\) (as defined in (18)) of (13) and this solution satisfies the energy estimate (19)._
#### 2.1.3. Exponential decay of the wave energy
By testing (6), that is,
\[\begin{split}& u_{tt}^{\tau}-c_{0}^{2}\Delta u^{\tau}+\tilde{b}_{0}u_ {t}^{\tau}-\tilde{\mu}u_{t}=r,\\ & u^{\tau}=\tau u_{t}+u\,,\\ &\text{where }\tilde{b}_{0}=b_{0}-\tfrac{1}{\tau}2\kappa\sigma u,\,\tilde{\mu}=2\kappa(\tfrac{1}{\tau}\sigma u-(\sigma u)_{t}),\end{split}\]
with \(u_{t}^{\tau}+\theta u^{\tau}=(\tau u_{t}+u)_{t}+\theta(\tau u_{t}+u)\) in place of \(-\Delta u_{t}^{\tau}\), we obtain the energy identity
\[\begin{split}&\tfrac{1}{2}\tfrac{d}{dt}\Big{[}|u_{t}^{\tau}|_{L^{2}( \Omega)}^{2}+|c_{0}\nabla u^{\tau}|_{L^{2}(\Omega)}^{2}+\theta|\tilde{b}_{0}^{1 /2}\,u^{\tau}|_{L^{2}(\Omega)}^{2}+\theta(u_{t}^{\tau},u^{\tau})_{L^{2}(\Omega) }\Big{]}\\ &+|(\tilde{b}_{0}-\theta)^{1/2}\,u_{t}^{\tau}|_{L^{2}(\Omega)}^{2} +\theta|c_{0}\nabla u^{\tau}|_{L^{2}(\Omega)}^{2}\\ &=(r-\tilde{\mu}u_{t}-\nabla u^{\tau}\cdot\nabla c_{0}^{2},u_{t}^{ \tau}+\theta u^{\tau})_{L^{2}(\Omega)},\end{split} \tag{23}\]
provided \(b_{0}\geq\theta+\tfrac{2}{\tau}\kappa\sigma u\). Here, by Young's Inequality
\[|\theta(u_{t}^{\tau},u^{\tau})_{L^{2}(\Omega)}|\leq\tfrac{1}{2}|u_{t}^{\tau}| _{L^{2}(\Omega)}^{2}+\tfrac{1}{2}\theta|\tilde{b}_{0}^{1/2}\,u^{\tau}|_{L^{2}( \Omega)}^{2}\]
since \(\theta\leq\tilde{b}_{0}\) and
\[|(\nabla u^{\tau}\cdot\nabla c_{0}^{2},u_{t}^{\tau}+\theta u^{\tau})_{L^{2}( \Omega)}|\leq\tfrac{1}{2}|(\tilde{b}_{0}-\theta)^{1/2}\,u_{t}^{\tau}|_{L^{2}( \Omega)}^{2}+\tfrac{1}{2}\theta|c_{0}\nabla u^{\tau}|_{L^{2}(\Omega)}^{2}\]
if \(4C_{PF}\|\frac{1}{c_{0}}\|_{L^{\infty}(\Omega)}\|\nabla c_{0}\|_{L^{\infty}( \Omega)}<1,\ \frac{1}{b_{0}-\theta}\leq\tilde{\epsilon}\leq\ \frac{\theta(1-4C_{PF}\|\frac{1}{c_{0}}\|_{L^{\infty}(\Omega)}\|\nabla c_{0}\| _{L^{\infty}(\Omega)})}{4\|\nabla c_{0}\|_{L^{\infty}(\Omega)}^{2}}\) for some \(\tilde{\epsilon}>0\).
This allows us to prove exponential decay of \(\mathcal{E}_{0}[u^{\tau}](t)\) and thus of \(|u^{\tau}(t)|_{L^{2}(\Omega)}^{2}\).
**Corollary 1**.: _Under the assumptions of Theorem 2.1 with_
\[\tfrac{2}{\tau}\|\kappa\|_{L^{\infty}(\Omega)}C_{H^{2}\to L^{\infty}}C(\tau)e ^{K(\tau)(\rho_{a}^{2}+1)T}\rho_{0}+\rho\leq\tfrac{b}{4}, \tag{24}\]
_and_
\[4C_{PF}\|\tfrac{1}{c_{0}}\|_{L^{\infty}(\Omega)}\|\nabla c_{0}\|_{L^{\infty}( \Omega)}\leq\tfrac{1}{2} \tag{25}\]
_there exist \(\alpha=\alpha(b)>0\), \(C>0\) such that for all \((\kappa,c_{0}^{2},b_{0})\in\mathcal{B}_{\rho}^{X}(0,c,b)\), \(r\in L^{2}(0,T;H_{0}^{1}(\Omega))\cap L^{\infty}(0,T;L^{2}(\Omega))\), \((u_{0},u_{1},u_{2})\in H_{\diamondsuit}^{2}(\Omega)\times H_{\diamondsuit}^{2}(\Omega) \times H_{0}^{1}(\Omega)\) with (22), the estimate_
\[e^{\alpha t}\mathcal{E}_{0}[u^{\tau}](t)\leq\mathcal{E}_{0}[u^{\tau}](0)+C \alpha\|e^{\alpha/2}\!\cdot\!r\|_{L^{2}(0,t;L^{2}(\Omega))}^{2}\quad\text{ for all }t\in(0,T) \tag{26}\]
_holds with some constant \(C>0\) independent of \(T\) and \(b\)._
_In particular, for \(T_{*}=T_{*}(b):=\frac{1}{\alpha}\ln(\frac{C_{\mathrm{PF}}}{\underline{m}}\rho_{1})\) and any \(\kappa\), \(c_{0}\), \(b_{0}\), \(r\), \(u_{0}\), \(u_{1}\) satisfying the above and_
\[\mathcal{E}_{0}[u^{\tau}](0)+C\alpha\|e^{\alpha/2}\!\cdot\!r\|_{L^{2}(0,t;L^{2}( \Omega))}^{2}\leq\rho_{1}^{2},\]
_we have \(\sigma[u](t)=0\) for all \(t\geq T_{*}\)._
_Here \(\alpha(b)\to\infty\) and \(T_{*}(b)\to 0\) as \(b\to\infty\)._
Proof.: Setting \(\bar{\sigma}=\chi(|u^{\tau}(t)|_{L^{2}(\Omega)}^{2})\), \(\mathfrak{a}=2\kappa u\), and using (24), we obtain \(\tilde{b}_{0}\geq\frac{3b}{4}\). Thus, with (25), choosing \(\theta:=b/2\), and applying Young's inequality, we obtain from (23) that
\[\tfrac{d}{dt}\mathcal{E}_{0}[u^{\tau}](t)+\tfrac{b}{4}\mathcal{E}_{0}[u^{\tau }](t)\leq Cb|r(t)|_{L^{2}(\Omega)}^{2},\]
with \(C\) independent of \(b\). Hence
\[\tfrac{d}{dt}\Big{[}e^{(b/4)t}\mathcal{E}_{0}[u^{\tau}](t)\Big{]}=e^{(b/4)t} \Big{\{}\tfrac{d}{dt}\mathcal{E}_{0}[u^{\tau}](t)+\tfrac{b}{4}\mathcal{E}_{0} [u^{\tau}](t)\Big{\}}\leq Cbe^{(b/4)t}|r(t)|_{L^{2}(\Omega)}^{2}\]
After integration over \([0,t]\), this implies the assertion with \(\alpha(b)=b/4\).
Hence, for fixed \(T\), \(\rho_{0}\), \(\rho_{1}\), \(\rho>0\), choosing \(b\) sufficiently large, we can achieve that the nonlinearity is switched off at \(T_{*}<T\) and the PDE turns into a linear one (with still spatially variable coefficients) from \(T_{*}\) on, which allows us to conclude its global in time well-posedness. Thus, the assertions of Theorem 2.1 and of Proposition 1 with \(\mathfrak{a}=2\kappa\sigma u\) and \(\sigma\) as in (12) remain valid with \(T\) replaced by \(\infty\) and \(U_{T}\) replaced by \(U_{\infty}\).
#### 2.1.4. Differentiability of the parameter-to-state map
Under the assumptions
\[\begin{split}& r\in L^{2}(H^{1}(\Omega))\cap L^{\infty}(L^{2}( \Omega)),\ e^{b/8}\!\cdot\!r\in L^{2}(L^{2}(\Omega)),\\ & u_{0},u_{1}\in H_{\diamondsuit}^{2}(\Omega),\,u_{2}\in H_{0}^{1}( \Omega)\ \text{with (22)},\end{split} \tag{27}\]
the parameter-to-state map
\[S:\mathcal{B}_{\rho}^{X}(0)\to U_{\infty},\quad(\kappa,c_{0}^{2},b_{0})\mapsto u \ \ \text{where }u\ \text{solves (13)} \tag{28}\]
is well defined as a mapping from \(X\) to \(U:=U_{\infty}\) (cf. (18)), due to the above extension of Theorem 2.1 to the whole positive time line. Slightly increasing the required coefficient regularity as compared to (21), in view of (25), by setting
\[\begin{split}& X=(L^{\infty}(\Omega)\cap W^{1,3}(\Omega))\times W ^{1,\infty}(\Omega)\times(L^{\infty}(\Omega)\cap W^{1,3}(\Omega))\text{ for }(\kappa,c_{0}^{2},b_{0}),\\ & X=\Big{(}L^{\infty}(\Omega)\cap W^{1,3}(\Omega)\Big{)}^{2}\text { for }(\kappa,b_{0}).\end{split} \tag{29}\]
we separately consider the option of regarding \(c_{0}^{2}\) as fixed and varying \(\kappa\) and \(b_{0}\) only, which in fact will make a difference in the spaces in which we will obtain differentiability.
Its formal derivative (which we will show to be the Frechet derivative in Proposition 2) is given by \(S^{\prime}(\kappa,c_{0}^{2},b_{0})(\underline{d\kappa},\underline{dc}_{0}^{2},\underline{db}_{0})=v\), where \(v\) solves
\[\begin{split}&\tau v_{ttt}+\big{(}(1-2\kappa\sigma[u]u)v_{t} \big{)}_{t}-c_{0}^{2}\Delta v-\tau c_{0}^{2}\Delta v_{t}+b_{0}(\tau v_{tt}+v_{ t})=\mathfrak{r}(v)\\ &\text{with }\mathfrak{r}(v):=2\big{(}\big{(}\kappa(\sigma[u]v+\sigma^{ \prime}[u][v]u)+\underline{d\kappa}\,\sigma[u]u\big{)}u_{t}\big{)}_{t}+\underline{dc}_ {0}^{2}\Delta u^{\tau}-\underline{db}_{0}u_{t}^{\tau}\end{split} \tag{30}\]
with homogeneous initial and boundary conditions. Here and in the following we explicitly indicate the dependence of \(\sigma\) on \(u\) by the notation \(\sigma[u]\) and have
\[\sigma^{\prime}[u][v](t)=2\chi^{\prime}(|u^{\tau}(t)|_{L^{2}(\Omega)}^{2})(u^{ \tau}(t),v^{\tau}(t))_{L^{2}(\Omega)}\ \text{ with }v^{\tau}=\tau v_{t}+v.\]
Well-posedness of (30) can be shown by a contraction argument for the (affinely linear) fixed point operator \(\mathcal{T}:\tilde{v}\mapsto v\) solving (30) with \(\mathfrak{r}(\tilde{v})\) in place of \(\mathfrak{r}(v)\).1 To this end, we use the fact that
Footnote 1: Note that Banach’s Contraction Principle in this affinely linear setting boils down to the use of the Neumann series.
\[\|\mathfrak{r}(\tilde{v}_{1})-\mathfrak{r}(\tilde{v}_{2})\|_{L^{2}(L^{2}( \Omega))}\leq\tilde{C}R\|\tilde{v}_{1}-\tilde{v}_{2}\|_{\mathcal{E}},\]
for some constant \(\tilde{C}>0\), and
\[\|v\|_{\mathcal{E}}:=|\sqrt{\mathcal{E}_{0}[\tau v_{t}+v]}|_{L^{\infty}(0, \infty)}\]
which by an estimate analogous to (26) yields
\[\|\mathcal{T}\tilde{v}_{1}-\mathcal{T}\tilde{v}_{2}\|_{\mathcal{E}}\leq C \tilde{C}R\|\tilde{v}_{1}-\tilde{v}_{2}\|_{\mathcal{E}},\]
thus, by smallness of \(R\), contractivity of \(\mathcal{T}\).
Note that since the term \(\underline{dc}_{0}^{2}\Delta u^{\tau}\) is not necessarily contained in \(L^{2}(H^{1}(\Omega))\), which would be needed for an application of Proposition 1, we had to resort to the weaker topology induced by the wave energy, that is,
\[U_{\text{lo}}=\{u\in L^{2}(L^{2}(\Omega))\,:\,\tau u_{t}+u\in W^{1,\infty}(L^{ 2}(\Omega))\cap L^{\infty}(H^{1}_{0}(\Omega))\} \tag{31}\]
and thus obtain existence and uniqueness of \(v=S^{\prime}(\kappa,c_{0}^{2},b_{0})(\underline{d\kappa},\underline{dc}_{0}^ {2},\underline{db}_{0})\) in \(U_{\text{lo}}\) for \((\kappa,c_{0}^{2},b_{0})\in X\), \((\underline{d\kappa},\underline{dc}_{0}^{2},\underline{db}_{0})\in L^{\infty}(\Omega)^{3}\).
When considering the parameter-to-state-map \(S(\kappa,b_{0})\) needed in the recovery of \(\kappa\) and \(b_{0}\) only (while keeping \(c_{0}^{2}\) fixed), this term is not present and (19) yields
\[\|\mathcal{T}\tilde{v}_{1}-\mathcal{T}\tilde{v}_{2}\|_{U}\leq C\tilde{C}R\| \tilde{v}_{1}-\tilde{v}_{2}\|_{U},\]
hence existence and uniqueness of \(v=S^{\prime}(\kappa,b_{0})(\underline{d\kappa},\underline{db}_{0})\) in the solution space \(U\) for \((\underline{d\kappa},\underline{db}_{0})\in L^{\infty}(\Omega)^{2}\).
Analogously we obtain a Lipschitz estimate of \(S\)
\[\begin{split}&\|S(\tilde{\kappa},\tilde{c}_{0}^{2},\tilde{b}_{0})-S( \kappa,c_{0}^{2},b_{0})\|_{U_{\text{lo}}}\leq L\|(\tilde{\kappa},\tilde{c}_{0}^ {2},\tilde{b}_{0})-(\kappa,c_{0}^{2},b_{0})\|_{L^{\infty}(\Omega)^{3}},\\ &\|S(\tilde{\kappa},\tilde{b}_{0})-S(\kappa,b_{0})\|_{U}\leq L\|( \tilde{\kappa},\tilde{b}_{0})-(\kappa,b_{0})\|_{L^{\infty}(\Omega)^{2}},\end{split} \tag{32}\]
which allows us to extend \(S\) to a (Lipschitz continuous) mapping \(S:\mathcal{B}_{\rho}(0)^{\tilde{X}}\to U_{\mathrm{lo}}\) or \(U\), respectively, where 2
Footnote 2: Note that \(L^{\infty}(\Omega)\cap W^{1,3}(\Omega)\) is not dense in \(L^{\infty}(\Omega)\) for otherwise density of \(C(\Omega)\) in \(W^{1,3}(\Omega)\) would imply \(C(\Omega)=L^{\infty}(\Omega)\).
\[\tilde{X}=\left(\overline{L^{\infty}(\Omega)\cap W^{1,3}(\Omega)}^{L^{\infty }}\right)^{3}\text{ with }\|\cdot\|_{\tilde{X}}=\|\cdot\|_{L^{\infty}(\Omega)^{3}}. \tag{33}\]
To prove that \(S^{\prime}\) is indeed the Frechet derivative of \(S\), we consider the PDE that is satisfied (along with homogeneous initial and boundary conditions) by the first order Taylor remainder \(w=S(\tilde{\kappa},\tilde{c}_{0}^{2},\tilde{b}_{0})-S(\kappa,c_{0}^{2},b_{0})- S^{\prime}(\kappa,c_{0}^{2},b_{0})((\tilde{\kappa},\tilde{c}_{0}^{2},\tilde{b}_{0}) -(\kappa,c_{0}^{2},b_{0}))=\tilde{u}-u-v\), namely
\[\tau w_{ttt}+\left((1-2\kappa\sigma[u]u)w_{t}\right)_{t}-c_{0}^{2}\Delta w- \tau c_{0}^{2}\Delta w_{t}+b_{0}(\tau w_{tt}+w_{t})=\mathfrak{r}(\tilde{u},u,v) \tag{34}\]
where \(\tilde{u}=S(\tilde{\kappa})\), \(u=S(\kappa)\), \(v=S^{\prime}(\kappa)(\tilde{\kappa}-\kappa)\),
\[\begin{split}&\mathfrak{r}(\tilde{u},u,v)\\ &:=2\Big{(}(\tilde{\kappa}\sigma[\tilde{u}]\tilde{u}-\kappa\sigma[u ]u)\tilde{u}_{t}-\big{(}\kappa(\sigma[u]v+\sigma^{\prime}[u][v]u)+(\tilde{ \kappa}-\kappa)\sigma[u]u\big{)}u_{t}\Big{)}_{t}\\ &\quad+c_{0}^{2}\Delta w^{\tau}-(\tilde{c}_{0}^{2}\Delta\tilde{ u}^{\tau}-c_{0}^{2}\Delta u^{\tau}-c_{0}^{2}\Delta v^{\tau}-(\tilde{c}_{0}^{2}-c_{0}^{ 2})\Delta u^{\tau})\\ &\quad+b_{0}w_{t}^{\tau}-(\tilde{b}_{0}\tilde{u}_{t}^{\tau}-b_{0 }u_{t}^{\tau}-b_{0}v_{t}^{\tau}-(\tilde{b}_{0}-b_{0})u_{t}^{\tau})\\ &=2\Big{(}(\tilde{\kappa}-\kappa)(\sigma[\tilde{u}]\tilde{u}- \sigma[u]u)\tilde{u}_{t}+\kappa(\sigma[\tilde{u}]\tilde{u}-\sigma[u]u)(\tilde {u}-u)_{t}\\ &\quad\quad\quad\quad+\kappa\big{(}\sigma[\tilde{u}]\tilde{u}- \sigma[u]u-\sigma[u]v-\sigma^{\prime}[u][v]u\big{)}u_{t}\Big{)}_{t}\\ &\quad-(\tilde{c}_{0}^{2}-c_{0}^{2})\Delta(\tilde{u}^{\tau}-u^{ \tau})-(\tilde{b}_{0}-b_{0})(\tilde{u}_{t}^{\tau}-u_{t}^{\tau})\end{split} \tag{35}\]
and
\[\begin{split}&\sigma[\tilde{u}]\tilde{u}-\sigma[u]u=(\sigma[ \tilde{u}]-\sigma[u])\tilde{u}+\sigma[u](\tilde{u}-u)\\ &\sigma[\tilde{u}]\tilde{u}-\sigma[u]u-\sigma[u]v-\sigma^{\prime }[u][v]u\\ &=(\sigma[\tilde{u}]-\sigma[u])(\tilde{u}-u)+\sigma[u]w+(\sigma[ \tilde{u}]-\sigma[u]-\sigma^{\prime}[u][\tilde{u}-u])u+\sigma^{\prime}[u][w]u.\end{split}\]
The energy estimates (26), (19) yield \(\|w\|_{U_{\mathrm{lo}}}=O(\|(\tilde{\kappa},\tilde{c}_{0}^{2},\tilde{b}_{0})-( \kappa,c_{0}^{2},b_{0})\|_{L^{\infty}(\Omega)^{3}}^{2})\) or \(\|w\|_{U}=O(\|(\tilde{\kappa},\tilde{b}_{0})-(\kappa,b_{0})\|_{L^{\infty}( \Omega)^{2}}^{2})\) and analogously we obtain a Lipschitz estimate of \(S^{\prime}\)
\[\begin{split}&\|S^{\prime}(\tilde{\kappa},\tilde{c}_{0}^{2}, \tilde{b}_{0})-S^{\prime}(\kappa,c_{0}^{2},b_{0})\|_{L(\tilde{X},U_{\mathrm{lo} })}\leq L\|(\tilde{\kappa},\tilde{c}_{0}^{2},\tilde{b}_{0})-(\kappa,c_{0}^{2},b_{0})\|_{L^{\infty}(\Omega)^{3}},\\ &\|S^{\prime}(\tilde{\kappa},\tilde{b}_{0})-S^{\prime}(\kappa,b_{0 })\|_{L(\tilde{X},U)}\leq L\|(\tilde{\kappa},\tilde{b}_{0})-(\kappa,b_{0})\|_{ L^{\infty}(\Omega)^{2}}.\end{split} \tag{36}\]
Altogether we have proven
**Proposition 2**.: _Assume that \(c>0\), \(\rho>0\), \(b>0\), the latter being sufficiently large._
_Then there exists \(\rho_{0}>0\) such that for any \(r\), \((u_{0},u_{1},u_{2})\) satisfying (27), the operator \(S\) is well-defined as a mapping \(\mathcal{B}_{\rho}^{X}(0,c^{2},b)\to U=U_{\infty}\), cf. (18). If additionally \(\chi\in W^{2,\infty}(0,\infty)\) then \(S\) satisfies the Lipschitz estimate (32) and can be extended to a (Lipschitz continuous) mapping \(S:\mathcal{B}_{\tilde{\rho}}^{\tilde{X}}(0,c^{2},b)\to U_{\mathrm{lo}}\), which is Frechet differentiable with Lipschitz continuous derivative (36). The assertions remain valid with \(U_{\mathrm{lo}}\) replaced by \(U\) if \(S\) is considered as a function of \((\kappa,b_{0})\) only._
Since Frechet differentiability remains valid under strengthening of the topology in preimage space, the assertions of Proposition 2 remain valid if we simply stay with the stronger space (29) in place of (33).
### Well-definedness and differentiability of the forward operator for fixed \(c_{0}(x)\) and constant \(b_{0}(x)=b\)
As a preparation for the use of the Inverse Function Theorem for proving uniqueness of \(\kappa\), we will now study the operator \(\mathbb{F}\) that maps \((\kappa,c_{0}^{2},b_{0})\) to the residues of \(\widehat{\varSigma u}\) at the poles of \(\widehat{h}\).
In our uniqueness proof, in order to take residues at poles \(p\) of the Laplace transformed solution to the PDE and its linearization, we need sufficiently fast decay as time tends to infinity according to the identity (9)
\[\mathrm{Res}(\widehat{f},p)=\lim_{T\to\infty}e^{-pT}f(T).\]
Note that the poles we consider will be single and their real parts will be negative, so that \(|e^{-pt}|\) is exponentially increasing as \(t\to\infty\). In particular, for an integrable function with finite support \([0,T]\) in time, the residue vanishes (which is clear in view of the fact that the Laplace transform of a finitely supported function has no poles).
In view of this, we will study the large time behavior of solutions. Due to the exponential decay according to Corollary 1 and the switching by means of \(\chi\), from \(T_{*}\) on, the solution \(u\) to (13) coincides with the (time shifted) solution \(u_{*}\) of a linear PDE
\[u(t-T_{*})=u_{*}(t)\quad t>T_{*}, \tag{37}\]
where
\[u_{*tt}^{\tau}+c^{2}\mathcal{A}_{c}u_{*}^{\tau}+b_{0}u_{*t}^{\tau}=r_{*}\quad t>0\]
\[u_{*}^{\tau}(0)=\tau u_{*1}+u_{*0},\quad u_{*t}^{\tau}(0)=\tau u_{*2}+u_{*1} \tag{38}\]
\[u_{*}^{\tau}=\tau u_{*t}+u_{*}\]
In order to be able to apply separation of variables, we assume the (possibly space dependent) sound speed \(c_{0}(x)\) to be fixed and \(b_{0}\) to be constant \(b_{0}(x)\equiv\bar{b}\). 3 By means of an eigensystem \((\lambda_{j},(\varphi_{j}^{k})_{k\in K^{\lambda_{j}}})_{j\in\mathbb{N}}\) (where \(K^{\lambda_{j}}\) is an enumeration of the eigenspace corresponding to \(\lambda_{j}\)) of \(\mathcal{A}_{c}\), we can then write \(u_{*}^{\tau}(x,t)=\sum_{j=1}^{\infty}\sum_{k\in K^{\lambda_{j}}}u_{sj}^{\tau k }(t)\varphi_{j}^{k}(x)\) with \(u_{*j}^{\tau k}(t)=\langle u_{*}^{\tau}(t),\varphi_{j}^{k}\rangle\) solving the relaxation equation
Footnote 3: Slightly more generally, we could assume the three operators id, \(\mathcal{A}_{c}\), \(b_{0}\cdot\) to be simultaneously diagonalizable
\[u_{*j}^{\tau k^{\prime\prime}}+c^{2}\lambda_{j}u_{*j}^{\tau k}+\bar{b}u_{*j}^ {\tau k^{\prime}}=r_{*j}^{k}\quad t>0\]
\[u_{*j}^{\tau k}(0)=u_{*j0}^{\tau k},\quad u_{*j}^{\tau k^{\prime}}(0)=u_{*j1 }^{\tau k},\]
whose Laplace transformed solutions are given by
\[\widehat{u}_{*j}^{\tau k}(z)=\frac{\widehat{r}_{*j}^{k}(z)+u_{*j1}^{k}+(z+ \bar{b})u_{*j0}^{k}}{\omega_{\lambda_{j}}(z)},\ \ \text{with}\ \omega_{\lambda}(z)=z^{2}+\bar{b}z+c^{2}\lambda\,. \tag{39}\]
The poles and residues of the resolvent functions \(\frac{1}{\omega_{\lambda_{j}}(z)}\) compute explicitly as
\[\begin{split} p_{j}^{\pm}&=-\frac{\bar{b}}{2}\pm \imath\sqrt{c^{2}\lambda_{j}-\frac{\bar{b}^{2}}{4}}\,,\\ \mathrm{Res}(\tfrac{1}{\omega_{\lambda_{j}}};p_{j}^{+})& =\lim_{z\to p_{j}^{+}}\frac{z-p_{j}^{+}}{(z-p_{j}^{+})(z-p_{j}^{-} )}=\frac{1}{\imath\sqrt{4c^{2}\lambda_{j}-\bar{b}^{2}}}\,.\end{split} \tag{40}\]
In particular, we have the following two essential properties
**Lemma 2.2**.: _The poles of \(\frac{1}{\omega_{\lambda}}\) differ for different \(\lambda\)._
**Lemma 2.3**.: _The residues of the poles of \(\frac{1}{\omega_{\lambda}}\) do not vanish._
These properties remain valid for more general damping models involving fractional derivatives, see [25, Lemmas 11.4, 11.5].
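For illustration only (this plays no role in the proofs), the explicit formulas (40) can be checked numerically; the values of \(c\), \(\bar{b}\) and the eigenvalues below are arbitrary placeholders.

```python
import numpy as np

# Numerical illustration of (40); c, b_bar and the eigenvalues are placeholders.
c, b_bar = 1.0, 0.5
lambdas = np.array([1.0, 4.0, 9.0, 16.0])        # sample eigenvalues of A_c (assumed)

disc = 4 * c**2 * lambdas - b_bar**2             # 4 c^2 lambda_j - b_bar^2 (positive here)
p_plus = -b_bar / 2 + 1j * np.sqrt(disc) / 2     # poles p_j^+
res_plus = 1 / (1j * np.sqrt(disc))              # Res(1/omega_{lambda_j}; p_j^+)

# Lemma 2.2: the poles differ for different lambda; Lemma 2.3: the residues do not vanish.
assert len(np.unique(np.round(p_plus, 12))) == len(lambdas)
assert np.all(np.abs(res_plus) > 0)
print(np.column_stack([lambdas, p_plus, res_plus]))
```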
**Remark 2**.: From the formula for the poles and residues it becomes evident why strong damping needs to be switched off eventually. Indeed, with
\[u_{*tt}+c^{2}\mathcal{A}_{c}u_{*}+b_{0}u_{*t}+b_{1}\mathcal{A}_{c}u_{*t}\]
in place of (38) we would have
\[p_{j}^{\pm} =-\frac{b_{0}+b_{1}\lambda_{j}}{2}\pm\sqrt{\frac{(b_{0}+b_{1} \lambda_{j})^{2}}{4}-c^{2}\lambda_{j}}\,,\] \[\operatorname{Res}(\tfrac{1}{\omega_{\lambda_{j}}};p_{j}^{+}) =\frac{1}{2\sqrt{\frac{(b_{0}+b_{1}\lambda_{j})^{2}}{4}-c^{2} \lambda_{j}}}\,.\]
in place of (40). Since \(\lambda_{j}\to\infty\) as \(j\to\infty\), the poles \(p_{j}^{+}\) (up to finitely many) would be real and negative; in case \(b_{1}>0\) their real parts would tend to \(-\infty\) as \(j\to\infty\).
The explicit representation (40) together with (37), (9) allows us to compute the residues of the Laplace transformed components \(u_{j}^{\tau k}=\langle u^{\tau},\varphi_{j}^{k}\rangle\) at the poles \(p_{j}\) as follows. Since
\[u_{j}^{\tau k}(t)=\operatorname{\mathbf{I}}_{[0,T_{*})}(t)u_{j}^{\tau k}(t)+\operatorname{\mathbf{I}}_{[T_{*},\infty)}(t)u_{*j}^{\tau k}(t-T_{*})=:u_{[0,T_{*})}+u_{[T_{*},\infty)}\] \[\text{where }\mathrm{Res}(\widehat{u_{[0,T_{*})}};p_{j}^{\pm})=0\text{ and }\widehat{u_{[T_{*},\infty)}}(z)=\int_{T_{*}}^{\infty}e^{-zt}u_{*j}^{\tau k}(t-T_{*})\,dt\] \[=\int_{0}^{\infty}e^{-z(s+T_{*})}u_{*j}^{\tau k}(s)\,ds=e^{-zT_{*}}\widehat{u}_{*j}^{k}(z)\]
due to (9), (39), (40), we have
\[\operatorname{Res}(\widehat{u_{j}^{\tau k}};p_{j}^{+})=0+e^{-p_{j}^{+}T_{*}}\operatorname{Res}(\widehat{u}_{*j}^{\tau k};p_{j}^{+})=\frac{e^{-p_{j}^{+}T_{*}}(\widehat{r}_{*j}^{k}(p_{j}^{+})+u_{*j1}^{\tau k}+(p_{j}^{+}+\bar{b})u_{*j0}^{\tau k})}{\imath\sqrt{4c^{2}\lambda_{j}-\bar{b}^{2}}} \tag{41}\]
Here we have
\[|e^{-p_{j}^{+}T_{*}}\widehat{r}_{*j}^{k}(p_{j}^{+})| =|e^{-p_{j}^{+}T_{*}}\int_{0}^{\infty}e^{-p_{j}^{+}s}r_{j}^{k}(s+ T_{*})\,ds|=|\int_{T_{*}}^{\infty}e^{-p_{j}^{+}t}r_{j}^{k}(t)\,dt| \tag{42}\] \[\leq\int_{T_{*}}^{\infty}e^{-\Re(p_{j}^{+})t}|r_{j}^{k}(t)|\,dt= \|e^{(\bar{b}/2)\cdot}r_{j}^{k}|_{[T_{*},\infty)}\|_{L^{1}(0,\infty)}\,.\]
Moreover with \(u_{*j0}=\langle u(T_{*}),\varphi_{j}^{k}\rangle\), \(u_{*j1}=\langle u_{t}(T_{*}),\varphi_{j}^{k}\rangle\) according to (37), we have
\[|e^{-p_{j}^{+}T_{*}}(u_{*j1}^{\tau k}+p_{j}^{+}u_{*j0}^{\tau k})| \leq e^{-\Re(p_{j}^{+})T_{*}}\big{(}|\langle u_{t}^{\tau}(T_{*}), \varphi_{j}^{k}\rangle|+|p_{j}^{+}+\bar{b}|\,|\langle u^{\tau}(T_{*}),\varphi_ {j}^{k}\rangle|\big{)}\] \[\leq e^{(\bar{b}/2)T_{*}}\big{(}|\langle u_{t}^{\tau}(T_{*}), \varphi_{j}^{k}\rangle|+c\sqrt{2\lambda_{j}}\|\langle u^{\tau}(T_{*}),\varphi_ {j}^{k}\rangle|\big{)}\] \[\leq 2e^{(\bar{b}/2)T_{*}}\sqrt{\mathcal{E}_{0}}[\langle u^{\tau}, \varphi_{j}^{k}\rangle\varphi_{j}^{k}](T_{*}).\]
This contains the wave energy of \(u^{\tau}\) as it has evolved nonlinearly up to \(T_{*}\) according to (13). To estimate it, we employ Corollary 1 from section 2.1.
Thus with
\[\left|\imath\sqrt{4c^{2}\lambda-\bar{b}^{2}}\right|\geq c\sqrt{\lambda}\quad\text{ for all }\lambda\geq\frac{\bar{b}^{2}}{3c^{2}} \tag{43}\]
in (41) we have shown the following.
**Proposition 3**.: _Under the conditions of Theorem 2.1 and Corollary 1, the solution to (13) exists for all \(t>0\) and the generalized Fourier coefficients of its Laplace transform satisfy the estimate_
\[\sum_{j=1}^{\infty}\lambda_{j}\sum_{k\in K^{\lambda_{j}}}\mathrm{ Res}(\widehat{u_{j}^{\tau k}};p_{j}^{+})^{2}\] \[\leq Ce^{(\bar{b}-\alpha)T_{*}}\Big{(}\mathcal{E}_{0}[u](0)+\|e^{ \alpha/2}\cdot r\|_{L^{2}(0,T_{*};L^{2}(\Omega))}\Big{)}+\|e^{(\bar{b}/2)\cdot }r|_{[T^{*},\infty)}\|_{L^{1}(L^{2}(\Omega))}^{2}\]
_on \(u_{j}^{\tau k}(t)=\langle u^{\tau}(t),\varphi_{j}^{k}\rangle\). Here \(C>0\), \(\alpha\), \(T_{*}\) can be chosen independently of the individual initial data and excitation, depending only on the radii \(\rho\), \(\rho_{0}\), \(\rho_{1}\) in Corollary 1._
To prove differentiability with respect to \(\kappa\) at any point \((\kappa,c_{0}^{2},\bar{b})\in\mathcal{B}_{\rho}^{X}(0,c^{2},b)\), with \(\nabla\bar{b}=0\), we write the forward operator in two alternative ways:
\[\mathbb{F}=\mathcal{R}_{c_{0}^{2},\bar{b}}\circ\mathcal{C}_{\Sigma}\circ S= \mathbb{A}_{c_{0}^{2},\bar{b}}\circ S_{*}+\vec{a}_{c_{0}^{2},\bar{b}}\]
where \(\mathcal{C}_{\Sigma}\) and \(S\) are defined as in (7), (28) and
\[\mathcal{R}_{c_{0}^{2},\bar{b}}:h\mapsto\Big{(}(\mathrm{Res}(\widehat{h};p_{m }^{+}))_{m\in\mathbb{N}},(\mathrm{Res}(\widehat{h};p_{m}^{-}))_{m\in\mathbb{N}} \Big{)}\]
\[S_{*}:X\to H^{2}(\Omega)\times H^{1}_{0}(\Omega),\quad(\kappa,c_{0}^{2},b_{0}) \mapsto(u^{\tau}(T_{*}),u_{t}^{\tau}(T_{*})), \tag{44}\]
(where attainment of \((u^{\tau}(T_{*}),u_{t}^{\tau}(T_{*}))\) in \(H^{2}(\Omega)\times H^{1}_{0}(\Omega)\) can be justified analogously to attainment of initial data in the well-posedness proof of JMGT, see e.g., [20]),
\[\vec{a}_{c_{0}^{2},\bar{b}}=\Big{(}(\mathrm{Res}(\frac{1}{\omega_{\lambda_{m}} };p_{m}^{+})\widehat{r}_{*m}(p_{m}^{+})B_{\lambda_{m}}[(1,\ldots,1)])_{m\in \mathbb{N}},\]
\[(\mathrm{Res}(\frac{1}{\omega_{\lambda_{m}}};p_{m}^{-})\widehat{r}_{*m}(p_{m}^{-})B_{\lambda_{m}}[(1,\ldots,1)])_{m\in\mathbb{N}}\Big{)}\]
where \(\mathrm{Res}(\frac{1}{\omega_{\lambda_{m}}};p_{m}^{+})=-\mathrm{Res}(\frac{1}{\omega_{\lambda_{m}}};p_{m}^{-})=\frac{1}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}\)
\[\mathbb{A}_{c_{0}^{2},\bar{b}}:(u_{*0}^{\tau},u_{*1}^{\tau})\mapsto\Big{(}(\mathrm{Res}(\tfrac{1}{\omega_{\lambda_{m}}};p_{m}^{+})B_{\lambda_{m}}[(u_{*m1}^{\tau k}+(p_{m}^{+}+\bar{b})u_{*m0}^{\tau k})_{k\in K^{\lambda_{m}}}])_{m\in\mathbb{N}},\]
\[(\mathrm{Res}(\tfrac{1}{\omega_{\lambda_{m}}};p_{m}^{-})B_{\lambda_{m}}[(u_{*m1}^{\tau k}+(p_{m}^{-}+\bar{b})u_{*m0}^{\tau k})_{k\in K^{\lambda_{m}}}])_{m\in\mathbb{N}}\Big{)}\]
\[B_{\lambda}:\mathbb{R}^{|K^{\lambda}|}\to E_{\lambda}^{\Sigma}:=\mathrm{span}(( \mathcal{C}_{\Sigma}\varphi_{k})_{k\in K^{\lambda}}),\quad(\beta_{k})_{k\in K ^{\lambda}}\mapsto\sum_{k\in K^{\lambda}}\beta_{k}\mathcal{C}_{\Sigma}\varphi _{k}. \tag{45}\]
Here the operators \(\mathcal{R}_{c_{0}^{2},\bar{b}}\), \(\mathbb{A}_{c_{0}^{2},\bar{b}}\), \(\vec{a}_{c_{0}^{2},\bar{b}}\) and also \(B_{\lambda_{m}}\) depend on \(c_{0}^{2},\bar{b}\) via dependence of the eigenfunctions and poles \(p_{m}^{\pm}\) on \(c_{0}^{2},\bar{b}\).
To resolve the contributions within each eigenspace in our uniqueness proof, as in [25] we impose a linear independence assumption on the eigenspace \(E_{\lambda}^{\Sigma}\) of each eigenvalue \(\lambda\)
\[\left(\sum_{k\in K^{\lambda}}\beta_{k}\varphi_{k}(x)=0\ \ \text{for all}\ x\in\Sigma\right)\ \Longrightarrow\ \big{(}\beta_{k}=0\ \text{for all}\ k\in K^{\lambda}\big{)}. \tag{46}\]
That is, for each eigenvalue \(\lambda\) of \(\mathcal{A}_{c}\) with eigenfunctions \((\varphi_{k})_{k\in K^{\lambda}}\), the eigenfunctions keep their linear independence when restricted to the observation manifold \(\Sigma\) (or, more generally, when \(\mathcal{C}_{\Sigma}\) is applied). This is similar to the usual conditions imposed in coefficient identification problems for wave-type equations, which combine assumptions on the observation geometry with a non-trapping condition on the sound speed. We refer to, e.g., [23] for a discussion.
This implies that for each eigenvalue \(\lambda\), the mapping \(B_{\lambda}\) defined by (45) is bijective and (as a mapping between finite dimensional spaces) its inverse is bounded. We use this to define the norm in image space as
\[\|(\widehat{h}^{+},\widehat{h}^{-})\|_{\mathbb{Y}}^{2}:=\sum_{m=1}^{\infty} \lambda_{m}^{2}\|B_{\lambda_{m}}^{-1}\mathrm{Proj}_{E^{\Sigma}_{\lambda_{m}}}^{ Y}\widehat{h}^{+}_{m}\|_{\mathbb{R}^{K\lambda_{m}}}^{2}+\sum_{m=1}^{\infty} \lambda_{m}^{2}\|B_{\lambda_{m}}^{-1}\mathrm{Proj}_{E^{\Sigma}_{\lambda_{m}}}^{ Y}\widehat{h}^{-}_{m}\|_{\mathbb{R}^{K\lambda_{m}}}^{2}.\]
With the so defined space \(\mathbb{Y}\), the operator \(\mathbb{A}_{c_{0}^{2},\bar{b}}:H^{2}(\Omega)\times H^{1}_{0}(\Omega)\to\mathbb{Y}\) is linear and bounded and therefore, due to differentiability of \(S_{*}\) (which is shown analogously to differentiability of \(S\)), the forward operator \(\mathbb{F}:\mathcal{B}_{\rho}^{X}(0,c^{2},b)\to\mathbb{Y}\) is Frechet differentiable with respect to \(\kappa\).
In our uniqueness proof we will use a slightly modified definition of the forward operator, namely
\[\widetilde{\mathbb{F}}=\widetilde{\mathcal{R}}_{c_{0}^{2},\bar{b}}\circ \mathcal{C}_{\Sigma}\circ S=\widetilde{\mathbb{A}}_{c_{0}^{2},\bar{b}}\circ S_{*}+\widetilde{\vec{a}}_{c_{0}^{2},\bar{b}}\]
where still \(\mathcal{C}_{\Sigma}\), \(S\), \(S_{*}\) are defined as in (7), (28), (44) and, with two given time dependent functions \(f_{1}\), \(f_{2}\),
\[\widetilde{\mathcal{R}}_{c_{0}^{2},\bar{b}}:h\mapsto\Big{(}( \widehat{f}_{1}(p_{m}^{-})\mathrm{Res}(\widehat{h};p_{m}^{+})+\widehat{f}_{1}( p_{m}^{+})\mathrm{Res}(\widehat{h};p_{m}^{-}))_{m\in\mathbb{N}},\] \[(\widehat{f}_{2}(p_{m}^{-})\mathrm{Res}(\widehat{h};p_{m}^{+})+ \widehat{f}_{2}(p_{m}^{+})\mathrm{Res}(\widehat{h};p_{m}^{-}))_{m\in\mathbb{N}} \Big{)}\]
\[\widetilde{\vec{a}}_{c_{0}^{2},\bar{b}}=\Big{(}(\tfrac{1}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}(\widehat{f}_{1}(p_{m}^{-})\widehat{r}_{*m}(p_{m}^{+})-\widehat{f}_{1}(p_{m}^{+})\widehat{r}_{*m}(p_{m}^{-}))B_{\lambda_{m}}[(1,\dots,1)])_{m\in\mathbb{N}},\] \[(\tfrac{1}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}(\widehat{f}_{2}(p_{m}^{-})\widehat{r}_{*m}(p_{m}^{+})-\widehat{f}_{2}(p_{m}^{+})\widehat{r}_{*m}(p_{m}^{-}))B_{\lambda_{m}}[(1,\dots,1)])_{m\in\mathbb{N}}\Big{)}\]
\[\widetilde{\mathbb{A}}_{c_{0}^{2},\bar{b}}:(u_{*0},u_{*1})\mapsto\] \[\Big{(}(\tfrac{\widehat{f}_{1}(p_{m}^{-})-\widehat{f}_{1}(p_{m}^{+})}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}B_{\lambda_{m}}[(u_{*m1}^{\tau k}+\bar{b}u_{*m0}^{\tau k})_{k\in K^{\lambda_{m}}}]+\tfrac{\widehat{f}_{1}(p_{m}^{-})p_{m}^{+}-\widehat{f}_{1}(p_{m}^{+})p_{m}^{-}}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}B_{\lambda_{m}}[(u_{*m0}^{\tau k})_{k\in K^{\lambda_{m}}}])_{m\in\mathbb{N}},\] \[(\tfrac{\widehat{f}_{2}(p_{m}^{-})-\widehat{f}_{2}(p_{m}^{+})}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}B_{\lambda_{m}}[(u_{*m1}^{\tau k}+\bar{b}u_{*m0}^{\tau k})_{k\in K^{\lambda_{m}}}]+\tfrac{\widehat{f}_{2}(p_{m}^{-})p_{m}^{+}-\widehat{f}_{2}(p_{m}^{+})p_{m}^{-}}{\imath\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}B_{\lambda_{m}}[(u_{*m0}^{\tau k})_{k\in K^{\lambda_{m}}}])_{m\in\mathbb{N}}\Big{)}\]
Analogously to above, we can conclude the following differentiability result.
**Corollary 2**.: _Under the assumptions of Proposition 2, and with \((\widehat{f}_{1}(p_{m}^{+})-\widehat{f}_{1}(p_{m}^{-}))_{m\in\mathbb{N}}\), \((\widehat{f}_{2}(p_{m}^{+})-\widehat{f}_{2}(p_{m}^{-}))_{m\in\mathbb{N}}\in \ell^{\infty}\), the operator \(\widetilde{\mathbb{F}}\) is well-defined as a mapping \(\mathcal{B}_{\rho}^{X}\to\mathbb{Y}\). If additionally \(\chi\in W^{2,\infty}(0,\infty)\) then \(\widetilde{\mathbb{F}}\) satisfies a Lipschitz estimate and can be extended to a (Lipschitz continuous) mapping \(\widetilde{\mathbb{F}}:\mathcal{B}_{\rho}^{\tilde{X}}(0,c^{2},b)\to\mathbb{Y}\), which is Frechet differentiable with Lipschitz continuous derivative._
## 3. Uniqueness
In this section we will prove
1. uniqueness of \(\kappa\) (while \(c_{0}\) does not necessarily need to be known for this purpose; the essential condition in which \(c_{0}\) is implicitly involved is (46));
2. linearized uniqueness simultaneously of \((\kappa,c_{0}^{2})\);
3. linearized uniqueness simultaneously of \((\kappa,b_{0})\);
from the boundary observations (7).
A crucial step for this purpose will be to show that for every \((\kappa,c_{0}^{2},\bar{b})\in\mathcal{B}_{\rho}^{X}(0,c^{2},b)\) with \(\nabla\bar{b}=0\), the operator
\[D\widetilde{\mathbb{F}}(0,c_{0}^{2}):=\left(\begin{array}{c}\partial_{\kappa}\widetilde{\mathbb{F}}(0,c_{0}^{2},\bar{b})\\ d_{c_{0}^{2}}\widetilde{\mathbb{F}}(0,c_{0}^{2},\bar{b})\end{array}\right)=\left(\begin{array}{c}\widetilde{\mathcal{R}}_{c_{0}^{2},\bar{b}}\circ\mathcal{C}_{\Sigma}\circ\partial_{\kappa}S(0,c_{0}^{2},\bar{b})\\ \widetilde{\mathcal{R}}_{c_{0}^{2},\bar{b}}\circ\mathcal{C}_{\Sigma}\circ\partial_{c_{0}^{2}}S(0,c_{0}^{2},\bar{b})\end{array}\right)\]
is an isomorphism between \(\mathbb{X}_{1}\times\mathbb{X}_{2}\) and \(\mathbb{Y}\). Here \(\partial_{\kappa}\widetilde{\mathbb{F}}\) is the (partial) Frechet derivative according to the results from the previous section, while \(d_{c_{0}^{2}}\widetilde{\mathbb{F}}\) is just a formal linearization in the \(c_{0}^{2}\) direction. On one hand, this implies that also \(\partial_{\kappa}\widetilde{\mathbb{F}}(0,c_{0}^{2},\bar{b}):\mathbb{X}_{1}\rightarrow\mathbb{Y}\) is an isomorphism and thus, by the Inverse Function Theorem, (i) follows. On the other hand, we can directly conclude linearized uniqueness (ii) in the sense that \((\partial_{\kappa}\widetilde{\mathbb{F}}(0,c_{0}^{2},\bar{b}),d_{c_{0}^{2}}\widetilde{\mathbb{F}}(0,c_{0}^{2},\bar{b}))\) and thus \((\mathcal{C}_{\Sigma}\circ\partial_{\kappa}S(0,c_{0}^{2},\bar{b}),\mathcal{C}_{\Sigma}\circ\partial_{c_{0}^{2}}S(0,c_{0}^{2},\bar{b}))\) is injective. Similarly to the latter, one can also prove (iii).
### An isomorphism property of the linearized forward operator
Fixing \(c_{0}\), \(\bar{b}\), we denote by \((p_{m}^{\pm})_{m\in\mathbb{N}}\) the sequence of poles according to (40). Note that these are precisely the poles of the Laplace transformed data \(\widehat{h}\), provided \(c_{0}\), \(\bar{b}\) are the actual coefficients of the PDE for which this data has been taken, which we assume to hold in this section.
As in the linearized injectivity proof of [23] we use an excitation \(r\) that for \(\kappa=0\) leads to a space-time separable solution \(u^{0}(x,t)=\phi(x)\psi(t)\) of (13), namely
\[\begin{split}& r(x,t):=\phi(x)\psi^{\tau\prime}(t)+c^{2}(\mathcal{ A}_{c}\phi)(x)\psi^{\tau}+\bar{b}\phi(x)\psi^{\tau\prime}(t)\\ &\text{where }\psi^{\tau}=\tau\psi^{\prime}+\psi\end{split} \tag{47}\]
with some given functions \(\phi(x)\), \(\psi(t)\) such that
\[\begin{split}&\phi\in\mathcal{D}(\mathcal{A}_{c}),\quad\tfrac{1}{ \phi},\tfrac{1}{\Delta\phi}\in\dot{H}^{s}(\Omega)\text{ for some }s>\tfrac{d}{2}\\ &\psi(0)=\psi^{\prime}(0)=\psi^{\prime\prime}(0)=0\\ &\nu_{m}:=\widehat{f}_{1}(p_{m}^{-})\widehat{f}_{2}(p_{m}^{+})- \widehat{f}_{1}(p_{m}^{+})\widehat{f}_{2}(p_{m}^{-})\neq 0\quad\text{ for all }m\in\mathbb{N}\\ &\text{ for }f_{1}(t):=\tau\psi^{\prime}(t)+\psi(t),\quad f_{2}(t):= \chi(|\phi|^{2}_{L^{2}(\Omega)}(\tau\psi^{\prime}(t)+\psi(t))^{2})(\psi^{2}) ^{\prime\prime}(t).\end{split} \tag{48}\]
With this, the linearization of (13) at \(\kappa=0\), \(c_{0}^{2}\), \(b_{0}=\bar{b}\) becomes
\[\begin{split}&\underline{du}_{tt}^{\tau}+c^{2}\mathcal{A}_{c} \underline{du}^{\tau}+\bar{b}\underline{du}_{t}^{\tau}=f_{2}(t)\underline{d \kappa}(x)\phi^{2}(x)-f_{1}(t)\underline{d}c_{0}^{2}\Delta\phi(x)\quad\text{ in }\Omega\times(0,T)\\ &\underline{du}(0)=0,\quad\underline{du}^{\tau}(0)=0,\quad \underline{du}_{t}^{\tau}(0)=0\quad\text{ in }\Omega\\ &\underline{du}^{\tau}=\tau\underline{du}_{t}+\underline{du}. \end{split}\]
Moreover, using separation of variables and the relaxation functions we can write
\[\widehat{\underline{du}^{\tau}}(x,z)=\sum_{j=1}^{\infty}\tfrac{1}{\omega_{ \lambda_{j}}(z)}\sum_{k\in K^{\lambda_{j}}}\Bigl{(}\widehat{f}_{2}(z)\langle \underline{d}\underline{\kappa}\phi^{2},\varphi_{j}^{k}\rangle+\widehat{f}_{1} (z)\langle\underline{d}\underline{c}_{0}^{2}(-\Delta\phi),\varphi_{j}^{k} \rangle\Bigr{)}\varphi_{j}^{k}(x). \tag{49}\]
Due to Lemma 2.2, taking the residues at the poles singles out the contributions of the individual eigenspaces to this sum, that is, after evaluation by the observation operator
\[\begin{split}&\text{Res}(\widehat{\mathcal{C}_{\Sigma}} \underline{du};p_{m}^{\pm})\\ &=\text{Res}(\tfrac{1}{\omega_{\lambda_{m}}};p_{m}^{\pm})\sum_{k \in K^{\lambda_{m}}}\Bigl{(}\widehat{f}_{2}(p_{m}^{\pm})\langle\underline{d} \underline{\kappa}\phi^{2},\varphi_{m}^{k}\rangle+\widehat{f}_{1}(p_{m}^{\pm}) \langle\underline{d}\underline{c}_{0}^{2}(-\Delta\phi),\varphi_{m}^{k}\rangle \Bigr{)}\mathcal{C}_{\Sigma}\varphi_{m}^{k}.\end{split} \tag{50}\]
This allows us to exploit some cancellations and write the linearization of the forward operator at \(\kappa=0\), \(c_{0}^{2}\), \(b_{0}=\bar{b}\) as
\[\begin{split} D\widetilde{\mathbb{F}}(0,c_{0}^{2})(\underline{d} \underline{\kappa},\underline{d}c_{0}^{2})=\Bigl{(}(&\tfrac{\nu_{m }}{\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}B_{\lambda_{m}}[(\langle\underline{d} \underline{\kappa}\phi^{2},\varphi_{m}^{k}\rangle)_{k\in K_{\lambda_{m}}}])_{m \in\mathbb{N}},\\ &(-\tfrac{\nu_{m}}{\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}B_{\lambda_ {m}}[(\langle\underline{d}c_{0}^{2}(-\Delta\phi),\varphi_{m}^{k}\rangle)_{k\in K _{\lambda_{m}}}])_{m\in\mathbb{N}}\Bigr{)}\end{split} \tag{51}\]
with \(\nu_{m}\) as in (48). As a consequence of Lemma 2.3 and (48), the factors \(\frac{\nu_{m}}{\sqrt{4c^{2}\lambda_{m}-\bar{b}^{2}}}\) appearing here do not vanish. Under the condition
\[|\nu_{m}|^{2}\geq c_{\nu}\lambda_{m}^{s-1},\quad m\in\mathbb{N}\text{ for some }s>\frac{d}{2} \tag{52}\]
we have
\[\begin{split}&\|D\widetilde{\mathbb{F}}(0,c_{0}^{2})(\underline{d\kappa},\underline{dc}_{0}^{2})\|_{\mathbb{Y}}^{2}\\ &=\sum_{m\in\mathbb{N}}\frac{\lambda_{m}^{2}|\nu_{m}|^{2}}{4c^{2}\lambda_{m}-\bar{b}^{2}}\sum_{k\in K^{\lambda_{m}}}\Big{(}|\langle\underline{d\kappa}\phi^{2},\varphi_{m}^{k}\rangle|^{2}+|\langle\underline{dc}_{0}^{2}(-\Delta\phi),\varphi_{m}^{k}\rangle|^{2}\Big{)}\\ &\geq\frac{c_{\nu}}{c^{2}}\Big{(}\|\underline{d\kappa}\phi^{2}\|_{\dot{H}^{s}(\Omega)}^{2}+\|\underline{dc}_{0}^{2}(-\Delta\phi)\|_{\dot{H}^{s}(\Omega)}^{2}\Big{)}\end{split} \tag{53}\]
and thus, \(D\widetilde{\mathbb{F}}(0,c_{0}^{2})\) is an isomorphism between \(\mathbb{X}_{1}\times\mathbb{X}_{2}\) and \(\mathbb{Y}\), where
\[\mathbb{X}_{1}:=\{\underline{d\kappa}\in L^{\infty}(\Omega)\,:\,\underline{d\kappa}\phi^{2}\in\dot{H}^{s}(\Omega)\},\quad\mathbb{X}_{2}:=\{\underline{dc}_{0}^{2}\in L^{\infty}(\Omega)\,:\,\underline{dc}_{0}^{2}(-\Delta\phi)\in\dot{H}^{s}(\Omega)\},\]
so that \(\mathbb{X}_{1}\times\mathbb{X}_{2}\times\{\bar{b}\}\subseteq X\).
### Linearized uniqueness of \((\kappa,c_{0}^{2})\) for constant \(b_{0}(x)\equiv\bar{b}\)
From the isomorphism property of \(D\widetilde{\mathbb{F}}(0,c_{0}^{2})\), we conclude its injectivity and thus injectivity of \((\mathcal{C}_{\Sigma}\circ\partial_{\kappa}S(0,c_{0}^{2},\bar{b}),\mathcal{C}_{\Sigma}\circ\partial_{c_{0}^{2}}S(0,c_{0}^{2},\bar{b}))\).
### Linearized uniqueness of \((\kappa,b_{0})\) for fixed \(c_{0}(x)\)
Similarly to the above isomorphism proof, also injectivity of \((\mathcal{C}_{\Sigma}\circ\partial_{\kappa}S(0,c_{0}^{2},\bar{b}),\mathcal{C}_{\Sigma}\circ\partial_{b_{0}}S(0,c_{0}^{2},\bar{b}))\) can be shown. To this end, we keep (47) and replace \(f_{1}(t):=\tau\psi^{\prime}(t)+\psi(t)\) in (48) by \(f_{1}(t):=\tau\psi^{\prime\prime}(t)+\psi^{\prime}(t)\). With this, the linearization of (13) at \(\kappa=0\), \(c_{0}^{2}\), \(b_{0}=\bar{b}\) becomes
\[\underline{du}_{tt}^{\tau}+c^{2}\mathcal{A}_{c}\underline{du}^{ \tau}+\bar{b}\underline{du}_{t}^{\tau} =f_{2}(t)\underline{d\kappa}(x)\phi^{2}(x)+f_{1}(t)\underline{db}_ {0}\phi(x)\quad\text{ in }\Omega\times(0,T)\] \[\underline{du}(0)=0,\quad\underline{du}^{\tau}(0)=0,\quad \underline{du}^{\tau}_{t}(0)=0\quad\text{ in }\Omega\] \[\underline{du}^{\tau}=\tau\underline{du}_{t}+\underline{du}.\]
and so we just have to replace \(\underline{dc}_{0}^{2}(-\Delta\phi)\) by \(\underline{db}_{0}\phi\) in (49) and (50), (51), (53).
### Local uniqueness of \(\kappa\)
The fact that the operator \(D\widetilde{\mathbb{F}}(0,c_{0}^{2})\) is an isomorphism between \(\mathbb{X}_{1}\times\mathbb{X}_{2}\) and \(\mathbb{Y}\) also implies that \(\partial_{\kappa}\widetilde{\mathbb{F}}(0,c_{0}^{2},\bar{b}):\mathbb{X}_{1}\to\mathbb{Y}\) is an isomorphism. This, together with the differentiability result Corollary 2, allows us to apply the Inverse Function Theorem (see, e.g., [41, Section 4.8]) and conclude that \(\mathbb{F}(\cdot,c_{0}^{2},\bar{b}):\mathbb{X}_{1}\to\mathbb{Y}\) is locally bijective (with a continuous inverse between these spaces) and thus, due to completeness of the eigensystem and our assumption that \(\phi\neq 0\) in \(\Omega\), the full forward map \(\kappa\to\mathcal{C}_{\Sigma}u\) is locally injective. More precisely, we even have local well-posedness of the inverse problem in the \(\mathbb{X}_{1}\)-\(\mathbb{Y}\) topologies.
**Theorem 3.1**.: _Assume that the conditions of Proposition 2 are satisfied, that (46) holds, \(r\) takes the form (47) with \(\phi\), \(\psi\) satisfying (48), (52), and that \(c_{0}(x)\), \(b_{0}(x)\equiv\bar{b}\) are the actual sound speed and damping coefficients. Then there exists \(\rho>0\) such that for any \(\kappa,\tilde{\kappa}\in B_{\rho}^{\mathbb{X}_{1}}(0)\), equality of the observations \(\mathcal{C}_{\Sigma}S(\kappa,c_{0}^{2},\bar{b})(t)=\mathcal{C}_{\Sigma}S(\tilde {\kappa},c_{0}^{2},\bar{b})(t)\), \(t>0\) implies \(\kappa=\tilde{\kappa}\)._
This requires neither \(c_{0}(x)\) nor \(\mathcal{A}_{c}\) to be known; only the conditions (47), (48), (46) need to be satisfied by the (possibly unknown) operator \(\mathcal{A}_{c}\) and its eigensystem.
Existence of an excitation \(r\) according to (47) such that \(e^{\alpha/2}\cdot r\in L^{2}(L^{2}(\Omega))\) and (48) holds might be hard to reconcile with (52), in particular in dimensions larger
than one, where (52) requires growth of \(\nu_{m}\). However, we can use the result to recover an arbitrary number \(M\) of generalized Fourier components of \(\kappa\phi^{2}\), assuming only (47), \(e^{\alpha/2}\,r\in L^{2}(L^{2}(\Omega))\) and (48) to hold, by setting \(c_{\nu}:=\max\{|\nu_{m}|^{-2}\lambda_{m}^{s-1}\,:\,m\in\{1,\ldots,M\}\}>0\) for a given \(s>\frac{d}{2}\) and working in the space \(\mathbb{X}_{1M}:=\{\underline{d\kappa}\in L^{\infty}(\Omega)\,:\,\underline{d\kappa}\phi^{2}\in\operatorname{span}\big((\varphi_{j}^{k})_{j\in\{1,\ldots,M\},\,k\in K^{\lambda_{j}}}\big)\subseteq\dot{H}^{s}(\Omega)\}\).
**Corollary 3**.: _Assume that the conditions of Proposition 2 are satisfied, that (46) holds, \(r\) takes the form (47) with \(\phi\), \(\psi\) satisfying (48), and that \(c_{0}(x)\), \(b_{0}(x)\equiv\bar{b}\) are the actual sound speed and damping coefficients. Then there exists \(\rho>0\) such that for any \(\kappa,\tilde{\kappa}\in B_{\rho}^{\mathbb{X}_{1M}}(0)\), equality of the observations \(\mathcal{C}_{\Sigma}S(\kappa,c_{0}^{2},\bar{b})(t)=\mathcal{C}_{\Sigma}S( \tilde{\kappa},c_{0}^{2},\bar{b})(t)\), \(t>0\) implies \(\kappa=\tilde{\kappa}\)._
**Acknowledgment.** This work was supported by the Austrian Science Fund FWF [http://dx.doi.org/10.13039/501100002428](http://dx.doi.org/10.13039/501100002428) under the grant P36318.
|
2304.04239 | Universal Transitions between Growth and Dormancy via Intermediate
Complex Formation | A simple cell model consisting of a catalytic reaction network with
intermediate complex formation is numerically studied. As nutrients are
depleted, the transition from the exponential growth phase to the
growth-arrested dormant phase occurs along with hysteresis and a lag time for
growth recovery. This transition is caused by the accumulation of intermediate
complexes, leading to the jamming of reactions and the diversification of
components. These properties are generic in random reaction networks, as
supported by dynamical systems analyses of corresponding mean-field models. | Jumpei F. Yamagishi, Kunihiko Kaneko | 2023-04-09T13:55:45Z | http://arxiv.org/abs/2304.04239v1 | # Universal Transitions between Growth and Dormancy via Intermediate Complex Formation
###### Abstract
A simple cell model consisting of a catalytic reaction network with intermediate complex formation is numerically studied. As nutrients are depleted, the transition from the exponential growth phase to the growth-arrested dormant phase occurs along with hysteresis and a lag time for growth recovery. This transition is caused by the accumulation of intermediate complexes, leading to the jamming of reactions and the diversification of components. These properties are generic in random reaction networks, as supported by dynamical systems analyses of corresponding mean-field models.
## I Introduction
As microbial cells proliferate, they are crowded and nutrients in the environment are depleted. The cells then enter the dormant phase (or so-called stationary phase), in which cell growth is significantly arrested [1]. This behavior is commonly observed across microbial species and even mammalian cells under a variety of environmental conditions [2]. In fact, most microbial cells in natural ecosystems are in the growth-arrested dormant phase as they are under resource limitation [3; 4; 5; 6; 7]. Once cells enter the dormant phase, the intracellular metabolite phenotypes drastically change, and hysteresis and bistability between phenotypes with exponential and arrested growth are observed [8; 9; 10]; in addition, a certain time is required to recover growth even after the resource supply has resumed, and it is known as the lag time [11; 12; 13].
Despite the importance of such universal and mundane behavior, the theoretical understanding of dormancy is still in its infancy compared with that of the exponential growth phase, for which well-established quantitative theories are available [14; 15; 16]. Although specific molecular mechanisms of dormancy have been extensively studied [4; 17], little attention has been paid to establishing a theory for universal characteristics of the dormant phase and transitions to it. Refs. [18] and [19] represent a few early exceptions. In Ref. [18], by assuming that nutrient limitation leads to the accumulation of waste chemicals, a phenomenological model for the growth-dormant transition was proposed and quantitative laws of the lag time were derived. In Ref. [19], an abstract spin glass model for aging dynamics was proposed. However, the universality of their results and the origin(s) of these specific mechanisms have not been fully explored. Therefore, a better understanding of the growth-dormant transition as a universal behavior of cells growing through intracellular reactions with many components is required.
In this Letter, by considering a simple cell model consisting of catalytic reactions of many components, we demonstrate that such a transition between growth and dormant phases generally appears without specifically tuning the intracellular reactions as long as intermediate complexes between substrates and catalysts have sufficient lifetimes. The transition is caused by the accumulation of complexes under the depletion of nutrients, and it is characterized as a cusp bifurcation in dynamical systems theory. The transition observed in random reaction networks is then analyzed using the "mean-field" theory of catalytic reaction dynamics, which also implies that the transition to a dormant phase does not require any special mechanism and is a universal feature of cells that grow by intracellular catalytic reactions.
## II Model
In this study, we adopt a simple model of cellular dynamics that captures only the basic features of these dynamics. It consists of intracellular reaction networks and transport reactions of externally supplied nutrient(s). Complicated intracellular metabolic reactions are simplified as randomly connected catalytic reaction networks. Although such models with catalytic reaction networks have reproduced the statistics of cells in the exponential growth phase [20; 21], they do not demonstrate the growth-dormant transition. One possible drawback in these models is that catalytic reactions progress immediately. In reality, each chemical reaction proceeds only after an intermediate complex between the substrate and the catalyst is formed.
Here we introduce a model that includes the formation of intermediate complexes in reactions and examine whether and how the growth-dormant transition is exhibited by the model. Then, each catalytic reaction \(\rho\), in which substrate \(X_{\rho_{s}}\) is converted into product \(X_{\rho_{p}}\) by catalyst \(X_{\rho_{e}}\), consists of two-step elementary reaction processes with the formation of an intermediate complex \(Y_{\rho}\) as follows:
\[X_{\rho_{s}}+X_{\rho_{e}}\underset{k_{\rho}^{-}}{\overset{k_{\rho}^{+}}{\rightleftharpoons}}Y_{\rho}\overset{v_{\rho}}{\rightarrow}X_{\rho_{p}}+X_{\rho_{e}}.\]
Here, each elementary process proceeds according to the law of mass action with the labeled coefficient, and \(\rho_{s}\), \(\rho_{p}\), and \(\rho_{e}\) denote the indices of the substrate, product, and catalyst for reaction \(\rho\), respectively. In the adiabatic limit \(v_{\rho}\rightarrow\infty\), the above reaction processes are reduced to the single mass action kinetics without intermediate complex formation, \(X_{\rho_{s}}+X_{\rho_{e}}\to X_{\rho_{p}}+X_{\rho_{e}}\), and the model is reduced to those studied earlier [20; 21]. In contrast, when \(v_{\rho}\) is small, the intermediate complex \(Y_{\rho}\) can accumulate, leading to a decrease in free reactants that are not bound into complexes, which can hinder the reaction processes.
Considering a cell consisting of \(n\) chemicals and \(N_{r}\) reactions (and corresponding intermediate complexes), its state is represented by a set of concentrations \((\mathbf{x},\mathbf{y})\) of free reactants (that are not bound into complexes) \(X_{i}\) and complexes \(Y_{\rho}\). The time change of the cellular state \((\mathbf{x},\mathbf{y})\) is then given as follows:
\[\dot{x}_{i} =\sum_{\rho}(\delta_{i,\rho_{p}}+\delta_{i,\rho_{e}})v_{\rho}y_{ \rho}-(\delta_{i,\rho_{s}}+\delta_{i,\rho_{e}})f_{\rho}(\mathbf{x},\mathbf{y})\] \[+F_{i}(\mathbf{x};S_{\mathrm{ext}},\alpha)-\mu x_{i}, \tag{1}\] \[\dot{y}_{\rho} =f_{\rho}(\mathbf{x},\mathbf{y})-v_{\rho}y_{\rho}-\mu y_{\rho}, \tag{2}\]
where \(f_{\rho}(\mathbf{x},\mathbf{y}):=k_{\rho}^{+}x_{\rho_{s}}x_{\rho_{e}}-k_{\rho} ^{-}y_{\rho}\) is the total consumption rate of substrate \(\rho_{s}\) by reaction \(\rho\), and \(\delta\) is Kronecker's delta. The term \(F_{i}(\mathbf{x};S_{\mathrm{ext}},\alpha)\) in Eq. (1) represents the intake of chemical \(X_{i}\) (\(i=0,1,\cdots,n-1\)), which can be non-zero if \(X_{i}\) is a nutrient but is zero otherwise. The last terms in Eqs. (1-2), \(-\mu x_{i}\) and \(-\mu y_{\rho}\), represent the dilution of each concentration due to cellular volume growth. The growth rate \(\mu\) is given by \(\mu(\mathbf{x},\mathbf{y}):=\sum_{i}F_{i}(\mathbf{x})\), because for simplicity, we assumed that the contribution of each chemical \(X_{i}\) to volume or weight is uniform regardless of \(i\). \(\sum_{i}x_{i}+2\sum_{\rho}y_{\rho}=1\) is then constant in the dynamics (1-2) based on the law of mass conservation.
Below, for simplification purposes, the reaction rate constants \(k_{\rho}^{+}\), \(k_{\rho}^{-}\), and \(v_{\rho}\) are set as independent of \(\rho\), and they are denoted by \(k^{+}=1\), \(k^{-}=0\), and \(v\), respectively. For simplicity, we also assumed that there is only a single nutrient chemical \(X_{0}\). Its intake is mediated by transporter chemical \(X_{1}\) with \(\alpha=2\), i.e., \(F_{i}(\mathbf{x};S_{\mathrm{ext}},\alpha)=\delta_{i0}S_{\mathrm{ext}}x_{1}^{\alpha}\), where \(S_{\mathrm{ext}}\) denotes the environmental concentration of nutrient chemical \(X_{0}\), and the transport coefficient for \(F_{i}\) is normalized as unity. Note that the following results and arguments hold independent of the details of settings, such as parameter values and specific functional forms of nutrient intake \(F_{i}\) (see also Supplemental Material (SM), Sec. A).
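A minimal numerical sketch of the dynamics (1-2) is given below; the random network, the seed, and the integration settings are illustrative choices and not those used for the figures, while the rate constants and the nutrient-uptake form follow the simplifications stated above (\(k^{+}=1\), \(k^{-}=0\), uniform \(v\), \(F_{i}=\delta_{i0}S_{\mathrm{ext}}x_{1}^{\alpha}\) with \(\alpha=2\)).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of Eqs. (1)-(2) for one random catalytic network; the network, seed and
# integration settings are illustrative and not those used for the figures.
rng = np.random.default_rng(0)
n, N_r, v, S_ext, alpha = 10, 30, 0.01, 1.0, 2
reactions = [tuple(rng.choice(n, size=3, replace=False)) for _ in range(N_r)]  # (substrate, product, catalyst)

def rhs(t, z):
    x, y = z[:n], z[n:]
    F = np.zeros(n)
    F[0] = S_ext * x[1] ** alpha        # nutrient intake F_0 = S_ext * x_1^alpha
    mu = F.sum()                        # growth rate mu = sum_i F_i
    dx = F - mu * x
    dy = -v * y - mu * y
    for rho, (s, p, c) in enumerate(reactions):
        f = x[s] * x[c]                 # f_rho with k+ = 1, k- = 0
        dx[s] -= f                      # substrate and catalyst bound into Y_rho
        dx[c] -= f
        dx[p] += v * y[rho]             # product and catalyst released from Y_rho
        dx[c] += v * y[rho]
        dy[rho] += f
    return np.concatenate([dx, dy])

z0 = np.concatenate([np.full(n, 1.0 / n), np.zeros(N_r)])  # sum_i x_i + 2 sum_rho y_rho = 1
sol = solve_ivp(rhs, (0.0, 2000.0), z0, method="LSODA", rtol=1e-8, atol=1e-12)
x_ss = sol.y[:n, -1]
print("approximate steady growth rate:", S_ext * x_ss[1] ** alpha)
```

Scanning \(S_{\mathrm{ext}}\) (and \(v\)) with such a routine while recording the reached steady state is, in principle, how curves of the type shown in Fig. 1(b,d,e) can be traced; whether a given random network exhibits the transition has to be checked case by case.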
## III Results
### Randomly generated networks
To understand the behaviors of the above model, we first randomly generated hundreds of intracellular reaction networks and numerically investigated the networks [22]. We then observed discontinuous transitions between growth and dormant phases against external nutrient abundance \(S_{\mathrm{ext}}\).
As an example, we consider the reaction network in Fig. 1(a). In Fig. 1(b), the steady growth rate \(\mu^{*}\), numerically obtained by solving the dynamics (1-2), is plotted against the environmental nutrient concentration \(S_{\mathrm{ext}}\). As shown, \(\mu^{*}\) decreases by orders of magnitude at \(S_{\mathrm{ext}}=S_{\mathrm{ext}}^{c}\), thus demonstrating the transition from growth to growth-arrested dormant phase. In addition, when \(S_{\mathrm{ext}}\) is increased starting from the dormant phase, the transition occurs at a larger \(S_{\mathrm{ext}}\), thus demonstrating hysteresis and bistability between the growth and dormant phases with intermediate levels of nutrient supply \(S_{\mathrm{ext}}\) (Fig. 1(b)) [23], as is observed for real microbes [8; 9; 10].
Through this _growth-dormant transition_, the intracellular chemical compositions and dominant reactions at work also change drastically (Fig. 1(b,c)). In the growth phase with larger \(S_{\mathrm{ext}}\), the nutrient influx is concentrated on an _autocatalytic growth subnetwork_[24; 25; 26; 27; 28] consisting of a few chemicals and reactions that connect the nutrient to the transporter (and associated byproducts). In contrast, in the dormant phase, fluxes spread over many chemicals in a subnetwork that cannot sustain growth by itself, which we term the _non-growing subnetwork_. Here, a subnetwork is referred to as a closed set of chemicals and reactions whose catalysts, substrates, and products are included therein and in which no other chemicals are included (see also SM, Sec. A for details).
Figure 1: Example of a growth-dormant transition in randomly generated networks. (a) Reaction network (\(n=10,N_{r}=30\)). Chemicals at arrowtails are transformed into those at arrowheads, catalyzed by the chemicals labeled on the edges. Nutrient \(X_{0}\) is taken up via active transport by transporter \(X_{1}\) in proportion to \(x_{1}^{2}\). (b) Dependence of \(\mu^{*}\) (black points) and \(\mathbf{x}^{*}\) (colored lines) on \(S_{\mathrm{ext}}\). \(v=0.01\). (inset) Hysteresis and bistability for \(\mu^{*}\). \(v=0.03\). (c) Dominant pathways for the growth phase (\(S_{\mathrm{ext}}=10\); top) and dormant phase (\(S_{\mathrm{ext}}=0.0016\); bottom). \(v=0.01\). The edge colors represent reaction fluxes in the log scale. (d) Dependence of composition entropy \(H:=-\sum_{i}x_{i}\log x_{i}-\sum_{\rho}2y_{\rho}\log(2y_{\rho})\) (black) and \(Y:=\sum_{\rho}y_{\rho}\) (gray) on \(S_{\mathrm{ext}}\). \(v=0.01\). (e) Dependence of \(\mu^{*}\) on \((S_{\mathrm{ext}},v)\). \(\mu^{*}\) is numerically calculated by decreasing \(S_{\mathrm{ext}}\) for each \(v\), and hysteresis is observed in the area surrounded by the gray line.
These subnetworks compete with each other while also overlapping: the activation of the autocatalytic growth subnetwork suppresses the non-growing subnetwork via growth-induced dilution, whereas the latter inhibits the former by the accumulation of complexes because reactants that are combined into some complexes in the non-growing subnetwork cannot be used for the reactions in the autocatalytic growth subnetwork. Consistently, the total concentration of complexes \(Y:=\sum_{\rho}y_{\rho}\) increases across the growth-dormant transition, as shown in Fig. 1(d). Due to the competition between the autocatalytic and non-growing subnetworks, this transition exhibits discontinuity, hysteresis, and bistability (Fig. 1(b)).
To measure such competition between autocatalytic and non-growing subnetworks, we defined _composition entropy_, \(H(\mathbf{x},\mathbf{y}):=-\sum_{i}x_{i}\log x_{i}-\sum_{\rho}2y_{\rho}\log(2 y_{\rho})\). In general, in the growth phase, an autocatalytic subnetwork with the largest growth rate should be dominant and \(H\) should be small, whereas in the dormant phase, many reactions and chemicals in the non-growing subnetwork could be engaged to the same degree and \(H\) can be relatively large. As both subnetworks are comparably active near the critical nutrient concentration \(S_{\mathrm{ext}}^{c}\), the composition entropy \(H\), or the diversity of the intracellular chemical composition, reaches a maximum near the transition point (Fig. 1(d)). Notably, such a trend is common among randomly generated networks (Fig. 2). From a biological perspective, this prediction would be consistent with the observations that stringent responses increase the diversity of the cellular components during the transition and in the dormant phase [4, 17, 29].
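For completeness, the composition entropy defined above can be evaluated directly from a state \((\mathbf{x},\mathbf{y})\); the helper below (with the usual convention \(0\log 0=0\)) is only a transcription of the definition.

```python
import numpy as np

# Composition entropy H(x, y) as defined above, with the convention 0 log 0 = 0.
def composition_entropy(x, y):
    p = np.concatenate([np.asarray(x, float), 2.0 * np.asarray(y, float)])
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))
```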
The suppression of growth at the transition can be understood as a type of jamming caused by the accumulation of intermediate complexes: when the total concentration of complexes is larger, the free catalysts necessary for reactions in the autocatalytic growth subnetwork are limited [30]. Consistently, with \(v>v^{c}\), the discontinuous transition and hysteresis are not observed against changes in \(S_{\mathrm{ext}}\) (Fig. 1(e)). Moreover, the dependence of the steady growth rate \(\mu^{*}\) on \((S_{\mathrm{ext}},v)\) in Fig. 1(e) suggests a cusp bifurcation in the dynamical systems theory (as is also confirmed by the following mean-field analysis). We also found that as \(v\) is smaller, both \(S_{\mathrm{ext}}^{c}\) and \(\mu_{\mathrm{max}}\) are smaller; in other words, when \(v\) varies, a trade-off occurs between maximum growth rate \(\mu_{\mathrm{max}}\) and minimal nutrient concentration for the growth phase, \(S_{\mathrm{ext}}^{c}\). Such a trade-off has historically been considered a result of evolution leading to adaptations to either abundant or scarce nutrient environments [31; 32], whereas our results suggest that this trade-off is a universal feature of growing cells with complex reaction networks.
Statistically, sufficiently large reaction networks are expected to include non-growing subnetwork(s) in addition to autocatalytic growth subnetwork(s). Indeed, even with \(n=10\sim 30\), about half of the randomly generated networks exhibited growth-dormant transitions (Fig. S1(a) in SM). In addition, the proportion of networks that exhibit transitions is maximal for relatively sparse reaction networks, and the peak value gradually increases as the number \(n\) of chemicals increases. The following characteristics are also common to such networks: (i) growth-dormant transition against changes in \(S_{\mathrm{ext}}\) requires small \(v\); (ii) hysteresis against changes in \(S_{\mathrm{ext}}\); (iii) increases in composition entropy \(H\) around the transitions; and (iv) a trade-off between maximum growth rates \(\mu_{\mathrm{max}}\) and minimal nutrient concentrations \(S_{\mathrm{ext}}^{c}\) to sustain growth. These results suggest the universality of growth-dormant transitions due to reactant competition via complex formation in complicated reaction networks, as is the case for metabolic networks in actual cells.
We also numerically calculated the time for growth recovery after starvation as follows: First, up to \(t=0\), cells are set in nutrient-rich conditions with sufficiently large \(S_{\mathrm{ext}}\), and they remain in steady states with exponential growth. Second, the external nutrient supply is instantaneously depleted to \(S_{\mathrm{ext}}=0\) until \(t=T_{\mathrm{strv}}\). Finally, \(S_{\mathrm{ext}}\) is instantaneously increased to the original value. Then, a certain period \(T_{\mathrm{lag}}\gg 1/\mu_{\mathrm{max}}\), known as the lag time, is required for the cell to recover the original exponential growth, if the non-growing subnetwork is not a cycle and the amount of transporter chemical is sufficiently reduced therein; the lag time \(T_{\mathrm{lag}}\) increases with starvation time \(T_{\mathrm{strv}}\) in the form \(T_{\mathrm{lag}}\propto T_{\mathrm{strv}}^{\beta}\) for a certain range (up to some saturation time), as observed in real microbes [33, 34] (see SM, Fig. S2 for an example). Here, the transporter's concentration gradually decreases under starvation; thus, the time for growth recovery, which requires the regain of the transporter, increases with starvation time [35]. The exponent \(\beta\) ranges from approximately \(0.1\) to \(0.5\) depending on the network structures that alter the intracellular reaction dynamics.
### Mean-field analysis
To further investigate the mechanism underlying the growth-dormant transition in terms of dynamical systems theory, we constructed mean-field models. First, we considered a model with one effective concentration variable \(X\) and associated complexes \(Y\) in addition to the nutrient \(S\) (Fig. 3(a)). This model exhibits the growth-dormant transition, although it requires extremely small values of the parameter \(v\), \(v<\mu_{\mathrm{max}}\), and \(\alpha>2\).
Then, we considered another mean-field model that incorporates
Figure 2: The steady growth rate \(\mu^{*}\), the total concentration of complexes \(Y\), and composition entropy \(H\) are plotted against \(S_{\mathrm{ext}}/S_{\mathrm{ext}}^{c}\) for several randomly generated networks that exhibit growth-dormant transitions in different colors. \(n=10,N_{r}=30\), \(v=0.01\).
another variable \(T\) representing the mean-field for the concentration of the transporter(s) in addition to \(X\) representing the remaining non-nutrient chemicals (Fig. 3(b)). The numbers of chemicals represented by \(X\) and \(T\) are denoted by \(n_{X}\) and \(n_{T}\), respectively. As only the complex \(Y\) between \(X\) and \(T\) is considered for simplicity in this model [36], it includes the autocatalytic growth subnetwork, \(S+X\to T+X\) and \(S+X\to 2X\), and the single non-growing subnetwork, \(T+X\rightleftharpoons Y\). This mean-field model reproduces common behaviors observed for randomly generated networks, including discontinuous growth-dormant transitions with \(v>\mu_{\max}\) (Fig. 3(b)). The transition occurs when \(n_{X}>n_{T}=1\), and a larger number \(n_{X}\) of \(X\) leads to a larger \(S_{\mathrm{ext}}^{c}\) (see SM, Fig. S5).
From the bifurcation analysis (Fig. 3(c-d)), we found that the growth-dormant transition occurs as a cusp bifurcation against changes in \(S_{\mathrm{ext}}\) and \(v\)[37]. This observation can explain some of the above-mentioned properties of randomly generated networks, i.e., discontinuous transitions and hysteresis. Notably, although both transporter \(T\) and the remaining chemicals \(X\) are essential for cell growth, their competition leads to a flow field with mutual inhibition as in the toggle switch at the intermediate value of \(S_{\mathrm{ext}}\). Furthermore, from the self-consistent equation for the steady growth rate \(\mu^{*}\), we can determine where and how the growth-dormant transition occurs (Fig. 3(e)).
## IV Discussion
In this Letter, we studied a model of catalytic reaction networks wherein a variety of components react via the formation of intermediate complexes. This model exhibits discontinuous growth-dormant transitions against nutrient conditions as long as the formed complexes have sufficient lifetimes (i.e., with small \(v\)). This transition to growth-arrested dormant phases is caused by the accumulation of intermediate complexes under nutrient-poor conditions, which results in the jamming of reactions in the autocatalytic growth subnetwork and the relatively even distribution of chemical concentrations over diverse components in the non-growing subnetwork. Remarkably, other basic characteristics of dormancy, i.e., hysteresis between the exponential growth and dormant phases, the lag time for growth recovery after starvation, and a trade-off between maximum growth rate \(\mu_{\max}\) and minimal nutrient concentration \(S_{\mathrm{ext}}^{c}\) to sustain growth (or a sort of sensitivity to nutrient scarcity) are also reproduced. These results indicate that growth-dormant transitions and dormancy might be inevitable for cells that grow via complex-forming catalytic reaction networks and likely emerge without tuning by evolution or adaptation; thus, even protocells at the primitive stage of life [38; 39] are expected to exhibit such transitions to dormancy, which would be relevant to their survival under environmental stresses. On the other hand, further studies of detailed realistic models, such as those including distributed parameters and more realistic network structures, will be necessary to reveal how the above fundamental characteristics of dormancy are preserved or changed by evolution.
Figure 3: Mean-field models. (a) Mean-field model with \(S\), \(X\), and \(Y\) only. (left) Network structure. (right) Dependence of \(\mu^{*}\) and steady states on \(S_{\mathrm{ext}}\). \(\alpha=3,v=0.001\). (b-e) Mean-field model with the distinction between transporter \(T\) and the remaining chemicals \(X\). Unless otherwise stated, \(v=0.01,n_{X}=2\). (b) (left) Network structure. (right) Dependence of \(\mu^{*}\) and steady states on \(S_{\mathrm{ext}}\). (c) Bifurcation diagram: Dependence of \(T^{*}\) on \((S_{\mathrm{ext}},v)\). (d) Flow diagram in the phase space \((T,X)\). Red and blue lines represent \(T\)-nullcline and \(X\)-nullcline, respectively. Arrows with brighter colors correspond to faster flows. (e) Self-consistent equation for \(T\) and \(\mu\). Black and orange lines depict \(T=T^{*}(\mu;v,n_{X})\) (Eq. (B1) in SM) and \(T=(\mu/S_{\mathrm{ext}})^{1/\alpha}\) with \(S_{\mathrm{ext}}=10,25,50\), respectively.
Moreover, the composition entropy \(H\) is predicted to increase towards the transition point as a result of the competition between the autocatalytic and non-growing subnetworks. Biologically, it would correspond to the stringent responses that increase the diversity of the intracellular components [4; 17; 29]. It will be crucial to experimentally verify the predicted increases in the composition entropy and jamming of reactions due to the accumulation of complexes.
We also analyzed the dynamics of mean-field models and thereby demonstrated that the growth-dormant transition occurs as a cusp bifurcation [37], which supports the discontinuity of the transitions as well as hysteresis. The validity of the coarse-grained mean-field models suggests the universality of the growth-dormant transition across many-body reaction systems. Note that, although the mean-field models in Fig. 3 capture the mechanism of the dormancy or growth arrest due to the accumulation of complexes, the diversification of components in nutrient-poor conditions owing to competition between subnetworks [40] is not addressed. To consider such diversification and improve the mean-field theory, the incorporation of two-body correlations among the concentrations beyond their mean will be required.
In conclusion, our study explains the ubiquity and fundamental characteristics of dormancy as general properties in reaction networks with complex formation, by offering a coherent view of cell growth and dormancy.
###### Acknowledgements.
The authors would like to thank Yuichi Wakamoto, Yusuke Himeoka, Tetsuhiro S. Hatakeyama, and Shuji Ishihara for stimulating discussions. J. F. Y. is supported by Grant-in-Aid for JSPS Fellows (21J22920). K. K. is supported by Grant-in-Aid for Scientific Research (A) (20H00123) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan and the Novo Nordisk Foundation.
|
2303.17584 | Consensus controller with safety guarantee: an application to the
kinematic bicycle model | This paper proposes a consensus controller for multi-agent systems that can
guarantee the agents' safety. The controller, built with the idea of output
prediction and the Newton-Raphson method, achieves consensus for a class of
heterogeneous nonlinear systems. The Integral Control Barrier Function is
applied in conjunction with the controller, such that the agents' states are
confined within pre-defined safety sets. Due to the dynamically-defined control
input, the resulting optimization problem from the barrier function is always a
Quadratic Program, despite the nonlinearities that the system dynamics may
have. We verify the proposed controller using a platoon of autonomous vehicles
modeled by kinematic bicycles. A convergence analysis of the leader-follower
consensus under the path graph topology is conducted. Simulation results show
that the vehicles achieve consensus while keeping safe inter-agent distances,
suggesting a potential in future applications. | Kaicheng Niu, Chaouki Abdallah, Mohammad Hayajneh | 2023-03-30T17:52:57Z | http://arxiv.org/abs/2303.17584v2 | # Consensus controller with safety guarantee: an application to the kinematic bicycle model
###### Abstract
This paper investigates a consensus controller for multi-agent systems that can guarantee both the safety and the privacy of the agents. The controller, built upon the idea of output prediction and the Newton-Raphson method, guarantees consensus for a class of heterogeneous systems. Its required inter-agent data-exchange consists of the predicted agents' locations but not their state variables, which guarantees a measure of privacy. Using the Control Barrier Function, which confines the agents' states within the pre-defined safety sets, the safety of the agents is also guaranteed. We verify the proposed controller on a platoon of autonomous vehicles modeled by kinematic bicycles. Simulation results show that the agents achieve leader-follower consensus and avoid collisions while keeping suitable inter-agent distances.
## I Introduction
Consensus control has been extensively investigated in the setting of multi-agent systems, where typically it is underscored by a distributed algorithm that guarantees a convergence of state or output variables of various agents to a common target value; see, e.g., [1] and references therein. Application areas of consensus control include swarms of unmanned aerial vehicles and platoons of self-driving cars. Following initial investigations of homogeneous linear systems, which led to the definition of the graph Laplacian as the fundamental consensus controller, this research's scope has been broadened to encompass heterogeneous systems whose agents have different, possibly-nonlinear dynamic models. Ref. [2] derived a consensus controller for heterogeneous Euler-Lagrange systems, while [3] investigated general nonlinear single-input-single-output systems, and [4] considered a class of second-order systems. Recent studies have derived various consensus-control techniques including adaptive control [5][6] and Model Predictive Control (MPC) [7][8].
Recently, the authors of this paper developed a consensus controller for heterogeneous nonlinear systems [9]. Its required inter-agent communications include predicted outputs but no state information, thereby providing a measure of agents' privacy. However, the controller does not provide any measure of safety: while assuming that the agents have various models of mobile robots, e.g., dynamical bicycles and differentially driven cars, it allows multiple agents to occupy common physical spaces as the consensus control converges. The main contribution of this paper is to supplement the consensus controller derived in [9] with a Control Barrier Function (CBF) technique designed to prevent collisions. To be exact, safety is achieved by enforcing a minimum distance among vehicles. We recognize that it is also possible to maintain this separation directly through the platoon-consensus control protocol, in which case a CBF technique may serve as a backup against unforeseen circumstances.
As an example application, this paper solves the leader-follower consensus problem for a platoon of autonomous vehicles. The control of vehicle platoons has been studied extensively in the literature. _Adaptive cruise control_ focuses on the longitudinal motion dynamics, for which a PID controller is designed in [10], and [11] applied MPC. The lateral dynamics, which are nonlinear and can be described by kinematic or dynamic bicycle models [12], have been considered as well. Ref. [13] achieved vehicle merging and platooning by computing a consensus control over a simplified second-order system model and translating the results to a bicycle model. A hierarchical formation controller for platoons has been derived in [14], based on a joint application of MPC for path planning and tracking. Ref. [15] considered platoon tracking and spacing by applying a tracking controller for the lateral dynamics and a consensus controller for the longitudinal dynamics. This paper considers the platooning problem in a setting where the vehicles' motions are two dimensional, namely planar. This differs from motion on roads and serves to demonstrate the interactions between platooning and CBFs in two dimensions. The leader-follower consensus is achieved using the algorithm described in [9], and the safe sets are comprised of variable-radius circles centered at the neighboring vehicles. Related references on applications of control barrier functions to autonomous vehicles include adaptive cruise control in [16], and lane keeping and spacing for unicycle models in [17]. Additionally, [18] applied a control barrier function in conjunction with a tracking technique to a dynamic bicycle model.
The remainder of this paper is organized as follows: Section II reviews the existing consensus controller defined in [9] and summarizes relevant results on CBFs. Section III defines the consensus problem considered in this paper and presents the proposed solutions to it. Section IV discusses the simulation results, and Section V concludes the paper.
## II Survey of background material
### _Prediction-based consensus controller_
The consensus controller reviewed in this subsection was originally presented in [9]. Consider a multi-agent system with \(N\) agents, denoted by \(A_{i}\), \(i\in\mathcal{N}:=\{1,2,3,\ldots,N\}\). Suppose that the motion dynamics of \(A_{i}\) are modelled by a differential equation of the form
\[\dot{x}_{i}(t)=f_{i}(x_{i}(t),u_{i}(t)), \tag{1}\]
where \(x_{i}(t)\in\mathbb{R}^{n_{i}}\) and \(u_{i}(t)\in\mathbb{R}^{m}\) are the respective state variable and input variable of \(A_{i}\) for given positive integers \(n_{i}\) and \(m\), and \(x_{i}(0)\in\mathbb{R}^{n_{i}}\) is the initial condition of Eq. (1) at time \(t=0\). Suppose that \(f_{i}:\mathbb{R}^{n_{i}}\times\mathbb{R}^{m}\mapsto\mathbb{R}^{n_{i}}\) is a continuously differentiable function. In addition, \(f_{i}\) satisfies suitable sufficient conditions for the existence of unique solutions of Eq. (1) in the time interval \(t\in[0,\infty)\) for every bounded, piecewise continuous input signal \(\{u_{i}(t):t\in[0,\infty)\}\) and initial condition \(x_{i}(0)\in\mathbb{R}^{n_{i}}\) (see, e.g., Ref. [9], Assumption 1).
\[y_{i}(t)=h_{i}(x_{i}(t)), \tag{2}\]
where the function \(h_{i}:\mathbb{R}^{n_{i}}\mapsto\mathbb{R}^{m}\) is continuously differentiable. Observe that the dimensions of the agents' state spaces, \(n_{i}\), may be different from each other, but their input and output spaces must have the same dimension, \(m\).
The consensus problem requires that we design a distributed controller guaranteeing that
\[\limsup_{t\rightarrow\infty}\|y_{i}(t)-y_{j}(t)\|=0, \tag{3}\]
for every pair of \(i,j\in\mathcal{N}\).
To reach consensus, agents in the system have to exchange information among themselves. This information exchange is performed over a communication network, whose topology is modeled by an undirected graph \(\mathcal{G}\). Its vertices, \(V_{i}\), \(i=1,\ldots,N\), correspond to the respective agents \(A_{i}\), \(i=1,\ldots,N\). Its edges, connecting pairs of vertices, correspond to pairs of agents with direct and bi-directional communication links between them. We say that \(A_{j}\) is a neighbor of \(A_{i}\) if the graph contains an edge between \(V_{i}\) and \(V_{j}\). The indices of the neighbors of the \(i\)-th agent are denoted by the set \(N_{i}\). We make the following assumption:
_Assumption 1_: The (undirected) communication graph \(\mathcal{G}\) is connected, meaning that there is a path formed by some edges between every pair of vertices \((V_{i},V_{j}),i\neq j\). \(\square\)
The existing consensus controller attempts to reach consensus among the predicted outputs of the agents. Suppose that all the agents have the same prediction horizon, \(T>0\), and denote the predicted output of \(A_{i}\), computed at time \(t\), by \(\tilde{y}_{i}(t+T)\). We make the following assumption:
_Assumption 2_: The predicted output \(\tilde{y}_{i}(t+T)\) functionally depends on \((x_{i}(t),u_{i}(t))\) in the following way:
\[\tilde{y}_{i}(t+T)=g_{i}(x_{i}(t),u_{i}(t)), \tag{4}\]
where \(g_{i}:\mathbb{R}^{n_{i}}\times\mathbb{R}^{m}\mapsto\mathbb{R}^{m}\) is a continuously differentiable function. \(\square\)
We point out that the function \(g_{i}(\cdot,\cdot)\) may not have a known analytic form, but it can be approximated by numerical or simulation means.
The consensus controller used in [9] has the following form:
\[\dot{u}_{i}=-\alpha_{i}\left(\frac{\partial g_{i}}{\partial u_{i}}(x_{i},u_{i })\right)^{-1}\sum_{j\in N_{i}}\big{(}g_{i}(x_{i},u_{i})-g_{j}(x_{j},u_{j}) \big{)}, \tag{5}\]
where \(\alpha_{i}\geq 1\) is a given constant called the controller's _speedup_ factor of \(A_{i}\). As discussed in [9] and explained in the sequel, large controller speedup factors may be associated with stabilization of the closed-loop system and reductions of asymptotic tracking errors. The combination of Eqs. (1), (2), (4) and (5) defines the closed loop of agent \(A_{i}\), whereby for a given \(i\in\mathcal{N}\), the closed-loop state variable is defined as \(z_{i}:=(x_{i},u_{i})\in\mathbb{R}^{n_{i}}\times\mathbb{R}^{m}\). Under a bounded-input-bounded-state stability condition defined in [9], the following inequality is satisfied:
\[\limsup_{t\rightarrow\infty}\|\sum_{j\in\mathcal{N}_{i}}\big{(}\tilde{y}_{i}(t +T)-\tilde{y}_{j}(t+T)\big{)}\|^{2}<\frac{\eta}{4\alpha_{\min}}, \tag{6}\]
where \(\eta>0\) is a constant defined in [9] and \(\alpha_{\min}:=\min\{\alpha_{i}:i\in\mathcal{N}\}\). It must be mentioned that the derived consensus is not among the agents' outputs \(y_{j}(t)\), \(j=1,\ldots,N\), but among their predicted values, \(\tilde{y}_{j}(t+T)\). The terms \(||\tilde{y}_{j}(t+T)-y_{j}(t+T)||\), called the prediction gaps, cannot be reduced by increasing the controllers' speedup, and may provide additive terms to the consensus quantifier \(||y_{j}(t)-y_{i}(t)||\). These terms generally have to be controlled by using high-precision prediction techniques.
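For concreteness, the following is a minimal sketch (not taken from [9]) of how one evaluation of the update law (5) could be carried out numerically for a single agent, approximating the Jacobian \(\partial g_{i}/\partial u_{i}\) by finite differences. The `agent`/`neighbors` interface is purely illustrative.

```python
import numpy as np

def consensus_input_rate(agent, neighbors, alpha, eps=1e-4):
    """Right-hand side of Eq. (5): the rate du_i/dt for one agent.

    `agent` and each entry of `neighbors` are assumed (hypothetically) to
    expose a state `x`, an input `u`, and a predictor `g(x, u)` returning
    the predicted output in R^m.
    """
    x, u = agent.x, agent.u
    m = u.size
    g_i = agent.g(x, u)
    # Finite-difference approximation of the m x m Jacobian dg_i/du_i.
    jac = np.zeros((m, m))
    for k in range(m):
        du = np.zeros(m)
        du[k] = eps
        jac[:, k] = (agent.g(x, u + du) - g_i) / eps
    # Disagreement of the predicted outputs with the neighbors, as in Eq. (5).
    disagreement = sum(g_i - nb.g(nb.x, nb.u) for nb in neighbors)
    return -alpha * np.linalg.solve(jac, disagreement)
```

The returned rate would then be integrated (e.g., by forward Euler) to update \(u_{i}\); the leader's additional tracking term of Section III can be added to the same expression.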
The predictor used in [9], following [19], is defined by the following equations
\[\dot{\xi}_{i}(\tau)=f_{i}(\xi_{i}(\tau),u_{i}(t)), \tag{7}\]
where "dot" denotes derivative with respect to \(\tau\in[t,t+T]\), with the boundary condition \(\xi(t)=x(t)\); and
\[\tilde{y}_{i}(t+T):=h_{i}(\xi_{i}(t+T)). \tag{8}\]
Observe that these equations describe the dynamical system defined by (1) and (2) over the interval \(\tau\in[t,t+T]\) with the constant input \(\mu_{i}(\tau)\equiv u_{i}(t)\) and the initial condition \(\xi_{i}(t)=x_{i}(t)\). The input is held constant at \(u_{i}(t)\) for all \(\tau\in[t,t+T]\), reflecting the fact that inputs at times beyond \(t\) are not known at time \(t\). This predictor satisfies Assumption 2, and can be computed by the forward-Euler method.
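As a hedged illustration, the predictor (7)-(8) can be approximated with a plain forward-Euler integration; the step count below is an arbitrary choice, not a value from [9] or [19].

```python
import numpy as np

def predict_output(f, h, x_t, u_t, T, steps=50):
    """Forward-Euler realization of the predictor (7)-(8).

    Integrates xi' = f(xi, u_t) from tau = t to t + T with the input frozen
    at u_t, starting from xi(t) = x_t, and returns h(xi(t + T)).
    """
    xi = np.array(x_t, dtype=float)
    d_tau = T / steps
    for _ in range(steps):
        xi = xi + d_tau * np.asarray(f(xi, u_t))
    return h(xi)
```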
### _Control Barrier Functions_
The Control Barrier Functions technique has emerged as an effective, efficient method for ensuring safety in feedback-control systems. Primary references on CBF include [16, 17], which provide the sources for the summary presented in this subsection.
Consider a control system defined by the equations
\[\dot{x}(t)=f(x(t),u(t)) \tag{9}\]
\[u(t)=\phi(x(t)), \tag{10}\]
where \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is the state equation of the system, and the function \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) represents the feedback law. Let \(S\subset\mathbb{R}^{n}\) be a closed set representing system safety in the sense that a state \(x(t)\in\mathbb{R}^{n}\) is safe if and only if \(x(t)\in S\). The set \(S\) is called a _safety set_. The objective of CBF control is to ensure that \(S\) is forward invariant and exponentially stable under the system's dynamics, and it achieves this by modifying the control law defined by Eq. (10) if need be. In the following discussion we will denote the modified control law by the equation
\[u(t)=\psi(x(t)) \tag{11}\]
for a function \(\psi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), and refer to \(\{x(t):t\in[0,\infty)\}\) defined by Eqs. (9) and (11) as the system's trajectory.
In order to ensure the forward invariance and exponential stability of the set \(S\), let \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a continuously-differentiable function satisfying the following condition,
\[\begin{split} h(x)&>0,\qquad\forall x\in\mathcal{S} -\partial\mathcal{S}\\ h(x)&=0,\qquad\forall x\in\partial\mathcal{S}\\ h(x)&<0,\qquad\forall x\not\in\mathcal{S},\end{split} \tag{12}\]
and let \(\kappa:\mathbb{R}\rightarrow\mathbb{R}\) be an extended class-\(\mathcal{K}\) function. Suppose that for every system's trajectory \(\{x(t):t\in[0,\infty)\}\) the following equation is satisfied for every \(t\in[0,\infty)\):
\[\dot{h}(x(t))+\kappa(h(x(t)))\geq 0. \tag{13}\]
It was shown in [16, 17] that the conditions represented by Eqs. (12) and (13) guarantee the forward invariance and exponential stability of the set \(S\).
The inequality in Eq. (13) is ensured by a choice of the control function \(\psi(\cdot)\) as a minimal modification of the function \(\phi(\cdot)\) in Eq. (10). Specifically, observe that
\[\dot{h}(x(t))=\frac{dh}{dx}(x(t))f(x(t),u(t)), \tag{14}\]
and given \(x\in\mathbb{R}^{n}\), define the set \(S_{x}\subset\mathbb{R}^{m}\) by
\[S_{x}=\big{\{}u\in\mathbb{R}^{m}:\frac{dh}{dx}(x(t))f(x(t),u)+\kappa(h(x(t))) \geq 0\big{\}}. \tag{15}\]
Now given \(x\in\mathbb{R}^{n}\), define \(\psi(x)\) as
\[\psi(x)\in\operatorname{argmin}\big{\{}||\mathrm{u}-\phi(\mathrm{x})||\ :\ \mathrm{u}\in\mathrm{S}_{\mathrm{x}}\big{\}}. \tag{16}\]
If the inequality-constraint set in (16) is empty then \(\psi(x)\) is not well defined, but if it is non-empty, then any point in the argmin can be chosen as \(\psi(x)\). Note that, by (16) and (15), \(u(t)\) defined by Eq. (11) satisfies Eq. (13) for every \(t\geq 0\), thereby ensuring the safety of the trajectory \(\{x(t):t\in[0,\infty)\}\) if \(x(0)\) is safe.
A special case of interest arises when the state equation in (9) is control affine, namely
\[\dot{x}=f_{1}(x)+f_{2}(x)u \tag{17}\]
for functions \(f_{1}:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\) and \(f_{2}:\mathbb{R}^{n}\mapsto\mathbb{R}^{n\times m}\), assumed in this discussion to be continuously differentiable. In this case the set \(S_{x}\) in Eq. (15) has the form
\[S_{x}\ =\ \big{\{}u\in\mathbb{R}^{m}:\frac{dh}{dx}(x(t))\big{(}f_{ 1}(x)+f_{2}(x)u\big{)}+\\ \kappa(h(x(t)))\geq 0\big{\}}, \tag{18}\]
thereby rendering the optimization problem defined by the right-hand side of Eq. (16) a quadratic program that can arguably be solved in real time at every point \(x(t)\) on a fine grid covering the time interval \([0,\infty)\). If the system is not control-affine, then techniques including nonlinear programming, constraint relaxation, or grid search may be used to find the safe control.
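For intuition, when there is a single barrier constraint, the quadratic program in (16) reduces to a Euclidean projection of the nominal input onto a half-space, which has a closed form. The sketch below assumes the control-affine form (17) and is illustrative rather than the implementation used in this paper.

```python
import numpy as np

def cbf_safety_filter(u_nom, x, h, grad_h, f1, f2, kappa=lambda s: s):
    """Project a nominal input onto S_x of Eq. (18) for a single barrier h.

    S_x = { u : grad_h(x) @ (f1(x) + f2(x) u) + kappa(h(x)) >= 0 } is a
    half-space in u, so the minimizer of ||u - u_nom|| over S_x is either
    u_nom itself or its orthogonal projection onto the boundary of S_x.
    """
    a = grad_h(x) @ f2(x)                 # constraint normal in R^m
    b = grad_h(x) @ f1(x) + kappa(h(x))   # constant term
    slack = a @ u_nom + b
    if slack >= 0:
        return u_nom                      # nominal input is already safe
    if np.allclose(a, 0.0):
        raise ValueError("S_x is empty at this state; no safe input exists")
    return u_nom - (slack / (a @ a)) * a  # projection onto the half-space
```

With several neighbors (several barriers), as in Section III, the projection no longer has this simple form, and one falls back to a QP or, for non-affine inputs, to the two-step heuristic described later.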
We point out that our controller defined by Eqn. (5) does not satisfy the feedback form of Eqn. (10). However, before being applied to the controlled plant, the inputs calculated by (5) can still be modified according to (15) and (16) to guarantee safety.
## III Applications of the consensus controller on kinematic bicycles
The kinematic bicycle model is a fourth-order system, depicted in Fig. 1. The dynamics of this model are:
\[\begin{split}\dot{z}_{1}&=V\cos(\psi+\gamma)\\ \dot{z}_{2}&=V\sin(\psi+\gamma)\\ \dot{V}&=a\\ \dot{\psi}&=\frac{V}{L_{r}}\sin(\gamma),\end{split} \tag{19}\]
where \((z_{1},z_{2})^{\top}\in\mathbb{R}^{2}\) represents the location of the bicycle in the 2-D plane, \(V\in\mathbb{R}\) is the velocity of the bicycle, \(a\in[a_{\min},a_{\max}]\) is the bicycle's acceleration, and \(\gamma\in[\gamma_{\min},\gamma_{\max}]\) is the direction of the bicycle's velocity with respect to the heading of the bicycle frame. \(L_{r}\) is the distance from the rear wheel to the bicycle's center of gravity (COG). Denote the angle of the steering wheel (with respect to the bicycle's heading) by \(\delta_{f}\); then the following relationship holds:
\[\tan(\gamma)=\frac{L_{r}}{L_{f}+L_{r}}\tan(\delta_{f}), \tag{20}\]
where \(L_{f}\) is the distance from the front wheel to the bicycle's COG. The system inputs are chosen to be \(u=(a,\gamma)^{\top}\), and the system outputs are \(y=(z_{1},z_{2})^{\top}\). To distinguish among different agents, we denote the states of the \(i\)-th agent by \(x_{i}=(z_{i,1},z_{i,2},V_{i},\psi_{i})^{\top}\) and the inputs of \(A_{i}\) by \(u_{i}=(a_{i},\gamma_{i})^{\top}\).
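A minimal sketch of the model (19), reused by the later illustrations in this section; the state and input ordering is an assumption made here for illustration.

```python
import numpy as np

def bicycle_dynamics(x, u, L_r):
    """State derivative of the kinematic bicycle model (19).

    x = (z1, z2, V, psi) and u = (a, gamma); L_r is the distance from the
    rear wheel to the center of gravity.
    """
    z1, z2, V, psi = x
    a, gamma = u
    return np.array([
        V * np.cos(psi + gamma),    # z1_dot
        V * np.sin(psi + gamma),    # z2_dot
        a,                          # V_dot
        (V / L_r) * np.sin(gamma),  # psi_dot
    ])
```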
The controller proposed in [9] considers only the case of leaderless consensus. As an extension of the original algorithm, this paper further considers the leader-follower consensus, which can be framed as follows: consider a multi-agent system consisting of \(N+1\) agents, denoted by \(A_{i},i\in\{0,1,2,\ldots,N\}\). Each agent follows the system dynamics defined by Eqn. (1) and Eqn. (2). It is required that the leader of the system, \(A_{0}\), (asymptotically) track an external reference signal \(r(t)\). Furthermore, all the agents should achieve consensus, as described by Eqn. (3) for
every pair of \(i,j\in\{0,1,2,\ldots,N\}\). We apply the consensus controller (5) to guarantee (3), and to achieve tracking, we treat the external reference signal as a virtual agent that does not follow any other agents. Denote the controller speedup for tracking with \(\alpha_{t}\), then the inputs of the leader are:
\[\dot{u}_{0} =-\alpha_{0}\left(\frac{\partial g_{0}}{\partial u_{0}}(x_{0},u_{0} )\right)^{-1}\sum_{j\in N_{0}}\big{(}g_{0}(x_{0},u_{0})-g_{j}(x_{j},u_{j})\big{)} \tag{21}\] \[-\alpha_{t}\left(\frac{\partial g_{0}}{\partial u_{0}}(x_{0},u_{0 })\right)^{-1}\big{(}g_{0}(x_{0},u_{0})-r(t+T)\big{)}.\]
The reason to distinguish the tracking speedup \(\alpha_{t}\) from the consensus speedup \(\alpha_{0}\) is to strike a balance between consensus and tracking: the controller puts more emphasis on tracking as \(\alpha_{t}\) grows larger than \(\alpha_{0}\). Due to the nature of prediction, the reference signal \(r(\cdot)\) should also be shifted forward by the prediction horizon \(T\) to guarantee consistency. See [19] for an explanation regarding the choice of these parameters.
Next, we apply control barrier functions to guarantee the agents' safety. In [9], it is shown that the consensus controller (5) leads to inter-agent collisions as the consensus error converges to \(0\). In reality, a safety distance between agents must be maintained. It is recommended that the time headway should be at least \(2\) seconds [20], indicating that the distance \(D\) between vehicles should satisfy:
\[D\geq k_{v}V, \tag{22}\]
where \(k_{v}\geq 2\).
In our experiments, we consider the leader-follower consensus in a 2-D plane. The unsafe set for agent \(A_{i}\) consists of circles with radius \(k_{v}V_{i}\), where \(V_{i}\) is the speed of bicycle \(i\), centered at the positions of \(A_{i}\)'s neighbors, denoted by \((z_{j,1},z_{j,2})^{\top}\). The safe set for \(A_{i}\) is thus:
\[\mathcal{S}_{i}:=\{(z_{i,1},z_{i,2})^{\top}\in\mathbb{R}^{2}: \tag{23}\] \[(z_{i,1}-z_{j,1})^{2}+(z_{i,2}-z_{j,2})^{2}\geq k_{v}^{2}V_{i}^{2 },\forall j\in N_{i}\}.\]
_Remark:_ In the consensus controller (5), each agent \(A_{i}\) does not have to know its neighbors' true locations \((z_{j,1},z_{j,2})^{\top}\), but only their predicted locations \((\tilde{z}_{j,1},\tilde{z}_{j,2})^{\top}\). Requiring the true locations in the CBF would thus compromise the agents' privacy. If privacy is critical, then the unsafe-set centers can be switched to the agents' predicted locations. However, the ratio \(k_{v}\) in the barrier function has to be enlarged in order to avoid collisions. This relationship is illustrated in Fig. 2. We conclude that there are trade-offs among performance, safety and privacy.
Using Eqn. (23), we can easily derive \(|N_{i}|\) CBFs for agent \(A_{i}\), where the \(j\)-th CBF is associated with agent \(A_{j},j\in N_{i}\). Denote the CBF by \(h_{i,j}(x_{i};x_{j})\), then we have:
\[h_{i,j}(x_{i};x_{j})=-k_{v}^{2}V_{i}^{2}+(z_{i,1}-z_{j,1})^{2}+(z_{i,2}-z_{j,2} )^{2}. \tag{24}\]
Taking derivative of Eqn. (24) yields:
\[\dot{h}_{i,j}(x_{i},u_{i};x_{j}) =-k_{v}^{2}V_{i}a_{i} \tag{25}\] \[+2(z_{i,1}-z_{j,1})V_{i}\cos(\psi_{i}+\gamma_{i})\] \[+2(z_{i,2}-z_{j,2})V_{i}\sin(\psi_{i}+\gamma_{i}).\]
Choose the class-\(\mathcal{K}\) function to be:
\[\kappa(s)=s, \tag{26}\]
and denote the system inputs computed by Eqn. (5) as \(\bar{a}_{i},\bar{\gamma}_{i}\); then the nonlinear programming (NLP) problem associated with the CBF is:
\[\min_{a_{i},\gamma_{i}} (a_{i}-\bar{a}_{i})^{2}+(\gamma_{i}-\bar{\gamma}_{i})^{2}\] (27) s.t. \[-k_{v}^{2}V_{i}a_{i}+2(z_{i,1}-z_{j,1})V_{i}\cos(\psi_{i}+\gamma_ {i})\] \[+2(z_{i,2}-z_{j,2})V_{i}\sin(\psi_{i}+\gamma_{i})-k_{v}^{2}V_{i} ^{2}\] \[+(z_{i,1}-z_{j,1})^{2}+(z_{i,2}-z_{j,2})^{2}\geq 0,\forall j\in N_{i}.\] \[a_{\min}\leq a_{i}\leq a_{\max},\] \[\gamma_{\min}\leq\gamma_{i}\leq\gamma_{\max}.\]
Note that the constraints in Eqn. (27) are linear in \(a_{i}\), but nonlinear in \(\gamma_{i}\). This feature can be used to simplify the computation of the safe inputs. Indeed, it is sometimes preferable to change the car's acceleration \(a_{i}\) instead of the steering angle \(\gamma_{i}\), because a slight modification of the steering angle may lead to a large change in the car's trajectory. Hence, we propose a two-step solution to the NLP (27): first, fix \(\gamma_{i}=\bar{\gamma}_{i}\), and solve (27) by only considering \(a_{i}\). If no solution exists, meaning that either \(a_{i}>a_{\max}\) or \(a_{i}<a_{\min}\) would be required, then fix \(a_{i}=a_{\max}\) or \(a_{i}=a_{\min}\), respectively, and find a proper \(\gamma_{i}\) by brute force. The brute-force search is carried out in the following manner: divide \([\gamma_{\min},\gamma_{\max}]\) into \(D\) equal intervals, and for each \(\gamma_{i}=\gamma_{\min}+\frac{k(\gamma_{\max}-\gamma_{\min})}{D},k=0,1,\ldots,D\), check whether the constraints in Eqn. (27) are satisfied. Note that this solution is not necessarily the same as the exact minimizer of (27). However, in the next section we show by simulation that this strategy can be computed in real time and achieves good performance.
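The following is a rough, illustrative sketch of this two-step procedure, assuming the state and input conventions of the bicycle sketch above; the discretization sizes and the feasibility fallback are assumptions, not values from the paper.

```python
import numpy as np

def cbf_lhs(a, gamma, x_i, pos_j, k_v):
    """Left-hand side of the safety constraint in (27) for one neighbor j."""
    z1, z2, V, psi = x_i
    d1, d2 = z1 - pos_j[0], z2 - pos_j[1]
    return (-k_v**2 * V * a
            + 2.0 * d1 * V * np.cos(psi + gamma)
            + 2.0 * d2 * V * np.sin(psi + gamma)
            - k_v**2 * V**2 + d1**2 + d2**2)

def two_step_safe_input(a_bar, gamma_bar, x_i, neighbor_positions, k_v,
                        a_lim=(-2.0, 2.0), g_lim=(-np.pi / 6, np.pi / 6), D=50):
    """Two-step heuristic for the NLP (27): adjust the acceleration first,
    then fall back to a brute-force grid search over the steering angle."""
    def feasible(a, g):
        return all(cbf_lhs(a, g, x_i, p, k_v) >= 0.0 for p in neighbor_positions)

    # Step 1: keep gamma = gamma_bar and pick the feasible acceleration
    # closest to the nominal one (the constraints are linear in a).
    a_grid = np.linspace(a_lim[0], a_lim[1], 201)
    a_ok = [a for a in a_grid if feasible(a, gamma_bar)]
    if a_ok:
        return min(a_ok, key=lambda a: abs(a - a_bar)), gamma_bar
    # Step 2: saturate the acceleration and grid-search the steering angle.
    for a in (a_lim[1], a_lim[0]):
        for k in range(D + 1):
            g = g_lim[0] + k * (g_lim[1] - g_lim[0]) / D
            if feasible(a, g):
                return a, g
    return a_bar, gamma_bar  # no safe input found; keep the nominal input
```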
Fig. 1: The kinematic bicycle model
Fig. 2: Geometric relationship of agent outputs, predictions and safety sets.
We summarize our leader-follower consensus controller with CBFs to conclude this section. The multi-agent system in question consists of a platoon of kinematic bicycles, modeled by Eqn. (19). Agents aim at reaching consensus with each other, and agent \(A_{0}\) is also supposed to track an external reference signal \(r(t)\). The control law for the leader is given by Eqn. (21), and that for the followers is given by Eqn. (5). To guarantee a suitable inter-agent distance, every controller applies the CBF by solving the nonlinear programming problem (27). The goal of the controller is to reach leader-follower consensus and avoid collisions by maintaining a proper distance.
## IV Simulation results
In this section we test our proposed consensus controller through a leader-follower consensus control problem. A video of our simulation results can be found at: [https://youtu.be/7Dble83hUIM](https://youtu.be/7Dble83hUIM). Our simulation consists of five agents following the dynamics of (19). The parameters of these five bicycles are arbitrarily chosen, listed in Table I. The inputs of the agents are restricted by \(a_{i,\max}=-a_{i,\min}=2m/s^{2}\) and \(\gamma_{i,\max}=-\gamma_{i,\min}=\frac{\pi}{6}\) for \(i\in\mathcal{N}\).
The communication among agents is carried out over a line graph, meaning that agent \(i\) is connected to agents \(i-1\) and \(i+1\) (when they exist). In addition to the information from agent \(1\), the leader (agent \(0\)) also receives the external reference signal. The relationship among agents and the reference signal is depicted in Fig. 3. The reference signal starts with a straight line and then merges into a lemniscate path. This reference signal can be expressed by:
\[r(t)=\begin{cases}(-50+3.75t,-60+4.5t)^{\top},t\in[0,\frac{40}{3}]\\ \big{(}150\sin\!\left(0.025(t-\frac{40}{3})\right),90\sin\!\left(0.05(t-\frac{4 0}{3})\right)\big{)}^{\top},\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad t\in[ \frac{40}{3},300].\end{cases} \tag{28}\]
At time \(0\), the five agents are arranged in a straight line. The initial locations of the five bicycles are \((-50,-60)\), \((-60,-72)\), \((-70,-84)\), \((-80,-96)\) and \((-90,-108)\), respectively. The controller speedup parameters are \(\alpha_{t}=150\) and \(\alpha_{i}=7\) for \(i=0,1,\ldots,4\). The lookahead time horizon is set to be \(T=0.3\).
Our experiment is carried out using the forward-Euler method, starting from \(t_{s}=0s\) until \(t_{f}=300s\). The simulation stepsize is \(0.01s\). The output prediction, \(g_{i}(x_{i},u_{i};t+T)\), and its partial derivative, \(\dfrac{\partial g_{i}}{\partial u_{i}}(x_{i},u_{i};t+T)\), are calculated using the explicit Runge-Kutta method of order 5(4) (RK45). The trajectories of the agents in the 2-D plane are illustrated in Fig. 4.
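As an illustration of this prediction step only (not of the full simulation), the RK45 prediction of an agent's position output can be obtained with SciPy's initial-value-problem solver, reusing the `bicycle_dynamics` sketch from Section III; the function name and interface are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rk45_predicted_output(x_t, u_t, T, L_r):
    """Predicted position g_i(x_i, u_i; t+T): integrate the bicycle model
    (19) over [0, T] with the input frozen at u_t, and return (z1, z2)."""
    sol = solve_ivp(lambda _tau, x: bicycle_dynamics(x, u_t, L_r),
                    (0.0, T), np.asarray(x_t, dtype=float), method="RK45")
    return sol.y[:2, -1]
```

The partial derivative \(\partial g_{i}/\partial u_{i}\) can then be approximated by repeating this prediction under small perturbations of \(u_{t}\) (finite differences), as in the earlier controller sketch.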
It is shown that agent \(A_{0}\) tracks the given reference signal with a relatively small tracking error, and the followers, \(A_{1},A_{2},A_{3}\) and \(A_{4}\), reach consensus with a suitable inter-agent distance. Fig. 6 depicts the tracking error of the leader. It is observed that the tracking error lies between \(0.205m\) and \(0.875m\). This error is due to the fact that the tracking is implemented on the prediction \(g(\cdot)\) instead of the output \(y(\cdot)\). The maximal lateral error of the vehicle is \(0.361m\), appearing at the road segment with the maximum curvature. The distances between two adjacent agents are shown in Fig. 5, together with the minimal allowed inter-agent distance, which is equal to twice the agent's velocity. It is shown that the inter-agent distances are always larger than the minimal safety distances, maintaining a headway time of more than two seconds. It is also observed that the actual inter-agent distances change according to the agents' velocities. The minimal distance in Fig. 5 is \(6.87m\), appearing at the four corners of the lemniscate path where the reference velocity is smallest.
Next, we investigate the agents' inputs and analyze the effects of the barrier functions. The agents' accelerations \(a_{i}(t)\) are shown in Fig. 7, while their steering angles \(\gamma_{i}(t)\) are shown in Fig. 8. Both inputs have some initial transients during the first \(2\) seconds. Oscillations in the system inputs are also observed during the simulation. These oscillations can be explained as follows: during the control, the agents have a tendency to turn right. If the curvature of the reference signal is large, then agent \(A_{i+1}\) may be located on the right-hand side of agent \(A_{i}\). This places an unsafe set in the path of \(A_{i}\), preventing \(A_{i}\) from turning directly. In this case, the agents have to turn slowly in order to avoid running into unsafe areas. Compared with the agents' accelerations, the
Fig. 4: The trajectory of the agents in the bicycle platoon
Fig. 3: The relationship among agents and the reference signal.
agents' steering angles are less affected. This is due to our CBF strategy, which guarantees safety using the acceleration first and only then adjusts the steering angle.
Finally, we demonstrate the computational efficiency of the proposed algorithm. We tested this algorithm on a computer equipped with an Intel Core i5-7400 CPU. The simulation takes roughly \(1407\) seconds to complete. Since this experiment consists of five agents, it takes on average \(281.4\) seconds to simulate one agent, which is shorter than the simulated time of \(300\) seconds. Therefore, the proposed algorithm is fast enough for real-time deployment.
## V Conclusion
In this paper we studied the application of control barrier functions to a consensus controller. The controller features privacy, safety and real-time computation, and can be applied to multi-agent systems where the controlled plants have different dynamics. We demonstrated the effectiveness of the proposed approach on a leader-follower consensus problem. Agents within the system are modeled as kinematic bicycles with different system parameters. Using a suitable control barrier function, the agents in the system keep a proper inter-agent distance while tracking an external signal. The controller can also be computed online, indicating its potential for real-world deployments.
|
2305.14303 | QTSumm: Query-Focused Summarization over Tabular Data | People primarily consult tables to conduct data analysis or answer specific
questions. Text generation systems that can provide accurate table summaries
tailored to users' information needs can facilitate more efficient access to
relevant data insights. Motivated by this, we define a new query-focused table
summarization task, where text generation models have to perform human-like
reasoning and analysis over the given table to generate a tailored summary. We
introduce a new benchmark named QTSumm for this task, which contains 7,111
human-annotated query-summary pairs over 2,934 tables covering diverse topics.
We investigate a set of strong baselines on QTSumm, including text generation,
table-to-text generation, and large language models. Experimental results and
manual analysis reveal that the new task presents significant challenges in
table-to-text generation for future research. Moreover, we propose a new
approach named ReFactor, to retrieve and reason over query-relevant information
from tabular data to generate several natural language facts. Experimental
results demonstrate that ReFactor can bring improvements to baselines by
concatenating the generated facts to the model input. Our data and code are
publicly available at https://github.com/yale-nlp/QTSumm. | Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru Tang, Yumo Xu, Dragomir Radev, Arman Cohan | 2023-05-23T17:43:51Z | http://arxiv.org/abs/2305.14303v2 | # QTSumm: A New Benchmark for Query-Focused Table Summarization
###### Abstract
People primarily consult tables to conduct data analysis or answer specific questions. Text generation systems that can provide accurate table summaries tailored to users' information needs can facilitate more efficient access to relevant data insights. However, existing table-to-text generation studies primarily focus on converting tabular data into coherent statements, rather than addressing information-seeking purposes. In this paper, we define a new query-focused table summarization task, where text generation models have to perform human-like reasoning and analysis over the given table to generate a tailored summary, and we introduce a new benchmark named QTSumm for this task. QTSumm consists of 5,625 human-annotated query-summary pairs over 2,437 tables on diverse topics. Moreover, we investigate state-of-the-art models (i.e., text generation, table-to-text generation, and large language models) on the QTSumm dataset. Experimental results and manual analysis reveal that our benchmark presents significant challenges in table-to-text generation for future research.
## 1 Introduction
In the era of data-driven decision-making, tabular data plays a crucial role in facilitating data analysis, serving as a concise and structured representation of information (Kukich, 1983; Pasupat and Liang, 2015; Chen et al., 2020; Zhu et al., 2021; Zhao et al., 2022). People often consult tables to extract valuable insights and make informed decisions. For example, sales managers typically explore large tables with specific business questions to gain insights about clients and processes. Sports coaches will analyze performance tables containing various statistics to develop effective game strategies and make critical team adjustments. However, effectively accessing and comprehending the information contained within a large and complex table can be time-consuming for users (Hurst, 2000; Pasupat and Liang, 2015; Pujara et al., 2021). Text generation systems that can accurately summarize the provided table, tailored to users' information needs, have the potential to greatly enhance data analysis and expedite the process of obtaining data insights.
Existing work and datasets on table-to-text generation (Parikh et al., 2020; Chen et al., 2020; Cheng et al., 2022; Lebret et al., 2016; Moosavi et al., 2021; Suadaa et al., 2021) have mainly focused on converting tabular data into coherent statements, aiming to present the structured data in a human-readable format. However, these approaches have overlooked the fundamental goal of addressing users' _information-seeking_ purposes. Table-to-text generation systems should adopt a more flexible and interactive approach that allows people to obtain a user-customized summary tailored to their
Figure 1: An example of QTSumm. Given the numerous data points in the table, different users may be interested in various aspects for their own information-seeking or decision-making purposes. The system needs to perform human-like reasoning and analysis over relevant table regions to generate a tailored table summary.
information needs (Dang, 2006; Xu and Lapata, 2020; Zhong et al., 2021; Xu and Lapata, 2022), as illustrated in Figure 1. While table question answering (QA) (Pasupat and Liang, 2015; Iyyer et al., 2017; Zhong et al., 2018; Chen et al., 2020; Nan et al., 2022) has made significant progress in answering fact-based questions, the primary focus of their approaches is on extracting relevant facts or entities from the table and composing short-form answers. Nevertheless, in real-world scenario, users often have more complex and diverse information needs that extend beyond simple fact retrieval. They expect models to perform _human-like reasoning_ and provide explanations or justifications that accompany the extracted insights.
With comprehensive consideration of the real-world information needs of users when consulting tabular data, we propose a new task, query-focused table summarization. In this task, the model is required to generate a user-customized summary given the table and user query. To enable research in this area, we construct a human-annotated table-to-text generation dataset named QTSumm1, that contains 5,625 query-summary pairs over 2,437 Wikipedia tables covering diverse topics. Table 1 compares QTSumm with previous table-to-text generation work. To the best of our knowledge, QTSumm is the first table-to-text generation dataset that tackle tasks of generating user-customized table summary based on the real-world scenarios.
Footnote 1: The data and code will be released at [https://github.com/yilunzhao/QTSumm](https://github.com/yilunzhao/QTSumm) upon publication.
Furthermore, we provide a comprehensive evaluation of current state-of-the-art models, including text generation (Lewis et al., 2020; Raffel et al., 2020; Chung et al., 2022), table-to-text generation models (Liu et al., 2022; Zhao et al., 2022; Liu et al., 2022; Xie et al., 2022), and large language models (Brown et al., 2020; Wei et al., 2022). Our results and analysis from different perspectives reveal that the existing models struggle in solving this new task, highlighting the challenges the models face when performing human-like reasoning and analysis to generate summary tailored to users' information needs. We are releasing our dataset and baselines to support additional research in query-focused table summarization.
The main contribution of this work is three-fold:
* We propose a new task, query-focused table summarization. The task requires the model to generate customized summary tailored to users' information needs, driving future research on table-to-text generation from a user-centric perspective.
* We construct a new large-scale dataset, QTSumm, with 5,625 query-summary pairs collected under real-world scenarios. We apply strict quality control to ensure the high quality of the dataset.
* We conduct a systematic study of state-of-the-art models on QTSumm, and demonstrate that they are still far behind expert performance, strongly motivating future research.
## 2 Related Work
**Table-to-Text Generation** Existing work and datasets on table-to-text generation typically pose the problem as either a single-sentence generation task (Chen et al., 2020; Parikh et al., 2020; Cheng et al., 2022; Liu et al., 2022; Nan et al., 2022; Zhao et al., 2023), or a generic summarization task (Lebret et al., 2016; Moosavi et al., 2021; Suadaa et al., 2021). In the single-sentence generation task, the focus is on generating fluent and faithful descriptions using provided table regions as a control for text generation. For example, the ToTTo (Parikh et al., 2020) and HiTab (Cheng et al., 2022) datasets require models to convert high-
\begin{table}
\begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Table Source} & \multirow{2}{*}{\# Tables / Statements} & \# Words / & \multirow{2}{*}{Explicit Control} & Rich in \\ & & & & & Reasoning? \\ \hline ToTTo (Parikh et al., 2020) & Wikipedia & 83,141 / 83,141 & 17.4 & Table region & ✗ \\ LogicNLG (Chen et al., 2020) & Wikipedia & 7,392 / 36,960 & 14.2 & Table regions & ✓ \\ HiTab (Cheng et al., 2022) & Statistics web & 3,597 / 10,672 & 16.4 & Table regions \& reasoning operator \\ FeTaQ (Nan et al., 2022) & Wikipedia & 10,330 / 10,330 & 18.9 & Queries rewritten from ToTTO & ✗ \\ RotoWire (Lebret et al., 2016) & NBA games & 4,953 / 4,953 & 337.1 & ✗ \\ SciGen (Moosavi et al., 2021) & Sci-Paper & 1,338 / 1,338 & 116.0 & ✗ & ✓ \\ NumericNLG (Suadaa et al., 2021) & Sci-Paper & 1,355 / 1,355 & 94.2 & ✗ & ✓ \\ QTSumm & Wikipedia & 2,437 / 5,625 & 67.7 & Queries from a real-world scenario & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of existing table-to-text generation datasets.
lighted table regions into a single-sentence description. The LogicNLG dataset Chen et al. (2020) involves generating statements that can be logically entailed by the facts in the provided table regions. Nevertheless, using table regions for controlling text generation does not correspond with real-world scenarios where people refer to tabular data for information-seeking purposes. The generic table summarization tasks aim to create concise and informative summaries based on the content of a given table. RotoWire Lebret et al. (2016) requires models to produce a coherent summary given a game performance table. SciGen Moosavi et al. (2021) and NumericNLG Suada et al. (2021) datasets require models to perform arithmetic reasoning and generate descriptions for tables extracted from scientific papers. Compared to these domain-specific table summarization datasets, the tables in QTSumm covers diverse topics. Furthermore, considering the numerous data points in the table, various users may be interested in different aspects, making it challenging to create a generic summary that encompasses all the salient information within the table. Therefore, we propose and investigate a new query-focused task setting.
**Table QA** Table QA is a task that involves answering user questions based on the information in a source table. Existing Table QA datasets, such as WikiTableQuestions Pasupat and Liang (2015), SQA Iyyer et al. (2017), and WikiSQL Zhong et al. (2018), primarily focus on _short-form_ answers, which usually consist of a single cell value or an aggregation of multiple cell values. However, in real-world scenarios, users often have more complex and diverse information needs that require a deeper understanding of, and reasoning over, the given table. FeTaQA Nan et al. (2022) is a recently introduced Table QA dataset that collects queries by rewriting ToTTo's Parikh et al. (2020) statements into questions and uses the same statements as the answers. In comparison with FeTaQA, the queries in QTSumm were annotated under real-world scenarios, making them more natural and better reflecting users' actual information needs. Furthermore, FeTaQA's queries typically focus on surface-level facts, whereas QTSumm requires models to perform human-like reasoning and analysis when generating summaries.
**Query-Focused Summarization** Initially formulated as a document summarization task, QFS aims to generate summaries from documents that are tailored to specific user queries Dang (2006). Despite its potential real-world applications, QFS remains a challenging task due to the lack of large-scale training data. Existing works have attempted to address this issue by leveraging distant NLP resources, including question answering Xu and Lapata (2020), paraphrase identification Su et al. (2020), and generic summarization Xu and Lapata (2022). Recently, Zhong et al. (2021) adopted QFS for meeting summarization and proposed a human-annotated benchmark over meeting transcripts. As with text documents, effectively accessing and comprehending the information contained within a large and complex table can be time-consuming for users, yet QFS remains unexplored in table-to-text generation. In this work, we extend QFS to this new modality for more effective information-seeking and decision-making purposes.
## 3 QTSumm Dataset
In this section, we introduce the QTSumm task and its data collection process.
### Problem Formulation
We formally define the proposed query-focused table summarization task as follows. The input is a user query \(\mathbf{Q}\), and a table \(\mathbf{T}\). The table \(\mathbf{T}=W\cup\{T_{i,j}|i\leq R_{T},j\leq C_{T}\}\) has \(R_{T}\) rows and \(C_{T}\) columns, with \(W\) being the table title and \(T_{i,j}\) being the textual content in the \((i,j)\)-th cell. The task objective of QTSumm is to generate a paragraph-long textual summary \(\mathbf{Y}=(y_{1},y_{2},\ldots,y_{n})\) given the user query \(\mathbf{Q}\) and source table \(\mathbf{T}\):
\[\mathbf{Y}=\operatorname*{argmax}\prod_{i=1}^{n}P(y_{i}|y_{<i},\mathbf{Q},\mathbf{T};\theta), \tag{1}\]
where \(\theta\) denotes the parameters of a neural text generation model, and \(y_{i}\) denotes the \(i\)-th tokens in the generated summary.
### Data Collection Principles
At a high level, the goal of the data collection process is to obtain high-quality user queries and their corresponding paragraph-long summaries grounded on the tabular data. We enumerate our key desiderata for designing a benchmark capable of through evaluation of table-to-text summarization capabilities of models. To achieve this, we first design three principles for annotating a good query-summary pair:
* **Attributability & Faithfulness**: The query should be answerable using only information from the source table. The summary should be grounded in the source table, and not contain any unfaithful or nonsensical text.
* **Comprehensiveness**: The paragraph-long summary should provide enough details of the source table to respond to the user query so as to satisfy the user's information need.
* **Fluency**: Both the user query and its corresponding table summary should be coherent and fluent, without any grammar errors.
### Annotation Pipeline
To ensure that QTSumm annotation fulfills the aforementioned principles, we carefully design an annotation pipeline consisting of four steps, with details discussed as follows:
**Source Table Collection** QTSumm uses tables from the LogicNLG Chen et al. (2020) and ToTTo Parikh et al. (2020) datasets as source tables, as these tables are crawled from Wikipedia and cover diverse domains and topics. We filter out tables that are 1) too large or too small, 2) with only string-type columns, or 3) with hierarchical structures (e.g., containing more than one table header). Then we randomly sample 2,000 candidate tables from LogicNLG and ToTTo, respectively, for the query-summary annotation.
**User Query Annotation** Given a table, the annotators are required to read its content and determine whether the table is informative and intelligible to common web users. They are then asked to come up with two or three queries, assuming they are users seeking certain information from the table. We require each query to be answerable using information only from the table. Moreover, as this work focuses on paragraph-long summaries as query responses, we avoid queries that can be answered in a short sentence (e.g., "Which country held the 2022 FIFA World Cup?").
**Query-Focused Summary Annotation** Given a table and user query, we ask another annotator to use only information from the source table to write a paragraph-long summary that satisfies the user's information need. We encourage annotators to produce sophisticated summaries that 1) contain as much information from the table as possible, and 2) involve more types of reasoning over multiple relevant table regions. To further encourage high-quality annotations, similar to TabFact Chen et al. (2020), we adopt the "two channel collection" design, in which the annotators are paid 60% more if their summaries are manually verified to exhibit adequate complexity. Finally, the annotators are required to record the row IDs of the relevant table regions covered in the written summary, allowing us to quantify how grounded the summaries are in the table in future research.
**Multi-Round Validation** We conduct a multi-round validation protocol to ensure that the annotated data fulfill the aforementioned annotation principles. We first assign query annotators to validate each summary against its corresponding query, and fix any mistakes. Then we check 1) whether a query-summary pair contains adequate information and complex aggregation by examining the length of the summary, and 2) whether the information in the summary is essential in responding to the user query. We manually revise pairs that do not meet the above standard.
### Annotation Quality Control
Along with the multi-round validation, we also carefully design several strategies to ensure the high quality of QTSumm annotation.
**Expert Annotators** To help improve the annotation process, five experts with professional experience in text summarization tasks are invited to conduct the _internal annotation_. They are asked to provide feedback regarding the task instructions and the user experience of the annotation interface, based on which we iteratively modify the annotation guideline and interface design. In the _external annotation_ stage, we enroll 17 graduate students majoring in STEM fields (10 female, 7 male). We do not use crowd-sourcing platforms such as Mechanical Turk, as our preliminary study indicates that annotators on MTurk fail to produce high-quality query-summary data. Before starting the official annotation process, each annotator is given a two-hour training session to learn the annotation requirements and interface.
**Annotation De-biasing** We observed several kinds of annotation bias during our internal annotation, and we proposed the following countermeasures for annotation de-biasing:
_Source Table Diversity:_ During internal annotation, we found that many tables in LogicNLG
have similar content. For example, there are around 200 tables describing the results of football games, with identical table headers. To ensure the diversity of source tables, we keep only one table for each unique table header.
_Query Diversity:_ When annotating queries, annotators may prefer simpler ones, resulting in low query diversity. Therefore, we frequently monitor the diversity of queries for each annotator. Annotators are also encouraged to craft queries that are either creative or require complex reasoning in summarization, resulting in a doubled payment to compensate them for the extra time.
_Supporting Fact Position:_ We found that annotators prefer to raise queries regarding the first few rows of each table. To deal with such bias regarding supporting fact positions, we randomly highlight certain rows for each table in the annotation interface. We require the annotators to write queries whose summaries should cover at least two rows of the highlighted regions.
### Dataset Analysis
Table 2 describes the basic statistics of QTSumm, and Table 1 shows a comprehensive comparison of related table-to-text generation datasets. QTSumm is the first dataset for research in query-focused summarization over tabular data. In contrast to ToTTo, LogicNLG, and HiTab, which use specific table regions as explicit control and focus on single-sentence generation, QTSumm requires models to generate paragraph-long summaries via interacting with natural language queries, thus is challenging and better reflect the real-world use cases. Compared with FeTaQA, the queries in QTSumm were gathered from real-world scenarios. Additionally, to better reflect users' information needs, QTSumm demands models to perform more complex and diverse types of reasoning and aggregation over a larger relevant table region than FeTaQA. As illustrated in Figure 2, QTSumm is also the first table summarization dataset with an _open-domain_ table source, containing a large number of tables on diverse topics.
**Annotator Agreement** To evaluate the annotation quality of QTSumm, we also reported the human evaluation scores and inter-evaluator agreements over 200 query-summary pairs that were randomly sampled from the development set. As shown in Table 3, QTSumm has high annotation quality and inter-annotator agreement.
## 4 QTSumm Evaluation
As discussed in Section 3.2, a good summary should be fluent and without any unfaithful or non-sensical text. Moreover, it should provide enough details of the source table to satisfy the user information need. With this motivation, we proposed both automated and human-evaluation metrics to
\begin{table}
\begin{tabular}{l r} \hline \hline
**Property** & **Value** \\ \hline Unique Tables & 2,437 \\ Query-Summary Pairs & 5,625 \\ Rows per Table (Median/Avg) & 9 / 10.1 \\ Columns per Table (Median/Avg) & 6 / 6.3 \\ Table Title Length (Median/Avg) & 5 / 4.8 \\ Query Length (Median/Avg) & 19 / 18.6 \\ Relevant Rows (Median/Avg) & 4 / 3.9 \\ Summary Length (Median/Avg) & 71 / 67.7 \\ \hline Training Set Size (Table/Summary) & 1,713 / 3,937 (70\%) \\ Development Set Size (Table/Summary) & 362 / 843 (15\%) \\ Test Set Size (Table/Summary) & 362 / 845 (15\%) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Basic statistics of QTSumm dataset.
Figure 2: Domain distribution of QTSumm tables.
measure the _fluency_, _faithfulness_, and _comprehensiveness_ aspects of our task.
### Automated Evaluation
**BLEU**Papineni et al. (2002) computes the geometric average of the precision over output text's n-grams. We used SacreBLEU Post (2018) that produces comparable and reproducible BLEU scores.
**ROUGE**Lin and Hovy (2003) measures the word overlap between the candidate and reference summaries. We reported F1 score for ROUGE-L (longest common subsequences).
**BERTScore**Zhang et al. (2020) computes the similarity between the reference and generated summary using contextual word embeddings.
**LitePyramids**Zhang and Bansal (2021) uses a semantic role labeling model to decompose the reference summary into several more fine-grained semantic triplets, each of which contains a single fact from the reference. It then reports an entailment score between these decomposed semantic triplets and the generated summary.
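For reference, the first three surface-level metrics can be computed with standard open-source packages. The snippet below is a sketch assuming the third-party `sacrebleu`, `rouge-score`, and `bert-score` packages, and it omits LitePyramids, which requires a separate semantic-role-labeling and entailment pipeline.

```python
import sacrebleu
from rouge_score import rouge_scorer
from bert_score import score as bert_score

def automatic_scores(predictions, references):
    """Corpus-level SacreBLEU, ROUGE-L F1, and BERTScore F1 for generated summaries."""
    bleu = sacrebleu.corpus_bleu(predictions, [references]).score
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = sum(scorer.score(ref, pred)["rougeL"].fmeasure
                  for pred, ref in zip(predictions, references)) / len(predictions)
    _, _, f1 = bert_score(predictions, references, lang="en")
    return {"SacreBLEU": bleu, "ROUGE-L": rouge_l, "BERTScore": f1.mean().item()}
```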
### Human Evaluation
To gain a more comprehensive understanding of the system's performance, we also designed a human evaluation system. Specifically, the summaries from different models were evaluated by experts from three criteria (i.e., _comprehensiveness_, _faithfulness_, and _fluency_) that have been discussed in Section 3.2. Each summary was scored from 1 (worst) to 5 (best) for each criteria, with the final score averaged across different expert evaluators.
## 5 Models
We evaluate the following three types of models.
### Text Generation Models
In this approach, we modeled QTSumm as an end-to-end, sequence-to-sequence task, and applied the following three popular text generation models.
**BART**Lewis et al. (2020) is a pre-trained denoising autoencoder with transformer-based architecture and shows effectiveness in NLG tasks.
**T5**Raffel et al. (2020) demonstrates effectiveness in NLG tasks by treating all natural language problems as text-to-text tasks during pre-training stage.
**Flan-T5**Chung et al. (2022) enhances T5 by scaling instruction fine-tuning and demonstrates better human-like reasoning abilities than the T5 model.
### Table-to-Text Generation Models
**TAPEX**Liu et al. (2022) continues pre-training the BART model by using a large-scale corpus of synthetic SQL query execution data. It shows better table understanding and reasoning abilities.
**ReasTAP**Zhao et al. (2022) enhances the table understanding and reasoning abilities of BART by pre-training on a synthetic Table QA corpus.
**PLOG**Liu et al. (2022) continues pre-training text generation models on a table-to-logic-form generation task (i.e., T5 model), improving the faithfulness of table-to-text generation.
**UnifiedSKG**Xie et al. (2022) unifies and multi-tasks structured knowledge grounding with text-to-text language models (i.e., T5), achieving high performance on various table-to-text tasks.
### Large Language Models
**GPT-3**Brown et al. (2020); Wei et al. (2022) is a powerful large language model which is capable of generating human-like text and performing a wide range of NLP tasks in a zero- or few-shot setting.
## 6 Experiments
### Implementation Details
**Input Data Serialization** The input contains a user query, and corresponding table data. For text generation and large language models (Section 5.1 & 5.3), we followed recent works on table-to-text generation Liu et al. (2022); Xie et al. (2022); Chen (2023) to flatten the table data as T=[HEAD]:h, [ROW]1:r\({}_{1}\),..., [ROW]n:r\({}_{n}\), where h is table header, \(r_{i}\) is the i-th table row, [HEAD] and [ROW] are special tokens indicating the region of table headers and rows respectively. We also separated headers or cells in different columns using a vertical bar \(\mid\). In this way, the flattened table input can be fed directly into text generation models. For table-to-text generation models (Section 5.2), we followed their original data processing methods to input the query and table data.
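A minimal sketch of this linearization, assuming a simple list-of-lists table representation: the separators follow the description above, while the query/title prefixes are illustrative choices rather than the paper's exact format.

```python
def linearize_table(query, title, header, rows):
    """Flatten a table into the [HEAD]/[ROW] format described above,
    prepending the user query so the result can be fed to a seq2seq model."""
    parts = [f"query: {query}", f"title: {title}",
             "[HEAD]: " + " | ".join(header)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"[ROW]{i}: " + " | ".join(str(cell) for cell in row))
    return " ".join(parts)
```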
**Experiment Setting** The fine-tuning experiments were conducted on an 8 NVIDIA RTX 3090 24GB cluster. We selected the large version for all baseline models. For each fine-tuning experiment, we ran 30 epochs with a batch size of 128. The best fine-tuning checkpoints were
selected according to the validation loss. For LLM experiments, we randomly sampled 200 examples from the QTSumm test set. We used text-davinci-003 via the public OpenAI APIs2. For its parameter setting, we used a temperature of 0.7, maximum output length of 512 tokens, without any frequency or presence penalty. Figure 3 in Appendix shows a prompt prefix example for text-davinci-003 in zero-shot setting.
Footnote 2: [https://openai.com/api/](https://openai.com/api/)
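For completeness, a zero-shot call with the stated decoding parameters might look as follows, using the legacy (pre-v1) OpenAI Python SDK that exposed `openai.Completion` for text-davinci-003; the prompt wording is a placeholder, since the actual prompt prefix is shown in the paper's Figure 3.

```python
import openai  # legacy SDK assumed (openai<1.0)

def gpt3_zero_shot_summary(prompt_prefix, linearized_table, query):
    prompt = f"{prompt_prefix}\n\n{linearized_table}\n\nQuery: {query}\nSummary:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.7,
        max_tokens=512,
        frequency_penalty=0.0,
        presence_penalty=0.0,
    )
    return response["choices"][0]["text"].strip()
```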
### Main Results
We evaluate all baseline systems using automated evaluations. Additionally, we conduct a small-scale human evaluation involving a subset of models to further verify the results of automated evaluation. Table 4 and Table 5 display the results of automated and human evaluations, respectively.
### Automated Evaluation Results
We draw the following two conclusions based on the automated evaluation results.
**Importance of table structure understanding** Table-to-text generation models achieve better performance than their corresponding text generation backbones. For example, the PLOG model, which continues pretraining T5 on a table-to-logic-form corpus, achieves a 5.3% improvement on the LitePyramid metric. This improvement demonstrates the importance of table structure understanding for the QTSumm task.
**Importance of human-like reasoning** Among text generation models, Flan-T5, which enhances T5 by scaling instruction fine-tuning, achieves the best performance. This finding demonstrates the importance of human-like reasoning and analysis for the QTSumm task.
### Human Evaluation Results
We observe that Flan-T5 performs better in terms of comprehensiveness and faithfulness. Despite receiving low scores in automated evaluation, GPT-3 achieves results comparable to state-of-the-art fine-tuned models in human evaluation. This motivates us to explore how to develop automated evaluation metrics for the QTSumm dataset that align better with human judgments. We also plan to further investigate the faithful table-to-text generation capability of large language models in future work.
## 7 Analysis
### Multi-Task Results
The generative model can perform various downstream table-to-text tasks with the same architecture. Therefore, we also investigated whether the generative models can benefit from multi-task learning. We chose RotoWire (Lebret et al.,
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Source Task & SacreBLEU & ROUGE-L & BERTScore & LitePyramid \\ \hline – & 20.3 & 42.9 & 75.2 & 54.6 \\ RotoWire & 21.6 (+1.3) & 43.9 (+1.0) & 79.1 (+3.9) & 57.4 (+2.8) \\ HiTab & **22.4 (+2.1)** & **44.2 (+1.3)** & **79.8 (+4.6)** & **57.9 (+3.3)** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Automated evaluation results of T5 on QTSumm development set with multi-task fine-tuning.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Type & Model & Backbone & ScareBLEU & ROUGE-L & BERTScore & LitePyramid \\ \hline \multirow{3}{*}{Text Generation} & BART & – & 19.9 & 42.7 & 74.9 & 54.5 \\ & T5 & – & 20.3 & 42.9 & 75.2 & 55.6 \\ & Flan-T5 & T5 & 23.1 & 45.8 & 79.5 & 60.5 \\ \multirow{3}{*}{Table-to-Text} & TAPEX & BART & 22.3 & 44.8 & 78.0 & 57.3 \\ & ReasTAP & BART & 23.2 & 45.0 & 78.7 & 58.1 \\ & UnifiedSKG & T5 & 23.4 & **46.2** & 79.3 & 60.2 \\ & PLOG & T5 & **23.7** & 46.1 & **79.9** & **60.9** \\ \multirow{3}{*}{LLM} & GPT-3 0-shot & – & 16.7 & 34.2 & 62.4 & 46.2 \\ & GPT-3 2-shot & – & 19.5 & 41.3 & 73.2 & 56.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Automated evaluation results of baseline systems on the QTSumm test set.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Model & Fluency & Comprehensiveness & Faithfulness \\ \hline PLOG & 4.71 & 3.62 & 3.44 \\ GPT-3 2-shot & **4.84** & 3.54 & 3.50 \\ Flan-T5 & 4.80 & **3.75** & **3.56** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Human evaluation results (Likert Scale Scoring) of three best-performance baselines. Three experts were enrolled to evaluate 50 predictions for each model, which were randomly sampled from QTSumm test set.
2016) and HiTab (Cheng et al., 2022) as the source tasks, whose tables do not overlap with QTSumm. The generative model (i.e., T5) was first fine-tuned on the source task and then fine-tuned and evaluated on QTSumm. As shown in Table 6, multi-task fine-tuning can improve model performance on QTSumm. Moreover, fine-tuning on HiTab brings the T5 model a larger improvement. This is because HiTab requires text generation models to perform various kinds of reasoning operations over tables covering diverse topics, and such reasoning capabilities are also beneficial for the QTSumm task.
### Case Study
For a deeper understanding of the query-focused table summarization task on QTSumm, we conducted a case study on the development set to illustrate existing challenges. Table 7 shows user query, reference summary, and predictions from the best-performance fine-tuned model (i.e. PLOG). We identify the following common mistakes that current text generation models are likely to make.
**Hallucination** Consistent with many investigations on natural language generation tasks Ji et al. (2022); Tang et al. (2022), we also observed that systems tend to hallucinate in their generations. Specifically, we found a strong presence of extrinsic hallucination, i.e., mentioning facts that are not shown in the table.
**Factual Incorrectness** We also found that our best system suffers from intrinsic hallucination, that is, presenting information that contradicts the source table. We suspect this can be largely attributed to failures in reasoning accurately over the table.
**User Intent Misunderstanding** Some system outputs include information that is irrelevant to the user question. The third example of Table 7 presents a system-generated summary that contains extra information not requested by the user.
**Repetition** We further noticed that some system-generated summaries contain repetitive information. The last example in Table 7 shows that the system mentions the same information multiple times throughout the summary, which is not desirable as summaries should be informative.
## 8 Conclusion
We introduce the task of query-focused table summarization, together with QTSumm, the first dataset for the proposed task consisting of complex questions paired with high quality summaries in large-scale. We elaborate on many challenges that QTSumm poses to the existing summarization models, and demonstrate with automatic and human evaluations of baseline systems that we propose as benchmarks for our dataset. Our analysis suggests that existing models struggle with producing factually correct and highly relevant summaries given the user question, and more research efforts in this direction are needed.
## Acknowledgements
We would like to dedicate this paper to the memory of Dr. Dragomir Radev. Dr. Radev's leadership, guidance, and expertise were instrumental in shaping the direction and quality of this project. His loss is deeply felt by all of us involved. We extend our heartfelt gratitude to Dr. Radev for his passion and dedication to the whole NLP community.
## Ethical Consideration
The source tables in QTSumm were collected from the LogicNLG Chen et al. (2020) and ToTTo Parikh et al. (2020) datasets, which are publicly available under the MIT license3 and the CC BY-SA 3.0 license4, respectively. Both licenses permit us to compose, modify, publish, and distribute additional annotations upon the original datasets.
Footnote 3: [https://opensource.org/licenses/MIT](https://opensource.org/licenses/MIT)
Footnote 4: [https://creativecommons.org/licenses/by-sa/3.0/](https://creativecommons.org/licenses/by-sa/3.0/)
For the external annotation of QTSumm, we hired 17 graduate students in STEM majors. We regard 1) creating three queries for one table and validating the corresponding summaries annotated by others, and 2) composing a query-focused summary response, as a unit task. We paid around $1.5 for each unit task. As creative annotation rewards, we paid an additional $0.5 for a query and $1.5 for a summary. On average, an annotator could finish 7 unit tasks per hour after training and practicing, and the hourly rates ranged from $9 to $13 depending on working speed (above the local average wage for similar jobs). We recommended that annotators complete a maximum of 30 unit tasks per day in order to reduce pressure and maintain a comfortable pace. In total, the approximate working time needed to annotate the QTSumm dataset was 1,400 hours. The whole annotation work lasted about 40 days. |
2304.03556 | Construction of unbiased dental template and parametric dental model for
precision digital dentistry | Dental template and parametric dental models are important tools for various
applications in digital dentistry. However, constructing an unbiased dental
template and accurate parametric dental models remains a challenging task due
to the complex anatomical and morphological dental structures and also low
volume ratio of the teeth. In this study, we develop an unbiased dental
template by constructing an accurate dental atlas from CBCT images with
guidance of teeth segmentation. First, to address the challenges, we propose to
enhance the CBCT images and their segmentation images, including image
cropping, image masking and segmentation intensity reassigning. Then, we
further use the segmentation images to perform co-registration with the CBCT
images to generate an accurate dental atlas, from which an unbiased dental
template can be generated. By leveraging the unbiased dental template, we
construct parametric dental models by estimating point-to-point correspondences
between the dental models and employing Principal Component Analysis to
determine shape subspaces of the parametric dental models. A total of 159 CBCT
images of real subjects are collected to perform the constructions.
Experimental results demonstrate effectiveness of our proposed method in
constructing unbiased dental template and parametric dental model. The
developed dental template and parametric dental models are available at
https://github.com/Marvin0724/Teeth_template. | Lei Ma, Jingyang Zhang, Ke Deng, Peng Xue, Zhiming Cui, Yu Fang, Minhui Tang, Yue Zhao, Min Zhu, Zhongxiang Ding, Dinggang Shen | 2023-04-07T09:39:03Z | http://arxiv.org/abs/2304.03556v1 | Construction of unbiased dental template and parametric dental model for precision digital dentistry
###### Abstract
Dental template and parametric dental models are important tools for various applications in digital dentistry. However, constructing an unbiased dental template and accurate parametric dental models remains a challenging task due to the complex anatomical and morphological dental structures and also low volume ratio of the teeth. In this study, we develop an unbiased dental template by constructing an accurate dental atlas from CBCT images with guidance of teeth segmentation. First, to address the challenges, we propose to enhance the CBCT images and their segmentation images, including image cropping, image masking and segmentation intensity reassigning. Then, we further use the segmentation images to perform co-registration with the CBCT images to generate an accurate dental atlas, from which an unbiased dental template can be generated. By leveraging the unbiased dental template, we construct parametric dental models by estimating point-to-point correspondences between the dental models and employing Principal Component Analysis to determine shape subspaces of the parametric dental models. A total of 159 CBCT images of real subjects are collected to perform the constructions. Experimental results demonstrate effectiveness of our proposed method in constructing unbiased dental template and parametric dental model. The developed dental template and parametric dental models are available at [https://github.com/Marvin0724/Teeth_template](https://github.com/Marvin0724/Teeth_template).
keywords: Atlas, dental template, image alignment, parametric model, digital dentistry +
Footnote †: journal:
## 1 Introduction
Dental diseases, such as malocclusion or edentulism, affect a large number of people worldwide, leading many to seek dental treatments to restore oral functions and improve facial aesthetics [1]. Digital dentistry is an emerging technology reforming today's dental treatments because of its efficiency and accuracy [2]. A typical digital dentistry process involves three main steps: digital dental image acquisition, dental image processing and analysis, and transfer of the digital treatment to the clinical environment [3]. By leveraging digital imaging techniques, dental professionals can improve the accuracy and efficiency of their diagnosis, treatment planning, and treatment delivery, ultimately leading to better patient outcomes [4].
A dental template is a standardized, three-dimensional (3D) representation of a typical dental model. It usually serves as a valuable tool for spatial normalization and spatial correspondence estimation across different subjects for dental disease diagnosis and treatment planning in many dental applications, such as dental implant planning and orthodontic treatment planning [5]. However, currently, there is still no dental template that can represent population-average dental structures without subject bias, i.e., an unbiased dental template. This is because it is still challenging to construct an unbiased dental template despite advances in dental imaging technologies, as explained below. First, the maxilla and mandible, which support the upper and lower rows of teeth, have complicated anatomical and morphological structures. Second, the volume ratio of teeth with respect to the whole image is small. These two factors make it challenging to accurately construct the unbiased dental template. Previously, researchers were limited to using biased dental templates, such as artificial templates or selected subject templates, which could potentially influence the accuracy of the final results. For example, Barone et al. created a digital dental template from a reference physical model to reconstruct complete 3D teeth models from panoramic radiographs [6]. Wu et al. artistically designed templates for four categories of teeth (i.e., incisors, canines, pre-molars and molars), and used these templates to fit the teeth models in the plaster cast scans [7]. Pei et al. created a 3D dental template by digitizing the teeth specimens used in dental clinics, and applied it to constrain tooth segmentation from CBCT images [8]. Gholamalizadeh et al. randomly selected a typical dental model as the reference dental template for spatial alignment [5]. To learn the tooth arrangement in digital orthodontics, Wei et al. normalized the position and orientation of different dental models by defining a local coordinate system on each model and aligning it with the world coordinate system [9]. An unbiased dental template would improve the normalization of dental models, thus resulting in higher prediction accuracy of deep learning methods [10].
Parametric dental models, which describe shape and position variations of dental models in a population, also have broad applications in digital dentistry, such as teeth segmentation [11], dental mesh repairing [12] and 3D teeth reconstruction [13; 7]. The construction of parametric dental models involves the following steps [14; 15]: (1) rigidly aligning all the dental models to a common space, (2) estimating the point-to-point correspondence between the aligned dental models and (3) employing Principal Component Analysis (PCA) [16] to determine subspaces of the parametric dental models. However, the accuracy of existing parametric dental models is limited due to the lack of an unbiased dental template, which is essential for precise spatial alignment and point-to-point correspondence estimation. Without an unbiased template, the construction of parametric dental models becomes challenging, and the resulting models may contain subject-specific biases. Therefore, the development of an unbiased dental template is crucial for construction of accurate parametric dental models and for improving dental image processing and analysis.
Anatomical atlases, i.e., population-average anatomical structures, have been widely used as unbiased templates for medical image processing and analysis [17]. These templates provide a population-average representation of typical shapes and locations of a specific anatomical structure, while maintaining high cross-individual validity within the population [18]. Therefore, the unbiased template is an effective tool for medical image processing, like spatial normalization and spatial correspondence estimation across different subjects. Over the years, numerous unbiased templates of different anatomies have been constructed for various medical applications. For instance, Rajashekar et al. constructed FLAIR MRI and noncontrast computed tomography (CT) atlases of elderly populations for clinical applications [17]. Chen et al. developed a 4D infant brain atlas for dynamic brain development analysis during infancy [18]. Additionally, Lee et al. created a CT atlas for cardiac disease understanding and an abdominal atlas for integrating tissue information at different scales [19].
In this paper, we develop an unbiased dental template by constructing an accurate dental atlas from CBCT images. To address the challenges of complex adjacent anatomical structures and low volume ratio, we propose to enhance CBCT images and their teeth segmentation images through image cropping, image masking and segmentation intensity reassigning. Then, we construct a dental atlas of CBCT images by performing registration between the enhanced segmentation images to guide the registration between the masked CBCT images, which
further reduces the influence of the adjacent anatomical structures on atlas construction and improves the accuracy of teeth region registration. Leveraging the constructed unbiased dental template, we build parametric dental models for the dentition and each individual tooth. The contributions of this paper can be summarized as follows: (1) We develop an unbiased dental template by accurately constructing a dental atlas from CBCT images. (2) We propose an approach to enhance CBCT images and teeth segmentation images for accurate dental atlas construction. (3) By using the unbiased dental template, we construct parametric dental models by analyzing the shape variations of the dentition and every tooth. To the best of our knowledge, this is the first study on unbiased dental template construction.
The rest of the paper is organized as follows. The studied data and proposed method are described in Section 2. Section 3 presents the results of dental template and parametric model construction. The paper is finally discussed and concluded in Section 4.
## 2 Materials and Method
### Dataset
In this study, we selected cone-beam computed tomography (CBCT) images of 159 subjects from our digital archive. All the CBCT images were acquired by a dental CBCT scanner (Manufacturer: Planmeca) at the First People's Hospital of Hangzhou. During the selection, we excluded the subjects with severe malocclusion or with 2 or more missing teeth. Additionally, we did not consider the third molars in this study. The age range of the selected subjects was 18 to 60, and all the subjects were of Chinese ethnicity. The resolution of the CBCT images varied from \(0.1mm\) to \(0.4mm\) with X and Y dimensions both set to 400, while the Z dimension ranged from 256 to 328. Prior to the dental atlas construction, we pre-processed the CBCT images by first standardizing their resolutions to \(0.4\,mm\times 0.4\,mm\times 0.4\,mm\) and then normalizing voxel intensities of the images to the range of [0,1].
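For illustration, a minimal preprocessing sketch along these lines, assuming NumPy/SciPy arrays and placeholder variable names (not the authors' actual code), could read:

```python
# Minimal preprocessing sketch: resample a CBCT volume to 0.4 mm isotropic
# spacing and normalize intensities to [0, 1]. The toy volume is a placeholder.
import numpy as np
from scipy.ndimage import zoom

def preprocess_cbct(volume, spacing, target_spacing=0.4):
    factors = [s / target_spacing for s in spacing]   # per-axis zoom factors
    resampled = zoom(volume, factors, order=1)        # trilinear interpolation
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo + 1e-8)

vol = np.random.rand(128, 128, 96).astype(np.float32)  # toy volume
out = preprocess_cbct(vol, spacing=(0.3, 0.3, 0.3))
print(out.shape, float(out.min()), float(out.max()))
```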
Figure 1: The proposed framework for unbiased dental template construction. The image enhancement includes image cropping, image masking and segmentation intensity reassignment. Image intensity values of all images shown in this figure were remapped to the full display range.
### Dental Atlas Construction
Figure 1 illustrates the framework proposed for constructing a dental atlas. First, we employ a deep learning-based tooth segmentation method [20] to segment teeth from the CBCT images. Then, based on the teeth segmentation, we enhance CBCT images and their segmentation images for accurate atlas construction, which includes cropping the images, masking the surrounding tissues, and enhancing the intensity contrast between adjacent teeth in the segmentation images. Next, we utilize the SyGN atlas construction algorithm [21] to perform co-registration between the CBCT and segmentation images, resulting in a CBCT image atlas and a segmentation image atlas. Finally, we manually segment teeth from CBCT image atlas to obtain an unbiased dental template.
#### 2.2.1 Teeth Segmentation and Labeling:
In this study, we propose to use teeth segmentation to guide image enhancement and co-registration for accurate dental atlas construction. For this purpose, we adopt a fully automatic deep learning-based tooth segmentation method to segment all the teeth from the normalized CBCT images [20]. The segmentation method employs a U-Net network to extract the teeth region and uses a hierarchical morphology-guided network to differentiate and segment each tooth from the extracted teeth region. This approach enables accurate segmentation of all teeth, which is essential for creating an accurate dental atlas.
In order to identify the segmented teeth, we deploy a hierarchical method to iteratively label the teeth in the segmentation images. First, we randomly select 30 subjects and manually label the segmented teeth in those subjects. Using the labeled segmentation images, we create an initial dental atlas of teeth segmentation images. Then, we apply atlas-based labeling to the remaining segmentation images by non-rigidly registering the initial dental atlas to those images [22]. Finally, we manually correct wrong labels during a final quality checking of the results.
#### 2.2.2 Segmentation-Guided Image Enhancement:
Prior to constructing the dental atlas, we perform segmentation-guided image processing to enhance the CBCT and segmentation images for accurate atlas construction. First, we crop both CBCT and segmentation images with a margin of 30 in the x, y, and z directions around the segmented teeth, thus removing the background region. Next, we eliminate the tissue surrounding the teeth by using the teeth segmentation images as masking images to mask the cropped CBCT images. Here, we dilate the binary images of the teeth segmentation before performing the masking operation. The purpose of the dilation operation is to avoid masking teeth regions that are not accurately segmented.
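A minimal sketch of this cropping and masking step, assuming NumPy/SciPy arrays (the toy label volume and the use of iterative unit-structuring-element dilation are illustrative assumptions, not the authors' implementation), could look like:

```python
# Sketch of segmentation-guided cropping and masking: crop a margin of 30 voxels
# around the segmented teeth, dilate the binary teeth mask, and mask the CBCT
# volume. Iterating a unit structuring element approximates a larger dilation radius.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def crop_and_mask(cbct, teeth_labels, margin=30, dilation_iters=15):
    idx = np.argwhere(teeth_labels > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, np.array(cbct.shape))
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    cbct_c, labels_c = cbct[sl], teeth_labels[sl]
    struct = generate_binary_structure(3, 1)
    mask = binary_dilation(labels_c > 0, structure=struct, iterations=dilation_iters)
    return cbct_c * mask, labels_c

cbct = np.random.rand(100, 100, 80)
labels = np.zeros(cbct.shape, dtype=np.int32)
labels[40:60, 40:60, 30:50] = 11                      # toy "tooth" region
masked_cbct, cropped_labels = crop_and_mask(cbct, labels)
```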
To further improve the accuracy of the co-registration between CBCT and teeth segmentation images, we enhance the intensity contrast between one tooth and its adjacent teeth in the segmentation images by reassigning intensity values
Figure 2: Intensity reassignment of teeth segmentation images. (a): An example of a paired upper and lower teeth slices images with reassigned intensities. (b): teeth numbering and their corresponding intensity reassignment. Note that image intensity values are remapped to the full display range.
to the teeth according to their labels. Fig. 2(a) shows a teeth segmentation image with reassigned intensity values using a tooth label-based assignment strategy (Fig. 2(b)). A high intensity contrast between adjacent teeth can push teeth in the moving image toward their corresponding teeth in the fixed image during the registration, thus improving the accuracy of dental atlas construction.
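A minimal sketch of the label-based intensity reassignment, with a toy label-to-intensity mapping standing in for the actual assignment of Fig. 2(b), could read:

```python
# Sketch of the label-based intensity reassignment; the mapping below is a toy
# stand-in for the assignment illustrated in Fig. 2(b).
import numpy as np

def reassign_intensities(teeth_labels, label_to_intensity):
    out = np.zeros(teeth_labels.shape, dtype=np.float32)
    for label, value in label_to_intensity.items():
        out[teeth_labels == label] = value
    return out

labels = np.zeros((64, 64, 64), dtype=np.int32)
labels[10:20, 10:20, 10:20] = 11
labels[22:32, 10:20, 10:20] = 12
mapping = {11: 0.2, 12: 1.0}          # adjacent teeth get strongly contrasting intensities
enhanced = reassign_intensities(labels, mapping)
```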
#### 2.2.3 Atlas Construction
The enhanced CBCT images and teeth segmentation images are used to jointly construct a teeth segmentation image atlas and a CBCT image atlas. The symmetric group-wise normalization (SyGN) based method in ANTs toolkit is adopted to construct the dental atlases with unbiased shape and appearance [21]. The SyGN-based method iteratively performs multi-channel registration between the CBCT images and the teeth segmentation images, and involves multiple iterations of registration, averaging, and optimization with the following steps:
1. Initial templates are first generated for the CBCT and teeth segmentation images by averaging all the images in their respective domain.
2. Each pair of the CBCT and segmentation image is aligned to their respective initial templates using the symmetric normalization (SyN) algorithm [23]. The alignment includes rigid, affine, and non-rigid diffeomorphic registrations.
3. The warped CBCT images and segmentation images are averaged to generate a new segmentation template and a new CBCT image template.
4. The affine transformations and deformation fields obtained in (2) are averaged and then applied to the templates achieved in (3) for template optimization.
5. The CBCT and segmentation templates outputted at (4) are used as initial templates for next iteration.
The aforementioned operations are iterated to generate an accurate CBCT image atlas. Cross correlation is selected as the similarity metric for the image registration. The shrinkage factors, smoothing factors and max iterations [21] of the SyN method are set to 8x4x2x1, 3x2x1x0 and 100x80x40x10, respectively. Finally, we segment teeth from the outputted CBCT image atlas and then reconstruct tooth mesh models from the segmentation image, resulting in an unbiased dental template.
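The iterative logic of steps (1)-(5) can be summarized in the following schematic sketch; `register_pair` and `apply_transform` are placeholders for a registration backend such as ANTs SyN, and only the control flow is illustrated:

```python
# Schematic sketch of the iterative atlas construction loop (steps 1-5 above).
# `register_pair` and `apply_transform` are placeholder callables; only the
# control flow of the SyGN-like procedure is illustrated.
import numpy as np

def build_atlas(cbct_images, seg_images, register_pair, apply_transform, n_iters=10):
    cbct_tpl = np.mean(cbct_images, axis=0)           # step 1: initial templates
    seg_tpl = np.mean(seg_images, axis=0)
    for _ in range(n_iters):
        warped_cbct, warped_seg, transforms = [], [], []
        for cbct, seg in zip(cbct_images, seg_images):
            # step 2: multi-channel (CBCT + segmentation) registration to the templates
            t = register_pair((cbct, seg), (cbct_tpl, seg_tpl))
            warped_cbct.append(apply_transform(cbct, t))
            warped_seg.append(apply_transform(seg, t))
            transforms.append(t)
        # step 3: average the warped images into new templates
        cbct_tpl = np.mean(warped_cbct, axis=0)
        seg_tpl = np.mean(warped_seg, axis=0)
        # steps 4-5: the averaged transforms would be applied here to sharpen the
        # templates before they seed the next iteration
    return cbct_tpl, seg_tpl
```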
### Parametric Dental Model
Human adults typically have 28 teeth if the third molars are not considered. They can be divided into four categories, i.e., incisors, canines, premolars and molars. By leveraging the constructed unbiased dental template, we are able to build precise parametric dental models, which includes a parametric dentition model and a parametric model for each tooth, from real clinical dataset. The dentition parametric model encodes the pose and shape variation of all teeth in the upper and lower rows. The parametric tooth model describes the shape variations of one tooth among the population.
Figure 3 illustrates the framework proposed for constructing parametric dental models, i.e., parametric dentition model and parametric tooth model. To build the parametric models, we need to estimate point-wise correspondences between the teeth of different subjects in the dataset. For this purpose, we first reconstruct dental models from the teeth segmentation images, and then rigidly align the dental models to the constructed unbiased dental template. To construct the parametric dentition models, we non-rigidly register each tooth in the constructed dental template to its corresponding tooth in the aligned teeth models through coherent point drift (CPD) algorithm [24], which results in dental models with strict point-to-point correspondence. Once the correspondence is established, PCA is employed to determine the shape variation modes of the dentition models as the subspaces of their parametric models. To construct the parametric tooth models, we rigidly register each tooth in the aligned dental models to its corresponding tooth in the teeth template, and then estimate point-to-point correspondences between the registered teeth for PCA analysis.
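For illustration, the PCA step can be sketched as follows, assuming the dental models have already been brought into point-to-point correspondence; the array sizes and the use of scikit-learn are assumptions made only for the example:

```python
# Sketch of the PCA step: corresponded dental models are flattened into shape
# vectors and decomposed into a mean shape plus principal modes of variation.
# The random "shapes" array stands in for the real corresponded models.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, n_points = 138, 500
shapes = rng.normal(size=(n_subjects, n_points, 3))   # (x, y, z) points in correspondence

X = shapes.reshape(n_subjects, -1)                    # one shape vector per subject
pca = PCA(n_components=0.85)                          # keep 85% of the shape variance
coeffs = pca.fit_transform(X)                         # per-subject mode coefficients

print("number of PCs:", pca.n_components_)
print("explained variance of first 3 PCs:", pca.explained_variance_ratio_[:3])

# A new shape is synthesized from a vector of mode coefficients b:
b = np.zeros(pca.n_components_)
b[0] = 2.0                                            # displace along PC1
new_shape = (pca.mean_ + b @ pca.components_).reshape(n_points, 3)
```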
## 3 Results
### Unbiased Dental Template
We followed the proposed dental atlas construction framework to create a dental atlas of CBCT images using the studied dataset. The radius of the structuring element in the dilation operation was set to 15. The number of iterations of the SyGN-based method was set to 10. According to the results, the final CBCT image atlas had a size of 240x215x172 and a resolution of 1x1x1 \(mm^{3}\). To reconstruct an unbiased dental template from the dental atlas, we performed a semi-automatic teeth segmentation process. We first applied the learning-based method [20] to segment teeth automatically, and then refined the segmentation result manually.
The constructed dental atlas is presented in Fig. 4. The first row displays slice images of a selected CBCT image template, while the second and third rows show the slice images of the constructed CBCT image atlas and their corresponding teeth segmentation (slice) images, respectively. According to the results, we can observe that the constructed CBCT image atlas provides clearer anatomical structures with distinct boundaries compared to the selected CBCT images. Moreover, the teeth in the dental atlas are arranged in a smoother and more aesthetic dental arch curve [25].
Fig. 5 displays the unbiased dental template derived from the CBCT atlas image. The first row shows the different views (i.e., front view, back view, side view and bottom view) of the dental template. The dental template showcases an ideal arrangement of teeth, with no overlapping or malpositioning. Specifically, the front and back views of the dental template reveal the upper front teeth slightly overlapping the lower ones, and the upper and lower dental midlines aligning with each other. From the side view, all upper teeth fit the lower ones well and the teeth are aligned along a nice curve. The second and third rows of Fig. 5 provide detailed views of the individual tooth templates in the upper and lower rows, respectively.
We conducted experiments to quantitatively evaluate the obtained unbiased dental template by applying it to label teeth in the teeth segmentation images. First, we randomly selected 32 CBCT images from our digital archive, and segmented teeth from the CBCT images using the learning-based method. Then, we non-rigidly registered the labeled image of the unbiased dental template to each
Figure 3: The framework of parametric dental model construction. The framework consists of two steps, i.e., point-to-point correspondence estimation and Principal Component Analysis-based subspaces determination.
segmentation image, obtaining a warped labeled image. For each tooth in the segmentation image, we measured the Dice similarity coefficient (DSC) between the tooth and the teeth in the warped image, and assigned the label of the template tooth that had the largest DSC. As a comparison, we also performed automatic teeth labeling using the selected teeth template following the same method. According to the results, the mean success rate of the teeth labeling achieved by our unbiased dental template was 93.4%, which was higher than the success rate achieved by the selected teeth template (91.7%).
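A minimal sketch of this DSC-based label assignment, assuming integer label volumes stored as NumPy arrays, could read:

```python
# Sketch of DSC-based label assignment: each tooth of the target segmentation
# receives the label of the warped template tooth with the largest Dice score.
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def assign_labels(target_seg, warped_template_labels):
    template_ids = [l for l in np.unique(warped_template_labels) if l != 0]
    assignments = {}
    for tooth_id in np.unique(target_seg):
        if tooth_id == 0:
            continue
        tooth_mask = target_seg == tooth_id
        scores = {tid: dice(tooth_mask, warped_template_labels == tid) for tid in template_ids}
        assignments[int(tooth_id)] = int(max(scores, key=scores.get))
    return assignments

target = np.zeros((4, 4, 4), dtype=int); target[:2] = 5
warped = np.zeros((4, 4, 4), dtype=int); warped[:2] = 11   # template tooth 11 overlaps
print(assign_labels(target, warped))                       # {5: 11}
```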
### Parametric Dental Models
We excluded 21 subjects with missing teeth from the studied dataset when constructing parametric dental models. For each CBCT image of the remaining 138 subjects, we reconstructed a dental model from its segmentation image. We then performed spatial alignment and point-to-point correspondence estimation of the dentition models and the individual tooth models across the 138 subjects by leveraging the unbiased dental template. Finally, we employed PCA to determine shape variations of the dentition and individual tooth models as subspaces of their parametric models.
According to the results, the first 31 principal components (PCs) of the constructed parametric dentition model explained more than 85% of the shape variations of dentition models. The first three PCs, which explained about 45.0% of the whole variations, are shown in Fig. 6. Specifically, the first PC (i.e., PC1) exhibited a variation from underbite teeth to overjet teeth, with an increase in dental arch length and a decrease in dental arch width. In PC2, there was a decrease in teeth length and a slight increase in overjet from the left dentition to the right dentition. PC3 showed a decrease in labiolingual/buccolingual inclination of the upper and lower teeth.
To construct the parametric model of each tooth, we aligned each tooth in the dental models to its corresponding tooth template and estimated point-to-point correspondence among them for PCA analysis. Fig. 7 shows the first three PCs of shape variations of seven different teeth, i.e., from tooth11 to tooth17. For all seven teeth, their first PCs demonstrated an increase in tooth size. For the incisors (tooth11 and tooth12), canine (tooth13), and premolars (tooth14 and tooth15), their second PCs represented an increase in tooth width, including tooth crown and root, while their third PCs indicated an increase in tooth root with a slight decrease in tooth crown width. For the molars (tooth16 and tooth17), their second PCs
Figure 4: The dental atlas constructed by our proposed method. The first row presents the slice images of a selected CBCT image template. The second row shows slice images of the constructed CBCT image atlas, and the third row gives segmentation results of the slice images in the second row.
described an increase in tooth width with a slight decrease in tooth length. PC3 of tooth16 indicated an extension of one tooth root, while PC3 of tooth17 represented an increase in tooth roots with a slight decrease in tooth crown width.
## 4 Discussions and Conclusions
Our paper presents an accurate dental atlas construction framework to generate an unbiased dental template. The evaluation results show that the constructed unbiased dental template outperforms the selected dental template, which has subjective bias. This indicates that our unbiased dental template has more typical shapes and location information of teeth and higher cross-individual validity than the biased template, thus facilitating image processing and analysis in digital dentistry. Besides teeth labeling in the evaluation, the unbiased dental template can also be applied to various digital dentistry applications, such as spatial alignment of teeth models (images) and correspondence estimation across different teeth models.
By leveraging the unbiased dental template, we can accurately construct parametric models for the dentition and individual teeth from the studied dataset. From the constructed parametric dental models, we can easily observe the teeth shape variations and their proportions in the subspace of the parametric dental models. The parametric dental models have various applications in digital dentistry. Specifically, the parametric dental models can be used as shape constraints in teeth segmentation and teeth deformable registration. The parametric models can also be applied to 3D teeth reconstruction from 2D photos
Figure 5: The unbiased dental template generated from the constructed dental atlas. The first row shows the different views (i.e., front view, back view, and side view) of the dental template reconstructed from the dental atlas. The second and third rows show the templates of different teeth.
Figure 6: The first three PCs of the constructed parametric dentition model.
or 2D dental X-ray images. Other potential applications of the parametric dental models include teeth completion, normal teeth prediction in orthodontic planning, and more.
In this study, we present the results of unbiased templates and parametric models constructed for the dentition and the individual teeth. Based on these results, one can easily obtain unbiased templates and parametric models of upper teeth rows, lower teeth rows or other teeth clusters. This can extend the applications of the constructed unbiased dental template and parametric dental models to new scenarios where the focus is on the upper or lower teeth row. In the future, we plan to further evaluate the performance of the unbiased dental template and parametric dental models by applying them to more specific applications.
## Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (grant number 62131015), the Science and Technology Commission of Shanghai Municipality (STCSM) (grant number 21010502600), and the Key R&D Program of Guangdong Province, China (grant number 2021B0101420006).
|
2307.15664 | Phase diagram of one-dimensional driven-dissipative exciton-polariton
condensates | We consider a one-dimensional driven-dissipative exciton-polariton condensate
under incoherent pump, described by the stochastic generalized Gross-Pitaevskii
equation. It was shown that the condensate phase dynamics maps under some
assumptions to the Kardar-Parisi-Zhang (KPZ) equation, and the temporal
coherence of the condensate follows a stretched exponential decay characterized
by KPZ universal exponents. In this work, we determine the main mechanisms
which lead to the departure from the KPZ phase, and identify three possible
other regimes: (i) a soliton-patterned regime at large interactions and weak
noise, populated by localized structures analogue to dark solitons; (ii) a
vortex-disordered regime at high noise and weak interactions, dominated by
point-like phase defects in space-time; (iii) a defect-free reservoir-textured
regime where the adiabatic approximation breaks down. We characterize each
regime by the space-time maps, the first-order correlations, the momentum
distribution and the density of topological defects. We thus obtain the phase
diagram at varying noise, pump intensity and interaction strength. Our
predictions are amenable to observation in state-of-art experiments with
exciton-polaritons. | Francesco Vercesi, Quentin Fontaine, Sylvain Ravets, Jacqueline Bloch, Maxime Richard, Léonie Canet, Anna Minguzzi | 2023-07-28T16:51:26Z | http://arxiv.org/abs/2307.15664v1 | # Phase diagram of one-dimensional driven-dissipative exciton-polariton condensates
###### Abstract
We consider a one-dimensional driven-dissipative exciton-polariton condensate under incoherent pump, described by the stochastic generalized Gross-Pitaevskii equation. It was shown that the condensate phase dynamics maps under some assumptions to the Kardar-Parisi-Zhang (KPZ) equation, and the temporal coherence of the condensate follows a stretched exponential decay characterized by KPZ universal exponents. In this work, we determine the main mechanisms which lead to the departure from the KPZ phase, and identify three possible other regimes: (i) a soliton-patterned regime at large interactions and weak noise, populated by localized structures analogue to dark solitons; (ii) a vortex-disordered regime at high noise and weak interactions, dominated by point-like phase defects in space-time; (iii) a defect-free reservoir-textured regime where the adiabatic approximation breaks down. We characterize each regime by the space-time maps, the first-order correlations, the momentum distribution and the density of topological defects. We thus obtain the phase diagram at varying noise, pump intensity and interaction strength. Our predictions are amenable to observation in state-of-art experiments with exciton-polaritons.
## I Introduction
Exciton-polaritons (polaritons) are photon-like quasiparticles that can behave, as a many-body system, as a quantum fluid made of light [1]. They are obtained in semiconductor optical microcavities when photons in the cavity mode are in the strong coupling regime with electron-hole excitations within the cavity medium, known as excitons [2]. Owing to the short cavity lifetime, an external excitation is required to replenish the system and achieve a non-equilibrium steady-state. Under incoherent excitation, a striking phenomenon exhibited by these systems is the nonequilibrium analogue of Bose-Einstein condensation, first reported in Ref. [3]. Several other paradigmatic features of quantum fluids have been reported in polariton quantum fluids, such as superfluidity [4], solitonic excitations [5; 6] and quantized vortices [7].
In the non-equilibrium condensation mechanism, upon crossing the excitation intensity threshold, the macroscopic phase of exciton-polariton condensates is spontaneously chosen and owing to the incoherent nature of the excitation, remains free to fluctuate [1]. A major difference of this condensate with respect to its equilibrium counterpart, is that it can display a critical behavior described by the Kardar-Parisi-Zhang equation [8], a non-linear Langevin equation which captures the universal dynamics of an extremely large collection of non-equilibrium systems, including stochastically growing interfaces, randomly stirred fluids, directed polymers in random media and many more [9]. Indeed, it was established in recent experiments [10] that in one dimension and for condensates of negative effective mass, the condensate phase exhibits the universal scaling behavior of the KPZ universality class, as was predicted in [11; 12]. These fluctuations determine the coherence in space and time of the exciton-polariton condensate, characterized by stretched exponential decays. The emergence of KPZ universal scaling was demonstrated experimentally thanks to precise interferometric measurements of spatial and temporal coherence, and supported by numerical modeling of the condensate evolution [10]. Besides, numerical simulations of the microscopic models showed that the phase fluctuations probability distributions - non-Gaussian and skewed - match the characteristic distributions of KPZ stochastic processes [13; 14].
The effective description of polariton driven-dissipative condensates is given by the stochastic generalized Gross-Pitaevskii equation (gGPE) [1], a semiclassical nonlinear model in which drive and loss enter as complex coefficients and a white noise accounts for the stochastic fluctuations intrinsically due to drive and dissipation. Under some conditions detailed in Ref. [15], its deterministic part is equivalent to the complex Ginzburg-Landau equation (CGLE), which has been widely studied both analytically and numerically [16]. Its rich phase diagram has been shown to exhibit nonlinear localization, solitons, pattern selection and spatio-temporal chaos [16; 17; 18]. Moreover, in Ref. [19] the large-scale effective dynamics of the CGLE in the regime of phase turbulence (i.e. in the absence of defects) was predicted to be controlled by the KPZ equation.
A rich variety of dynamical regimes can also be accessed in polariton quantum fluids by tuning the parameters
of the gGPE. For instance, soliton generation and propagation have been predicted and experimentally observed both in one- and in two-dimensional geometries [1; 5; 20], dynamical instabilities and defect nucleation preventing extended coherence have been studied in Refs. [21; 22; 23; 24; 15], while the dynamics of dark solitons was characterized in Refs. [25; 26]. In Ref. [27], by solving a compact KPZ equation as well as the stochastic CGLE, it was shown that the KPZ coherence of one-dimensional driven-dissipative condensates can be disrupted due to the compact nature of the condensate phase, which can host spatio-temporal vortices. The phase diagram at varying noise strength and KPZ nonlinearity was investigated, and a vortex-turbulent regime at large effective KPZ nonlinearity was identified by analyzing the momentum distribution decay.
In this work, we determine the full phase diagram of a one-dimensional incoherently pumped polariton condensate, in a parameter space constituted by three key experimentally relevant parameters: the pump intensity, the blueshift of the condensate energy - related to the exciton-exciton interaction strength - and the noise amplitude. We first identify the extent of the KPZ regime for conditions close to the experiments of Ref. [10], retrieving the KPZ scaling in the decay of the coherence. We then identify the main mechanisms of departure from this regime, and report the emergence of three other regimes. The first one sets in within the low-noise regime at increasing blueshift and is characterized by the stochastic proliferation and dynamics of solitonic structures. It is sustained by the dynamical instability that favors the development of long-lived high-contrast defects in the condensate. The second one emerges in the high-noise and moderate-pump regime, and is characterized by the stochastic formation of defects that form quantized phase vortices in the space-time plane. In this regime, the phase-phase correlation function grows linearly in time and deviates from the first-order correlation function. The latter still displays the KPZ scaling. In fact, between one defect and the next, the phase of the condensate fluctuates according to KPZ scaling, but the presence of vortices adds large jumps to the phase trajectories. Hence, this vortex regime may be viewed as piece-wise KPZ. The third one appears in the high-pump and moderate-noise regime, and corresponds to the breakdown of the adiabatic approximation, and of the mapping to the KPZ equation. In this regime, the dynamics of the reservoir cannot be neglected and leads to larger fluctuations of both the reservoir and the condensate densities. Here, the KPZ scaling is washed out, even though the phase remains defect-free.
For each of the three regimes, as well as for the KPZ one, we show a typical time evolution, describe its salient features and calculate the first-order correlation function within the steady state to determine the decay of the coherence. We also determine the momentum distribution, as well as the phase-phase correlations, to characterize the effective phase dynamics. All these studies are eventually merged into a single phase diagram of a one-dimensional polariton condensate, within a parameter space directly relevant to experiments.
## II Model
We consider a one-dimensional exciton-polariton condensate coupled to an excitonic reservoir which is pumped by an external optical excitation, and that feeds particles into the condensate via spontaneous and stimulated electronic scattering. The dynamics of the condensate wavefunction \(\psi(x,t)\) and of the exciton reservoir density \(n_{R}(x,t)\) are described, respectively, by the generalized stochastic Gross-Pitaevskii equation and by the rate equation [1]:
\[i\hbar\partial_{t}\psi(x,t)=\Big{[}\epsilon(\hat{k})+g|\psi(x,t) |^{2}+2g_{R}n_{R}(x,t)\] \[\qquad+\frac{i\hbar}{2}\left(Rn_{R}(x,t)-\gamma(\hat{k})\right) \Big{]}\psi(x,t)+\hbar\eta(x,t) \tag{1}\] \[\partial_{t}n_{R}(x,t)=P(x)-\left(\gamma_{R}+R|\psi(x,t)|^{2} \right)n_{R}(x,t) \tag{2}\]
with \(\hat{k}=-i\partial_{x}\). The pump \(P(x)\) describes the laser excitation of the reservoir, which we consider to be homogeneous \(P(x)=P\), \(\gamma_{R}\) is the rate for spontaneous decay of excitons and \(Rn_{R}>0\) the exciton scattering rate into the condensate, representing the condensate drive under incoherent pumping. Losses are taken into account through the \(k\)-dependent linewidth \(\gamma(\hat{k})\). Its momentum dependence around \(k=0\) is accounted for in a parabolic approximation as \(\gamma(k)=\gamma_{0}+\gamma_{2}k^{2}\), where \(\gamma_{0}^{-1}\) represents the polariton lifetime and \(\gamma_{2}>0\) is a diffusion constant.
It was shown in Ref. [23] that a positive effective mass leads to fragmentation due to self-focusing instabilities, hampering large spatial coherence. Hence, like in the experiments of Ref. [10], we add a periodic potential in order to achieve an effective negative mass for polaritons. This potential results in a polaritonic dispersion relation that we model effectively via the kinetic term \(\epsilon(k)=E_{0}+2J\text{cos}(ka)\), where \(a\) is the lattice spacing, and in which a condensate forms at \(k=0\). The polaritons effective mass around \(k=0\) is thus negative, ensuring effective repulsive polariton-polariton interactions in presence of the reservoir. In the experiment, polaritons are in a Lieb lattice providing them with an effective mass around the condensate state of \(m_{exp}=-3.3\cdot 10^{-6}m_{e}\)[10], where \(m_{e}\) is the electron mass. We thus set \(J=\hbar^{2}/(2|m_{exp}|a^{2})\) and choose \(E_{0}=-2J\) such that the reference energy is zero at \(k=0\).
The condensate gain and loss processes result in stochastic fluctuations which are described by a complex Gaussian noise \(\eta(x,t)\), with
\[\langle\eta(x,t)\rangle=0 \tag{3}\] \[\langle\eta(x,t)\eta^{*}(x^{\prime},t^{\prime})\rangle=2\sigma\delta(x-x^{\prime})\delta(t-t^{\prime}). \tag{4}\]
In the microscopic model, the fluctuations amplitudes are determined as \(\sigma_{0}=\frac{1}{2}\left(Rn_{R}+\gamma\right)\)[28]. In this work we assume that additional fluctuation sources may be present,
or added in a controlled way to the system by suitable noise engineering. We henceforth consider the noise amplitude \(\sigma\) as an adjustable parameter, and also allow for values smaller than the nominal noise \(\sigma_{0}\) to obtain a complete view of the phase diagram.
It is useful to recall the homogeneous mean-field solution of equations (1) and (2), given by \(n_{R0}=\frac{\gamma_{0}}{R}\) and \(\psi_{0}(t)=\sqrt{\rho_{0}}\mathrm{e}^{-\mathrm{i}\mu_{0}t}\), with \(\rho_{0}=\frac{\gamma_{R}}{R}(p-1)\). Here \(p=\frac{P}{P_{th}}=\frac{PR}{\gamma_{0}\gamma_{R}}\) is the reduced pump, where \(P_{th}\) is the threshold pump intensity for condensation. The blueshift due to the two-body interactions is given by \(\mu_{0}=g\rho_{0}+2g_{R}n_{R0}\), with \(g,g_{R}\geq 0\). Since we aim to stay close to the conditions of the experiments of Ref. [10], in which the interactions are dominated by the reservoir of excitons, we take \(g=0\) all through the paper. In this case \(\mu\simeq\mu_{\mathrm{th}}=2g_{R}n_{R0}\), the blueshift of the polariton spectrum when the pump is tuned at the threshold value (i.e. \(\rho=0\)).
Under certain assumptions discussed below, starting from the stochastic Gross-Pitaevskii equation (1), the dynamics of the phase \(\theta\) can be mapped [11; 12] to the one-dimensional Kardar-Parisi-Zhang equation [8]:
\[\partial_{t}\theta=\nu\partial_{x}^{2}\theta+\frac{\lambda}{2}\left(\partial_ {x}\theta\right)^{2}+\sqrt{D}\xi. \tag{5}\]
The KPZ parameters are directly related to the microscopic ones according to \(\nu=\frac{1}{2}\left(\alpha\frac{\hbar}{2m}+\gamma_{2}\right),\lambda=-2\left( \frac{\gamma_{0}}{2m}-\alpha\gamma_{2}\right),D=\frac{\sigma}{2\rho_{0}}(1+4a ^{2})\) and \(\alpha=\frac{\left(\gamma_{R}+R\rho_{0}\right)^{2}}{R^{2}P}\left(g-\frac{2g_{ R}PR}{\left(\gamma_{R}+R\rho_{0}\right)^{2}}\right)\)[10]. Equation (5) holds assuming that the density fluctuations of both the condensate and the reservoir and their spatial variations are negligible as compared to the phase ones. Also, the mapping is valid as long as the phase is a slowly varying function in space and time. We show in the following that these assumptions break down in various cases, leading to the formation of a soliton-patterned, a disordered-vortex, or a reservoir-textured regime.
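For illustration, Eq. (5) itself can be integrated with a simple finite-difference Euler-Maruyama scheme on a periodic grid; the parameter values in the sketch below are placeholders and do not correspond to the microscopic mapping above:

```python
# Euler-Maruyama integration of the KPZ equation (5) on a periodic 1D grid.
# The noise increment assumes the correlator of Eq. (4), i.e. variance 2*D*dt/dx
# per site. Parameter values are placeholders for illustration only.
import numpy as np

def kpz_evolve(nu=1.0, lam=1.0, D=0.1, L=256, dx=1.0, dt=0.05, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(L)
    for _ in range(n_steps):
        lap = (np.roll(theta, -1) - 2 * theta + np.roll(theta, 1)) / dx**2
        grad = (np.roll(theta, -1) - np.roll(theta, 1)) / (2 * dx)
        noise = rng.standard_normal(L) * np.sqrt(2 * D * dt / dx)
        theta = theta + dt * (nu * lap + 0.5 * lam * grad**2) + noise
    return theta

theta = kpz_evolve()
print(theta.std())   # the interface width grows in time with the KPZ exponent beta
```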
## III Characterization : Observables and Methods
### First-order coherence, phase correlations and momentum distribution
We characterize the coherence of the condensate by computing the first-order space-time correlation function
\[g^{(1)}(\Delta x,\Delta t)=\frac{\langle\psi^{*}(x_{0}+\Delta x,t_{0}+\Delta t )\psi(x_{0},t_{0})\rangle}{\sqrt{\langle|\psi(x_{0}+\Delta x,t_{0}+\Delta t)|^ {2}\rangle\langle|\psi(x_{0},t_{0})|^{2}\rangle}}, \tag{6}\]
where \(\langle..\rangle\) denotes the average over the stochastic trajectories generated by the noise \(\eta\). Upon assuming that density-density and density-phase correlations are negligible, the behavior of the coherence is determined by the phase-phase correlations, as follows from the cumulant expansion of \(g^{(1)}(\Delta x,\Delta t)\), which gives to lowest order
\[|g^{(1)}(\Delta x,\Delta t)|\approx\mathrm{e}^{-\frac{1}{2}C_{\theta}(\Delta x,\Delta t)} \tag{7}\]
with \(C_{\theta}(\Delta x,\Delta t)=\langle\left[\theta(x_{0}+\Delta x,t_{0}+\Delta t )-\theta(x_{0},t_{0})\right]^{2}\rangle\).
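A minimal sketch of how \(|g^{(1)}|\) can be estimated from an ensemble of simulated trajectories, assuming an array layout psi[realization, time, space] and a fixed reference point (both assumptions of this example), is:

```python
# Estimate |g1(dx, dt)| from an ensemble of trajectories psi[r, t, x]
# (realization, time, space), with the reference point (x0, t0) fixed and the
# average taken over noise realizations as in Eq. (6). Toy data for illustration.
import numpy as np

def g1_abs(psi, x0=0, t0=0):
    ref = psi[:, t0, x0]                                        # psi(x0, t0) per realization
    num = np.mean(np.conj(psi) * ref[:, None, None], axis=0)    # <psi*(x,t) psi(x0,t0)>
    den = np.sqrt(np.mean(np.abs(psi) ** 2, axis=0) * np.mean(np.abs(ref) ** 2))
    return np.abs(num / den)                                    # indexed by (dt, dx) when t0 = x0 = 0

rng = np.random.default_rng(1)
psi = rng.standard_normal((50, 20, 64)) + 1j * rng.standard_normal((50, 20, 64))
corr = g1_abs(psi)
print(corr.shape, corr[0, 0])                                   # corr[0, 0] = 1 by construction
```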
In the KPZ regime the phase correlation function \(C_{\theta}(\Delta x,\Delta t)\) is predicted to display the universal scaling form [8]:
\[C_{\theta}^{(\mathrm{KPZ})}(\Delta x,\Delta t) =C_{0}\Delta t^{2\beta}g^{(\mathrm{KPZ})}\left(\alpha_{0}\frac{ \Delta x}{\Delta t^{1/z}}\right)\] \[\sim\begin{cases}|\Delta x|^{2\chi}&t\to 0\\ \Delta t^{2\beta}&x\to 0\end{cases} \tag{8}\]
where \(\chi=1/2\), \(\beta=1/3\), \(z=\chi/\beta=3/2\) are the exact universal Kardar-Parisi-Zhang exponents in one dimension, \(g^{(\mathrm{KPZ})}(y)\) is the universal scaling function calculated exactly in Ref. [29], and \(C_{0},\alpha_{0}\) are non-universal normalization parameters.
Equal-time correlations are also used to obtain the polariton momentum distribution, given by
\[n(k)=\langle\psi^{*}(k,t_{0})\psi(k,t_{0})\rangle \tag{9}\]
where \(\psi(k,t)=\int dx\,e^{ikx}\psi(x,t)\).
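For illustration, the momentum distribution can be estimated from equal-time snapshots with a discrete Fourier transform; the array layout and the normalization below are assumptions of this sketch:

```python
# Estimate the momentum distribution of Eq. (9) from equal-time snapshots
# psi_t0[r, x] (realization, space); a is the lattice spacing. The FFT sign
# convention only mirrors k and does not affect |psi(k)|^2.
import numpy as np

def momentum_distribution(psi_t0, a=1.0):
    psi_k = np.fft.fft(psi_t0, axis=1) * a                # discrete approximation of the integral
    nk = np.mean(np.abs(psi_k) ** 2, axis=0)              # average over noise realizations
    k = 2 * np.pi * np.fft.fftfreq(psi_t0.shape[1], d=a)
    order = np.argsort(k)
    return k[order], nk[order]

rng = np.random.default_rng(2)
snapshots = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(100, 256)))
k, nk = momentum_distribution(snapshots)
print(k.shape, nk.shape)
```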
### Soliton and vortex counting
When varying the parameters of the gGPE, we evidence various types of topological defects, in particular dark or grey solitons and space-time vortices. Recalling that a dark soliton is characterized by a vanishing density at its core and a \(\pi\) jump in the phase [30], we estimate the soliton density by
\[n_{s}(t)=\frac{1}{\pi L}\int_{0}^{L}|\partial_{x}\theta(x,t)|dx, \tag{10}\]
For a soliton train made of \(N_{s}\) dark solitons with cores at positions \(x_{n}\) one would have \(|\partial_{x}\theta(x)|\simeq\pi\sum_{n=1}^{N_{s}}\delta(x-x_{n})\) hence \(Ln_{s}=\int_{0}^{L}\sum_{n=1}^{N_{s}}\delta(x-x_{n})dx=N_{s}\). For the case of grey solitons, the above estimator yields \(Ln_{s}<N_{s}\), still providing a reliable estimate of the soliton density, especially at low noise. Note that this estimator also counts the phase variations due to the noise perturbation, contributing a background value which becomes important at large noise. In the numerical calculations, we evaluate the phase derivative as \(\partial_{x}\theta(x,t)=-ie^{-i\theta(x,t)}\partial_{x}\,e^{i\theta(x,t)}\) for better accuracy.
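A minimal sketch of this estimator, using the phase-derivative evaluation above to avoid \(2\pi\) branch cuts, could read (the grid spacing and the toy two-kink profile are illustrative assumptions):

```python
# Estimator of Eq. (10) for the soliton density, using the phase-derivative
# evaluation described above to avoid 2*pi branch cuts. dx is the grid spacing.
import numpy as np

def soliton_density(psi, dx):
    phase_factor = psi / np.abs(psi)                            # e^{i theta(x)}
    dtheta_dx = np.imag(np.conj(phase_factor) * np.gradient(phase_factor, dx))
    L = dx * psi.size
    return np.sum(np.abs(dtheta_dx)) * dx / (np.pi * L)

x = np.linspace(0, 100, 1024)
theta = 0.5 * np.pi * (np.tanh(x - 30) + np.tanh(x - 70))       # two ~pi phase kinks
psi = np.exp(1j * theta)
print(soliton_density(psi, dx=x[1] - x[0]))                     # about 2 / L for two dark solitons
```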
As discussed e.g. in Ref. [27], vortex defects in space time may appear in 1D quantum systems. We account for space-time vortices by considering the \(z\)-component of the space-time vorticity field \(\Omega(x,t)=\partial_{x}\partial_{t}\theta(x,t)-\partial_{t}\partial_{x}\theta (x,t)\). The flux of \(\Omega(x,t)\) through the surface
\(T\) gives a multiple of \(2\pi\), allowing us to estimate the total density of vortices as:
\[n_{v}=\frac{1}{2\pi LT}\int_{0}^{T}\int_{0}^{L}|\Omega(x,t)|dtdx \tag{11}\]
which accounts for the positively and negatively charged vortices. Note that, although the two types of topological defects are markedly different, the two indicators defined in (10) and (11) share some similarities and provide comparable estimates for the number of defects.
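A minimal sketch of this vortex counting, accumulating the wrapped phase winding around each elementary plaquette of the \((x,t)\) grid (grid layout and test profile are assumptions of the example), could read:

```python
# Space-time vortex counting: the wrapped phase winding is accumulated around
# each elementary plaquette of the (x, t) grid; a winding of +-2*pi marks a
# vortex core, as in Eq. (11). theta[t, x] is the phase map, dx and dt the steps.
import numpy as np

def wrap(dphi):
    return (dphi + np.pi) % (2 * np.pi) - np.pi        # map increments into [-pi, pi)

def vortex_density(theta, dx, dt):
    d_x = wrap(np.diff(theta, axis=1))                 # increments along x
    d_t = wrap(np.diff(theta, axis=0))                 # increments along t
    # circulation around each plaquette (bottom + right - top - left edge)
    circ = d_x[:-1, :] + d_t[:, 1:] - d_x[1:, :] - d_t[:, :-1]
    charges = np.rint(circ / (2 * np.pi))
    area = (theta.shape[0] - 1) * dt * (theta.shape[1] - 1) * dx
    return np.sum(np.abs(charges)) / area

x, t = np.arange(64), np.arange(64)
X, T = np.meshgrid(x, t)
theta = np.arctan2(T - 32.5, X - 32.5)                 # one phase defect in the (x, t) plane
print(vortex_density(theta, dx=1.0, dt=1.0) * 63 * 63) # about 1 defect in the window
```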
### Bogoliubov stability diagram
The development of contrasted structures in the condensate, out of a weak incoming noise, suggests that the condensate is in a regime in which its fluctuations are unstable. We thus determine the Bogoliubov spectrum of the condensate elementary excitations \(E_{\rm Bog}(k)\), which describes the linearized coupled dynamics of small fluctuations of the condensate and reservoir density around the homogeneous mean-field solution [1]. For each given set of parameters, we detect the presence of modulational instabilities when the imaginary part of the Bogoliubov spectrum is positive [1] (see Fig. 1). We follow the maximum value of \({\rm Im}\ \{E_{\rm Bog}(k)\}\) to track the unstable regions in parameter space. The resulting Bogoliubov stability diagram in the \((\mu_{\rm th},p)\) plane is illustrated in Figure 1.
We remark that the origin of the instability is not due to a change in sign of the effective two-body coupling strength \(g_{\rm eff}=g-\frac{2g\eta n_{0}}{\gamma_{R}p}\), since, as \(g=0\), it is always negative in our case. If that were the case, the stable and unstable areas in Fig. 1 would be separated by the straight line defined by \(g_{\rm eff}=0\). The instability shown here is rather connected with the breakdown of adiabaticity of the reservoir dynamics (2), similarly to [21].
For later comparison, we also calculate the momentum distribution within the linear Bogoliubov approximation, in the limit of adiabatic evolution of the reservoir.
\[n_{k}=2\pi|\psi_{0}|^{2}\delta(k)+\frac{\sigma}{\Gamma_{0}+\Gamma_{k}}\left(1+ \frac{\Gamma_{0}^{2}+\tilde{\mu}^{2}}{E_{k}^{2}+\Gamma_{k}^{2}+2\Gamma_{0} \Gamma_{k}}\right) \tag{12}\]
where \(\Gamma_{0}=\frac{1}{2}\gamma_{0}\left(\frac{p-1}{p}\right)\), \(\Gamma_{k}=\frac{1}{2}\gamma_{2}k^{2}\), \(E_{k}^{2}=\epsilon_{k}(\epsilon_{k}+2\tilde{\mu})\) and \(\tilde{\mu}=g_{\rm eff}|\psi_{0}|^{2}\). The large-\(k\) behavior of the momentum distribution is given by \(n_{k}\sim\frac{\sigma}{\Gamma_{k}}\sim k^{-2}\), which is thus determined by the momentum dependence of the polariton linewidth.
The behavior of the generalized Gross Pitaevskii equation in a dynamically unstable regime has been studied in various works [21; 22; 23; 15]. While the onset of instabilities can be described in terms of the linearized evolution of Bogoliubov modes, their full development breaks down the Bogoliubov approximation such that their dynamics can only be accessed numerically.
### Numerical methods
We investigate the dynamics of the driven-dissipative exciton-polariton condensate via the numerical integration of the coupled equations (1) and (2). We fix the system size to \(L=1024a\). For each time step \(dt\), we compute the evolution using a standard semi-implicit split-step method for (1) and an explicit fourth-order Runge-Kutta method for (2), then adding to each point of \(\psi(x,t)\) the stochastic term \(\sqrt{\sigma\left(\frac{dt}{a}\right)}\left(r_{1}+ir_{2}\right)\) following the Euler-Maruyama scheme, with \(r_{1,2}\) independent random numbers sampled from a normal distribution \(\mathcal{N}(0,1)\).
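A simplified, explicit variant of one such time step (with \(\hbar=1\) and all parameter values passed in as placeholder assumptions; the actual scheme used in this work is semi-implicit) could be sketched as:

```python
# Simplified explicit sketch of one integration step: split-step Fourier for
# Eq. (1), RK4 for Eq. (2), Euler-Maruyama noise. Units with hbar = 1; all
# parameters are passed in as a tuple and their values are assumptions.
import numpy as np

def step(psi, nR, k, dt, a, pars, rng):
    E0, J, g, gR, R, gamma0, gamma2, gammaR, P, sigma = pars
    # half linear step in k-space: dispersion eps(k) and k-dependent losses gamma(k)
    eps_k = E0 + 2 * J * np.cos(k * a)
    lin = np.exp((-1j * eps_k - 0.5 * (gamma0 + gamma2 * k**2)) * dt / 2)
    psi = np.fft.ifft(lin * np.fft.fft(psi))
    # nonlinear interaction and reservoir gain step in real space
    psi = psi * np.exp((-1j * (g * np.abs(psi)**2 + 2 * gR * nR) + 0.5 * R * nR) * dt)
    # second half linear step
    psi = np.fft.ifft(lin * np.fft.fft(psi))
    # RK4 update of the reservoir rate equation (2)
    f = lambda n: P - (gammaR + R * np.abs(psi)**2) * n
    k1 = f(nR); k2 = f(nR + dt * k1 / 2); k3 = f(nR + dt * k2 / 2); k4 = f(nR + dt * k3)
    nR = nR + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    # Euler-Maruyama stochastic term sqrt(sigma*dt/a)*(r1 + i r2)
    r1, r2 = rng.standard_normal(psi.size), rng.standard_normal(psi.size)
    psi = psi + np.sqrt(sigma * dt / a) * (r1 + 1j * r2)
    return psi, nR
```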
The runs are performed according to the following procedure. We set as initial condition the homogeneous mean-field solution \(\psi(x,0)=\sqrt{\rho_{0}}\). We then run the full stochastic evolution, during which we record the variation of the space-averaged density \(\tilde{\rho}(t)=\frac{1}{L}\int_{0}^{L}|\psi(x,t)|^{2}dx\), up to a finite time \(t_{0}\) after which the system reaches a steady-state. The value of \(t_{0}\) depends on the specific parameters of Eqs. (1, 2) and is thus determined case by case. We identify different regimes by characterizing the steady state dynamics of \(\psi(x,t)\) and its first-order correlations, where the reference time is set to \(t_{0}\). We employ the numerical simulations to explore the different dynamical regimes realized by varying the value of the blueshift \(\mu_{\rm th}\), the reduced pump \(p\) and the intensity of stochastic fluctuations \(\sigma\).
Figure 1: (Color online) (a, b) Real and imaginary parts of the Bogoliubov spectrum for \(p=1.15\), \(\mu_{\rm th}=0.6\ {\rm meV}\). (c) Bogoliubov stability diagram of Eqs. (1, 2) in the plane \((\mu_{\rm th},p)\). The instability is given by the condition \(E_{max}=\max\{{\rm Im}(E_{\rm Bog}(k))\}>0\).
Results
### Kardar-Parisi-Zhang regime
We start by characterizing the condensate in the parameter region which is dynamically stable according to the Bogoliubov analysis (see Fig. 1). Typical space-time maps for the condensate density and phase are shown in Fig. 2a,b. In order to highlight at best the features of the dynamics, in the space-time maps the phase is rescaled as \(\theta(x,t)=\mathrm{Arg}\left[\psi\,e^{i\omega_{r}t}\right]\), which amounts to shifting the origin of the energies by an arbitrary value \(\hbar\omega_{r}\), suitably chosen to highlight the phase structure. The images show a relatively smooth density profile, with small modulations due to the noise fluctuations, and a uniform phase front showing no dislocations. We then calculate the space-time first-order correlation function \(g^{(1)}(\Delta x,\Delta t)\) by averaging over the trajectories obtained for independent noise realizations. The results are shown in Fig. 2c,d. The KPZ universal scaling clearly emerges in the \(g^{(1)}\) correlations in the space-time plane when plotting the rescaled correlator \(g^{(\mathrm{EP})}(\Delta x,\Delta t)=-2C_{0}^{-1}\Delta t^{-2\beta}\log|g^{(1)} (\Delta x,\Delta t)|\), with \(\beta=1/3\) the KPZ critical exponent and \(C_{0}\) a normalisation constant. Indeed, owing to the expression (8), \(g^{(\mathrm{EP})}\) is expected to depend not on space and time independently, but only on the ratio (the scaling variable) \(\Delta x/\Delta t^{1/z}\). This is manifest in the density plot as parallel level lines of slope \(\Delta t\sim\Delta x^{3/2}\), and is also confirmed by the data collapse onto a single curve given by the exact KPZ scaling function \(g^{(\mathrm{KPZ})}\)[29].
### Soliton-patterned regime
We now analyze the dynamics of the exciton-polariton condensate in the parameter region which is dynamically unstable according to the linear Bogoliubov analysis (see Fig. 1), focusing on the low-noise regime \(\sigma\leq\sigma_{0}\). We find that the dynamical instability found in the linearized equations of motion within the Bogoliubov approximation favors the emergence of spatial modulations that are then hampered at the nonlinear level, leading to a patterned regime with a stationary average density lower than in the KPZ regime.
The space-time maps for the density and phase of a typical trajectory are shown in Fig. 3. In contrast with the KPZ regime, we observe the nucleation and proliferation of long-lived, spatially localized structures characterized by density dips and \(\approx\pi\) phase jumps, which dominate the long-time dynamics. We refer to them as dark solitons, in analogy to the corresponding structures which emerge in the solution of the non-linear Schrodinger equation in closed quantum systems.
In order to determine the parameter region where solitons emerge, we follow the space and time average of the density profile \(\langle\bar{\rho}\rangle\) and of the soliton density \(\langle n_{s}\rangle\), shown in Fig. 4. The soliton regime, with \(\langle n_{s}\rangle>0\), is characterized by a strongly reduced density. The information displayed by both indicators is consistent, and globally reflects the picture obtained from the Bogoliubov stability analysis. The Bogoliubov stability diagram thus provides a qualitative determination of the phase boundary of the soliton regime in the \((\mu_{\mathrm{th}},p)\) plane.
Yet, the data from direct numerical simulations exhibits two additional features: i) the soliton-free region is reduced at low pump values compared to the Bogoliubov instability boundary; ii) the density of dark solitons is strongly dependent on the pump power. This feature was also reported in [15]. The effect i) is due to a renormalization by the pump power of the relative noise amplitude as \(\sigma/|\psi|^{2}\propto\sigma/(p-1)\), which becomes large at low pump powers, undermining the validity of the Bogoliubov approximation. This increase of the effective noise amplitude at small \(p\) promotes the random activation of soliton-like defects, which thus leads to a larger density of solitons. The second feature ii) is connected to the appearance, together with solitons, of two typical length scales \(d_{s}\) and \(L_{s}\), which can be interpreted as the typical spacing and size of solitons respectively. They are both found to depend on the pump as
\(d_{s},L_{s}\sim 1/\sqrt{p-1}\) (see Appendix A). They thus affect the total density of solitons which can approximately be estimated as \(n_{s}=1/(L_{s}+d_{s})\propto\sqrt{p-1}\).
The space-time first-order correlation function for the soliton-patterned regime is shown in Fig. 3. We find that it approximately corresponds to a Gaussian decay in space-time as \(g^{(1)}(\Delta x,\Delta t)\sim\mathrm{e}^{-(a\Delta t^{2}+b\Delta x^{2})}\). The coherence decay hence takes a scaling form, with the scaling variable \(y=y_{0}x/t^{1/z}\) and exponents \(\chi=\beta=1\), and thus a dynamical exponent \(z=1\), yielding a data collapse according to
\[-2\mathrm{log}|g^{(1)}(\Delta x,\Delta t)|=C_{0}^{(sol)}\Delta t^{2}g^{(\mathrm{ sol})}\left(\alpha_{0}^{(sol)}\frac{\Delta x}{\Delta t}\right) \tag{13}\]
with \(g^{(\mathrm{sol})}(y)=1+y^{2}\) and \(C_{0}^{(sol)},\alpha_{0}^{(sol)}\) normalization constants. We remark however that the window of spatial and temporal coherence of the condensate is smaller than the one in the KPZ regime (notice the different horizontal scale as compared to Fig. 2). The spatial coherence is limited by the typical spacing between solitons \(d_{s}\). Let us mention that the scaling \(z=1\) was also recently observed in studies of the inviscid, or tensionless, limit of the KPZ equation [31; 32; 33]. The potential connection between the two systems is intriguing and would deserve further theoretical investigation.
The presence of solitons is also clearly detectable in the momentum distribution \(n(k)\), defined by Eq. (9), which is displayed in Fig. 4. One observes that two scales appear. They can be identified as \(k_{1}\sim d_{s}^{-1}\) and \(k_{2}\sim L_{s}^{-1}\), the typical spacing and size of solitons. For \(k<k_{1}\), the momentum distribution is \(k\)-independent, due to the loss of long-range first-order coherence. In the region \(k_{1}<k<k_{2}\), we observe a power law decay \(n_{k}\sim k^{-\alpha}\) with \(\alpha\) a non-universal exponent in the range \(6\lesssim\alpha\lesssim 7\). Similar decays were reported in [21; 27]. This behaviour is well marked at high pump power. Finally, for \(k>k_{2}\) the momentum distribution exhibits a power law decay \(n_{k}\sim k^{-2}\). This decay is determined by the microscopic details of the model (1), and in particular can be related to the quadratic growth of the polaritons linewidth \(\gamma(k)=\gamma_{0}+\gamma_{2}k^{2}\). It thus has a very different origin from the one predicted by [34] in the context of non-thermal Bose gases.
To further characterize the scaling range where the momentum distribution displays the \(k^{-\alpha}\) decay, we show in Fig. 4 the data of \(n(k)\) for different \(\mu_{\mathrm{th}}\) at fixed \(p\) (left panel) and for different \(p\) at fixed \(\mu_{\mathrm{th}}\) (right panel). In the latter, the momentum is rescaled as \(k\rightarrow\hat{k}=k/\sqrt{p-1}\), which yields a very good horizontal collapse of the curves
Figure 4: (Color online) (a) Average density of the condensate and (b) density of solitons (right panel) in the \((\mu_{\mathrm{th}},p)\) plane (note the inversion of the color scheme). The results are obtained by averaging over 100 independent trajectories starting from a sampling time \(t=50000\)\(ps\), sufficient to exceed the time needed for the instability to be activated, when present, and for the soliton-patterned regime to fully develop, and then averaged over the subsequent time interval \(\Delta t=5000\)\(ps\). The noise is \(\sigma=0.1\sigma_{0}\). The dashed line corresponds to the limit of modulational instability determined within the Bogoliubov analysis (see Fig. 1). Bottom: momentum distribution \(n(k)\) (in units of \(\rho_{0}\)) as a function of the momentum \(k\) (in units of \(\rho_{0}\)), for (c) fixed \(p\) and (d) fixed \(\mu_{\mathrm{th}}\). In (d) the momenta are rescaled with the pump as \(\hat{k}=k/\sqrt{p-1}\).
at intermediate wavevectors. This shows that the two scales \(k_{1}\) and \(k_{2}\) are independent of \(\mu_{\rm th}\), and depend on the pump power as \(\propto 1/\sqrt{p-1}\). The appearance of the power-law scaling \(n(k)\sim k^{-\alpha}\) with a large power-law exponent \(6\leq\alpha\leq 7\) is a clear and robust signature of the soliton regime, which can be used to detect this phase.
### Disordered vortex regime
We now focus on the effect of increasing the noise strength starting from the soliton-free region of the phase diagram where the system exhibits KPZ universal features. We show in Fig. 5 a typical space-time map obtained at large noise \(\sigma=5\sigma_{0}\). In this case the phase map displays a high density of phase slips, or space-time vortices, appearing as fork-like structures in the \((x,t)\) plane.
In order to characterize the activation of these defects, we investigate the behavior of the defect density \(\langle n_{v}\rangle\) when varying the noise strength and the pump power. The results are shown in Fig. 6 as a function of the rescaled noise \(\sigma/\rho_{0}\propto\sigma/(p-1)\). Indeed, this specific combination enters the effective noise \(D\) in the KPZ equation (5) and hence is the relevant parameter for the stochastic dynamics of the phase. For high rescaled noise values, the density of vortices displays an exponential behavior \(\langle n_{v}\rangle\propto\exp\left[-\frac{A(p-1)\sigma_{0}}{\sigma}\right]\), where \(A\) is a constant which depends on the other parameters of the model, consistently with the results of Ref. [27] where noise-activated space-time vortices were first reported.
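The constant \(A\) can be extracted from the measured vortex densities by a linear regression of \(\log\langle n_{v}\rangle\) against \((p-1)\sigma_{0}/\sigma\); a minimal Python sketch with purely illustrative placeholder numbers (not actual simulation data) is:

```python
import numpy as np

# Placeholder measurements of the vortex density at fixed pump power p
p, sigma0 = 1.3, 1.0
sigma = np.array([3.0, 4.0, 6.0, 9.0, 14.0, 20.0]) * sigma0
n_v = np.array([2.1e-4, 1.4e-3, 6.0e-3, 1.6e-2, 3.2e-2, 4.8e-2])

# In the exponential regime, log <n_v> = const - A*(p-1)*sigma0/sigma,
# so a linear fit against (p-1)*sigma0/sigma returns the slope -A.
xvar = (p - 1.0) * sigma0 / sigma
slope, intercept = np.polyfit(xvar, np.log(n_v), 1)
print("fitted A =", -slope)
```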
This feature is progressively lost upon decreasing the noise. Below a certain value, indicated with a dashed line in both left and right panels of Fig. 6, the noise and pump effects decouple and \(\langle n_{v}\rangle\) departs from the exponential law. We observe that, as the pump is further increased, the density of noise-activated defects stabilizes, with the density dips lasting longer in time and the defects progressively deforming into longer-lived structures. Their shape is intermediate between vortices and solitons. This feature is also visible in the momentum distribution, as shown in Fig. 7. At low pump, the activation of defects at increased noise translates into plateaus at low momenta, corresponding to a coherence loss beyond the average vortex separation length. At higher pump, we observe a modulation at intermediate momenta, a signature of long-lived defects.
We then determine the first-order correlation function in order to characterize the space-time coherence of the exciton-polariton condensate in the vortex phase. In the presence of topological defects, the validity of the relation (7) connecting the coherence of the condensate \(g^{(1)}(\Delta x,\Delta t)\) to the phase correlations \(C_{\theta}(\Delta x,\Delta t)\) eventually fails. Indeed, in \(g^{(1)}(\Delta x,\Delta t)\) the phase differences enter only through the complex exponential, hence the information about \(\approx 2\pi\) phase jumps is lost, whereas in the unwrapped phase correlations, these phase jumps have a strong effect and dominate the scaling. This feature is visible when the temporal coherence of \(g^{(\rm EP)}(\Delta x,\Delta t)=-2\Delta t^{-2\beta}\log|g^{(1)}(\Delta x,\Delta t)|\) and of the phase-phase correlations are compared for a low value of the pump, as shown in the left panel of Fig. 8. The two correlation functions differ for time intervals larger than \(t_{c}\), such that \(C_{\theta}(\Delta t>t_{c})>-2\log|g^{(1)}|(\Delta t>t_{c})\). Thus at low pump, as a result of this resilience of the coherence to \(2\pi\) phase jumps, the KPZ scaling in the coherence persists in the presence of vortices. This is due to the fact that the phase dynamics is piece-wise KPZ, i.e. the phase fluctuations grow according to the KPZ dynamics between two successive defects.
Figure 5: (Color online) (a, b) Space-time maps for the condensate density (in units of \(\rho_{0}\)) and for the rescaled phase \(\theta(x,t)=\text{Arg}\left[\psi(x,t)\,e^{i\omega_{r}t}\right]\) in the disordered-vortex regime. The parameters are \(\mu_{\rm th}=0.1\) meV, \(p=1.15\), \(\sigma=5\sigma_{0}\), \(\omega_{r}=0\). In the zoomed window, two spatio-temporal vortices of opposite charge are identified.
However, what distinguishes the vortex-disordered regime from the KPZ one is that, in the vortex regime, the phase-phase correlations depart from the KPZ universal behavior, and exhibit instead a linear scaling in time, compatible with the effect of random phase jumps, as suggested in Ref. [27], and similarly to Ref. [35]. When the pump is increased, the reservoir dynamics affects the correlations as is detailed in Sec.IV.4.
We note that, due to the phase jumps occurring in both cases, counting solitons gives results equivalent to the total vorticity shown in Fig. 6. Hence, the density of defects alone is not enough to discriminate between the vortex and the soliton phases. The joint analysis of the spatio-temporal correlations is needed, and shows two distinct scaling behaviors in the vortex and soliton phases.
Let us make a final remark concerning the crossover from the KPZ phase to the vortex-disordered phase. We observed that the density of vortices is a clear marker of the crossover, while the coherence is not very sensitive to it. This situation is reminiscent of spatio-temporal intermittency encountered and characterized in the context of the complex Ginzburg-Landau equation [16; 17; 18; 36]. This regime is usually reported in regions of the parameter space for which linear wave solutions are stable. It is characterized by extended regions of coherent waves which coexist with localized defects separating them. As was pointed out in Ref. [18], spatio-temporal intermittency is poorly characterized by global observables (such as the average density) or locally probed averages (such as space-time correlations via \(g^{(1)}\)). One rather needs to follow the full evolution of an observable in order to identify the actual dynamical regime. It is indeed the unwinding of the phase along the time axis that allows one to detect the crossover in the phase-phase correlations \(C_{\theta}\).
### Reservoir-textured regime
We finally investigate the mechanism responsible for the departure from the KPZ regime at high pump, focusing on the case \(\sigma=\sigma_{0}\), where topological defects are rare. In this case, the fading of the KPZ scaling is not associated with the appearance of defects, but can be attributed to the fact that the coupling between the reservoir and the condensate dynamics cannot be neglected, and the adiabatic approximation does not hold any more. To show this, one can compare the results obtained by running the full model (1) and (2), and by running its adiabatic approximation, where the reservoir dynamics is replaced by its stationary solution (\(\partial_{t}n_{R}=0\) in (2)). This comparison is shown in Fig. 9, where typical space-time maps of the condensate phase and density, and of the reservoir density are displayed in both cases. The density fluctuations are larger in the full (non-adiabatic) model than in its adiabatic approximation, forming structures that we call textures. In Fig. 9(c) we observe that this texture contrast increases at higher pump power. This comparison clearly indicates that this texturing results from the coupled condensate and reservoir dynamics.
In Fig. 10, we compare \(g^{(1)}(\Delta t,\Delta x=0)\), as calculated from Eqs. (1) and (2) without any approximation, to the adiabatic one for a low and a high pump value. While they are very similar at small pump, they differ at large pump, where a structure appears at large \(\Delta t\) in the non-adiabatic case. This structure is absent in the adiabatic approximation where a KPZ scaling persists in the coherence even at large pump.
Although the phase remains defect-free in this regime (see Fig. 9), the coherence of the condensate, measured by \(-2\log|g^{(1)}|\), progressively loses the KPZ scaling upon increasing pump power, as shown in the left panel of Fig. 11. This comes with a modulation of the phase correlations, resulting in a faster decay of the coherence which deviates from the KPZ decay. This can be quantified by determining the delay window \(\Delta T^{(\text{KPZ})}\) in which the KPZ scaling persists. For this, we perform a fit of the power law \(\Delta t^{2\beta}\) over increasing time windows \([\Delta t_{0}=10,\Delta t_{1}]\), computing the exponent \(\beta\) for each \(\Delta t_{1}\). We stop the iteration and define the KPZ delay window \(\Delta T^{(\text{KPZ})}=\Delta t_{stop}-\Delta t_{0}\) when \(2\beta>0.7\). The outcome
Figure 8: (Color online) (a) Comparison of the temporal correlations of the phase for low pump \(p=1.16\) at different noise values (from dark to light blue \(\sigma/\sigma_{0}=6,10,18,25,50\)), computed from the coherence \(-2\text{log}|g^{(1)}(t,x=0)|\) (solid lines) and directly as \(C_{\theta}\) (dashed lines). (b) Spatio-temporal coherence decay, compensated by the KPZ scaling as in Fig. 2, for \(p=1.15\), \(\sigma=25\sigma_{0}\). The normalisation is \(C_{0}=0.045\).
Figure 7: (Color online) Comparison of the behavior of the momentum distributions at increasing noise (\(1\leq\sigma\leq 25\)) for (a) \(p=1.16\) and (b) \(p=1.68\). The color code indicates the density of vortex-like defects \(\langle n_{v}\rangle\) of Fig. 6.
of this procedure in the whole \((p,\sigma)\) plane is displayed in the right panel of Fig. 11. We observe that the departure from the KPZ phase when increasing the pump intensity is only weakly dependent on the noise strength. We emphasize that a similar observation of the shrinking of the KPZ window as the pump power is increased was reported in Ref. [10], both from experimental measurements and numerical simulations.
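A schematic implementation of this windowed fit is sketched below in Python; it is illustrative only, with the coherence curve, grid and threshold standing in for the quantities extracted from the simulations.

```python
import numpy as np

def kpz_window(dt, coh, dt0=10.0, threshold=0.7):
    """Return the KPZ delay window Delta T^(KPZ) = dt_stop - dt0.

    dt, coh : 1D arrays with coh ~ dt^{2 beta} in the KPZ regime.
    The exponent 2*beta is fitted on [dt0, dt1] for growing dt1 and the
    iteration stops once the fitted exponent exceeds `threshold`.
    """
    order = np.argsort(dt)
    dt, coh = dt[order], coh[order]
    start = np.searchsorted(dt, dt0)
    for stop in range(start + 2, len(dt)):
        window = slice(start, stop + 1)
        # least-squares fit of log(coh) = 2*beta*log(dt) + const on the window
        two_beta, _ = np.polyfit(np.log(dt[window]), np.log(coh[window]), 1)
        if two_beta > threshold:
            return dt[stop] - dt0
    return dt[-1] - dt0  # KPZ scaling persists over the whole sampled range

# Dummy coherence curve with the KPZ growth ~ dt^{2/3} (for illustration only)
dt = np.linspace(1.0, 2000.0, 2000)
coh = 0.05 * dt ** (2.0 / 3.0)
print(kpz_window(dt, coh))
```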
In light of this analysis, let us come back to the behavior of the vortex density and momentum distribution in the \((p,\sigma)\) plane discussed in Sec. IV.3. In fact, we find that at high pump power, as the reservoir dynamics becomes important, the noise fluctuations can activate sparse long-lived solitons. These structures are responsible for the convergence of the computed vortex density to a small but non-zero value visible in Fig. 6 on the right side of the dashed lines, and also for the corresponding modulation in the momentum distribution of Fig. 7 at high pump. We hence attribute these two effects to the role of the reservoir. Finally, let us mention that the space-time maps in this regime are similar to those reported in Ref. [15] under the label of non-adiabatic instability, although the model and parameters studied in this reference are different from ours and cannot be directly compared.
### Phase Diagram
Our analysis is summarized in Fig. 12, displaying a three-dimensional schematic phase diagram at varying pump, noise and interaction strengths, which collects at a glance our results of Figs. 4.b, 6.a and 16. The diagram shows the transition from the KPZ phase to the soliton phase at increasing \(\mu_{\rm th}\), and the crossovers to the vortex-disordered phase at increasing \(\sigma\), and to the reservoir-textured phase at increasing \(p\), monitored by following the soliton and vortex densities, and the size of the KPZ window in time.
For the experimentally realistic model given by the coupled Eqs.(1) and (2), we highlight the existence of two new regimes in addition to the vortex one pointed out in [27], respectively characterized by the presence of solitons or reservoir-induced textures.
We also compare in Fig. 13 the momentum distribution in the four regimes. In the KPZ and reservoir-textured regimes, the momentum distribution scales as \(k^{-2}\) at
Figure 11: (Color online) (a) Behavior of the temporal coherence decay at different values of the pump (from \(p=1.05\), yellow, to \(p=2.0\), dark brown ) with \(\sigma=1\). (b) Temporal window of KPZ scaling \(\Delta T^{\rm(KPZ)}\) (see text). The black dotted line corresponds to \(\Delta T^{\rm(KPZ)}=200\) ps.
Figure 10: (Color online) Temporal coherence decay for two values of the pump \(p=1.16\) and \(p=1.68\) at \(\sigma=1.0\), \(\mu_{\rm th}=0.1\) meV, computed from the general case without adiabatic approximation (solid line), _i.e._ the coupled Eqs. (1) and (2), and from its adiabatic approximation (dash-dotted line). The black line serves as a guide to show the KPZ scaling \(\beta=1/3\).
Figure 9: (Color online) (a,b,c) Simulated evolution of the condensate density \(\rho\) (units of \(\rho_{0}\)), phase \(\theta=\text{Arg}\left[\psi\,e^{i\omega_{r}t}\right]\) and reservoir density \(n_{R}\) (in units of \(n_{R0}=\gamma_{0}/R\)), compared with (d,e,f) a simulation with the same parameters but implementing the adiabatic approximation for the reservoir. The parameters are \(p=1.68\), \(\sigma=\sigma_{0}\), \(\mu_{\rm th}=0.1\) meV, \(\hbar\omega_{r}=0.1\mu_{\rm th}\).
intermediate wavevectors, as in the linear theory obtained from the Bogoliubov approximation. Indeed, in one dimension, the exponential decay of the equal-time spatial correlations in the KPZ and in the equilibrium (linear) theory has the same exponent \(\chi=1/2\), which is equivalent to a \(k^{-2}\) decay in Fourier space. The soliton regime is signaled by the clear appearance of the \(k^{-\alpha}\) scaling, whereas in the vortex regime, the momentum distribution exhibits a plateau at small \(k<k_{1}\).
## V Conclusion
In this work, we have obtained the phase diagram of a one-dimensional driven-dissipative exciton polariton condensate at varying interaction, noise and pump magnitudes. We have identified three regimes competing with the KPZ one: at strong interactions, a soliton-patterned regime arises from the dynamical instability of the Bogoliubov excitations at linear level; at higher noise, a vortex-disordered regime sets in due to the proliferation of phase slips, which are space-time point-like defects; while at high pump, a defect-free reservoir-textured regime emerges due to the coupling between the reservoir and the condensate dynamics leading to a non-adiabatic behaviour. We have characterized the space-time coherence in all the regimes and identified the momentum distribution as a key observable to detect the soliton regime. The vortex regime is characterized by a linear increase of the phase-phase time correlations, while the first-order correlation function is weakly affected by vortices and still displays the same critical exponents as in the KPZ region. Based on current experimental possibilities, we plan to extend this analysis to two spatial dimensions, where additional regimes have been proposed to compete with the KPZ one [37; 38; 39; 40; 41; 42; 43]. Finally, we emphasize that the versatility of the exciton-polariton platform allows tuning a great variety of parameters, such as the drive amplitude, the sample temperature, the pump noise, the material composition, etc. Therefore, navigating through various phases and exploring the rich phase diagram evidenced in this work is experimentally within reach.
###### Acknowledgements.
FV acknowledges the support from France 2030 ANR QuantForm-UGA. AM acknowledges funding from the Quantum-SOPHA ANR Project ANR-21-CE47-0009. LC acknowledges support from Institut Universitaire de France (IUF). MR acknowledges support by the Centre of Quantum Technologies Exploratory Initiative programme. GF, SR, and JB acknowledge support from the Paris Ile-de-France Region in the framework of DIM QuanTIP, the French RENATECH network, the H2020-FETFLAG project PhoQs (820392), the European Research Council via projects StG ARQADIA (949730) and AdG ANAPOLIS (101054448).
Figure 12: (Color online) Schematic phase diagram of the different regimes, merging the results of Figs.4.b, 6.a and 16. The representation of the four regimes and their transitions is depicted by using four colors and different shadings. The \((\mu_{\text{th}},p)\) plane is based on the characterization of Sec. IV.2 at \(\sigma=0.1\sigma_{0}\). The thick solid line represents the limit of modulational instability. The \((p,\sigma)\) plane is based on the characterization of Sec. IV.3 and IV.4 at \(\mu_{\text{th}}=0.1~{}\text{meV}\). The \((\mu_{\text{th}},\sigma)\) plane is based on the characterization at \(p=1.16\) reported in appendix A.3. The dashed lines represent the onset of a finite defect density, while the dotted lines represent the ending of the KPZ time window in the first-order correlation function.
Figure 13: (Color online) Comparison of the momentum distribution \(n_{\text{k}}\) in the four regimes. The parameters are: \(p=1.26\), \(\mu_{\text{th}}=0.10~{}\text{meV}\), \(\sigma=0.1\) (KPZ regime); \(p=1.26\), \(\mu_{\text{th}}=0.30\) meV, \(\sigma=0.1\) (soliton regime); \(p=1.26\), \(\mu_{\text{th}}=0.10~{}\text{meV}\), \(\sigma=9.0\) (vortex regime); \(p=1.68\), \(\mu_{\text{th}}=0.10~{}\text{meV}\), \(\sigma=1.0\) (reservoir regime). The vertical axes are rescaled to facilitate the comparison in the low momentum sector. The prediction of the linear theory is denoted \(n_{k}^{(lin)}\) for the parameters of the KPZ regime.
## Appendix A Details on the soliton-patterned regime
### Relevant scales
In order to estimate the typical soliton spacing \(d_{s}\), we consider the equal-time first-order correlation function \(g^{(1)}(\Delta x,\Delta t=0)\). Its behavior for different pump powers is shown in Figure 14, where the space axis has been rescaled as \(\Delta x\rightarrow\Delta x\sqrt{p-1}\). The Gaussian decay is followed by oscillations which are due to the presence of long-lived soliton patterns. We estimate the typical soliton spacing \(d_{s}\) by taking the first minimum of \(|g^{(1)}(x,t=0)|\). We find that it depends on the pump as \(d_{\rm s}\sim(p-1)^{-1/2}\), as anticipated in the main text. Moreover, we checked the dependence of \(d_{s}\) on the diffusion constant \(\gamma_{2}\), finding \(d_{s}\sim\sqrt{\gamma_{2}}\).
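A minimal Python sketch of this estimate is given below; the dummy correlator (a Gaussian envelope modulated at a known period) merely stands in for the measured \(|g^{(1)}(\Delta x,\Delta t=0)|\).

```python
import numpy as np

def first_minimum(dx, g1_abs):
    """Estimate the soliton spacing d_s as the location of the first local
    minimum of |g^(1)(dx, dt=0)| on a discrete grid."""
    for i in range(1, len(dx) - 1):
        if g1_abs[i] < g1_abs[i - 1] and g1_abs[i] < g1_abs[i + 1]:
            return dx[i]
    return np.nan  # no local minimum found in the sampled range

# Dummy correlator: Gaussian decay modulated by soliton-induced oscillations
dx = np.linspace(0.0, 60.0, 1200)
d_s_true = 12.0
g1_abs = np.exp(-0.001 * dx**2) * (0.5 + 0.5 * np.cos(np.pi * dx / d_s_true))
print("estimated d_s:", first_minimum(dx, g1_abs))   # close to d_s_true
```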
### First-order transition
We present in Fig. 15 two cuts of Fig. 4, which shows the average density of the condensate and the average number of solitons: one as a function of interaction strength for different pump powers, and the other as a function of the pump power for different interaction strengths. At increasing interaction strength, we observe for all pump powers an abrupt jump of both \(\langle\rho\rangle\) and \(\langle n_{s}\rangle\) at a transition value \(\mu_{\rm tr}\sim 0.2\). Similarly, the curves as a function of \(p\) split into two clearly distinct sets for \(\mu_{\rm th}>\mu_{\rm tr}\) and \(\mu_{\rm th}<\mu_{\rm tr}\). Both observations indicate a first-order phase transition, as was reported in Ref. [27]. Note that in Ref. [38], an analogous transition was observed for a two-dimensional exciton-polariton condensate, as a consequence of the Benjamin–Feir instability, by tuning the pump power below the critical value for the instability.
### The (\(\mu_{\rm th}\), \(\sigma\)) diagram
The density of vortices \(\langle n_{v}\rangle\) computed in the \((\mu_{\rm th},\sigma)\) plane for a low pump value \(p=1.16\) is reported in Fig. 16. The first-order transition, shown at low noise in Sec. IV.2 and Appendix A.2, is not visible here since the number of noise-activated defects is already non-negligible for the lowest value \(\sigma=\sigma_{0}\) considered for this plot. A joint analysis of the momentum distributions (not shown here) performed at \(\sigma=\sigma_{0}\) has confirmed that the KPZ to soliton transition is still marked by a change of behaviour at finite momenta, and in particular the appearance of the steep power-law decay \(n(k)\sim k^{-\alpha}\) upon entering the soliton regime.
Figure 16: (Color online) Average vortex density in the \((\mu_{\rm th},\sigma)\) plane, for \(p=1.16\). The dashed line indicates the limit of Bogoliubov instability.
Figure 14: (Color online) (a) Equal-time spatial correlations \(g^{(1)}(\Delta x,\Delta t=0)\). The Gaussian \(e^{-a\Delta x^{2}}\) with fitted value \(a=0.03\) is plotted as a solid black line for comparison. (b) Typical soliton spacing \(d_{s}\) (in lattice sites), estimated as the first minimum of \(g^{(1)}(\Delta x,\Delta t=0)\).
Figure 15: (Color online) Average condensate density (a, b) and average density of solitons (c, d). (a, c) Fixed pump powers, equally spaced from \(p=1.05\) (dark) to \(p=2.0\) (light). (b, d) Fixed interaction strengths, equally spaced from blue \(\mu_{\rm th}=0.05\) meV to rose \(\mu_{\rm th}=0.6\) meV.
### Nature of the solitons
In the one dimensional system studied in Ref. [24], the authors report a chaotic dynamics characterized by nucleation and tree-like branching of defects in space-time. In our case, we do not observe the same branching mechanism. The model we consider in this work differs from the one studied in Ref. [24] by the sign of the mass (negative here), the interaction energy (reservoir term \(2g_{R}n_{R}\) only) and the presence of stochastic fluctuations during the whole evolution. It follows that the dynamics is qualitatively different. In our case, we rather observe the emergence of metastable soliton pairs, as we show in Fig. 17. To explain this feature, we calculate the fluid velocity and current, given respectively by
\[u_{x}=\frac{\hbar}{m}\partial_{x}\theta \tag{10}\] \[J=-i\frac{\hbar}{m}\left(\psi^{*}\partial_{x}\psi-\partial_{x} \psi^{*}\psi\right). \tag{11}\]
A typical configuration is shown in Fig. 17. Both the velocity and the current give a zero total contribution across the soliton pair, which hence does not split apart. The soliton pair is stable in our case, contrary to Ref. [24].
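For concreteness, a minimal Python sketch of how Eqs. (10)-(11) can be evaluated on a discretized field is given below; the toy profile mimicking a soliton pair and the value used for \(\hbar/m\) are placeholders, not simulation output.

```python
import numpy as np

# Evaluate the fluid velocity and current of Eqs. (10)-(11) from a discretized
# field psi(x); hbar_m stands for hbar/m (placeholder value).
hbar_m = 1.0
x = np.linspace(-30.0, 30.0, 2048)
dx = x[1] - x[0]
psi = np.tanh(x + 5.0) * np.tanh(-(x - 5.0)) + 0.05j   # toy dark-soliton pair

theta = np.unwrap(np.angle(psi))
u_x = hbar_m * np.gradient(theta, dx)                             # Eq. (10)
dpsi = np.gradient(psi, dx)
J = -1j * hbar_m * (np.conj(psi) * dpsi - np.conj(dpsi) * psi)    # Eq. (11)

# Net contributions across the pair: both essentially vanish for this profile
print("net velocity:", np.sum(u_x) * dx, "  net current:", np.sum(J.real) * dx)
```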
|
2303.16842 | A new type of stable shock formation in gas dynamics | From an open set of initial data, we construct a family of classical
solutions to the 1D nonisentropic compressible Euler equations which form
$C^{0,\nu}$ cusps as a first singularity, for any $\nu \in [1/2,1)$. For this
range of $\nu$, this is the first result demonstrating the stable formation of
such $C^{0,\nu}$ cusp-type singularities, also known as pre-shocks. The proof
uses a new formulation of the differentiated Euler equations along the fast
acoustic characteristic, and relies on a novel set of $L^p$ energy estimates
for all $1 < p < \infty$, which may be of independent interest. | Isaac Neal, Calum Rickard, Steve Shkoller, Vlad Vicol | 2023-03-29T16:48:08Z | http://arxiv.org/abs/2303.16842v2 | # A new type of stable shock formation in gas dynamics
###### Abstract.
From an open set of initial data, we construct a family of classical solutions to the 1D nonisentropic compressible Euler equations which form \(C^{0,\nu}\) cusps as a first singularity, for any \(\nu\in[\frac{1}{2},1)\). For this range of \(\nu\), this is the first result demonstrating the stable formation of such \(C^{0,\nu}\) cusp-type singularities, also known as _pre-shocks_. The proof uses a new formulation of the differentiated Euler equations along the fast acoustic characteristic, and relies on a novel set of \(L^{p}\) energy estimates for all \(1<p<\infty\), which may be of independent interest.
## 1. Introduction
A core line of inquiry in the study of nonlinear partial differential equations is the analysis of the finite-time breakdown of classical solutions. For systems of conservation laws like the compressible Euler equations, the prototypical form of finite-time breakdown is a _shock_, where the smooth solution stays continuous but steepens to a cusp at a point in spacetime known as the _pre-shock_, after which time it can no longer exist as a smooth solution. In recent decades it has become apparent that having access to the precise behavior of the solution at the time of the first singularity (the _shock profile_), is key to determining how the resulting shock waves and weak singularities develop and propagate. Here we study the shock formation problem for the Euler equations of one-dimensional ideal gases, and we provide a detailed description of a family of _stable_ shocks which resemble \(-\mathrm{sgn}\left(x\right)\lvert x\rvert^{\nu}\) near the pre-shock for \(\nu\in[\frac{1}{2},1)\).
Here and throughout the paper, by the stability of the shock formation process we mean that for arbitrary sufficiently small and smooth perturbations (with respect to a suitable topology) of the initial data, the same type of pre-shock forms, possibly at a different location in space and time (which may be computed exactly), and with the same \(C^{0,\nu}\) shock profile (up to dilations).
### The Euler equations
The compressible Euler equations in one space dimension are given by
\[\partial_{t}(\rho u)+\partial_{y}(\rho u^{2}+p) =0\,, \tag{1.1a}\] \[\partial_{t}\rho+\partial_{y}(\rho u) =0\,, \tag{1.1b}\] \[\partial_{t}E+\partial_{y}((p+E)u) =0\,, \tag{1.1c}\]
where the unknowns \(u,\rho,E,\) and \(p\) are scalar functions defined on \(\mathbb{R}\times\mathbb{R}\) or \(\mathbb{T}\times\mathbb{R}\): \(u\) is the fluid velocity, \(\rho\) is the (everywhere positive) fluid density, \(E\) is the specific total energy, and \(p\) is the pressure. To close the system, one must introduce an equation of state that relates the internal energy \(E-\frac{1}{2}\rho u^{2}\) to \(p\) and \(\rho\). If the fluid is an _ideal gas_, the equation of state is
\[p=(\gamma-1)(E-\tfrac{1}{2}\rho u^{2}), \tag{1.2}\]
where \(\gamma>1\) is a fixed constant called the _adiabatic exponent_.
For ideal gases, the specific entropy, \(S\), satisfies the relation
\[p=\tfrac{1}{\gamma}\rho^{\gamma}e^{S}. \tag{1.3}\]
If we introduce the parameter \(\alpha:=\frac{\gamma-1}{2}\) and define the (renormalized) sound speed1 to be
Footnote 1: The actual sound speed in a compressible fluid is \(c=\sqrt{\frac{\partial p}{\partial\rho}}\).
\[\sigma=\tfrac{1}{\alpha}\sqrt{\frac{\partial p}{\partial\rho}}=\tfrac{1}{ \alpha}e^{S/2}\rho^{\alpha}, \tag{1.4}\]
then we can rewrite the ideal gas equations (1.1)-(1.2) in terms of \(u,\sigma,\) and \(S\) as follows:
\[\partial_{t}u+u\partial_{y}u+\alpha\sigma\partial_{y}\sigma =\tfrac{\alpha}{2\gamma}\sigma^{2}\partial_{y}S, \tag{1.5a}\] \[\partial_{t}\sigma+u\partial_{y}\sigma+\alpha\sigma\partial_{y}u =0, \tag{1.5b}\] \[\partial_{t}S+u\partial_{y}S =0. \tag{1.5c}\]
In this paper, we prove the following theorem:
**Theorem 1.1** (**Main Result, abbreviated)**.: _For any choice of adiabatic exponent \(\gamma>1\), and parameter \(\beta\geq 1\), there is an open set (a ball of radius \(\varepsilon\ll 1\) with respect to the \(W^{2,\frac{\beta}{\beta-1}}(\mathbb{T})\) topology2) of initial data \((u_{0},\sigma_{0},S_{0})\), for which the unique local-in-time classical solution \((u,\sigma,S)\) of the Cauchy problem for (1.5) forms a gradient blowup singularity at a computable time \(T_{*}\approx\frac{2}{1+\alpha}\). Furthermore, at time \(T_{*}\) the solution has the profile_
Footnote 2: See the statement of Theorem 3.1, which in fact gives a weaker assumption for the dominant Riemann variable.
\[u(y,T_{*}) =u(y_{*},T_{*})-\Big{(}(1+\tfrac{1}{\beta})^{\frac{\beta}{\beta+1} }+\mathcal{O}(\varepsilon)\Big{)}\mathrm{sgn}\,(y-y_{*})|y-y_{*}|^{\frac{\beta }{\beta+1}}+\big{(}1+\mathcal{O}(\varepsilon)\big{)}(y-y_{*})\] \[\sigma(y,T_{*}) =\sigma(y_{*},T_{*})-\Big{(}(1+\tfrac{1}{\beta})^{\frac{\beta}{ \beta+1}}+\mathcal{O}(\varepsilon)\Big{)}\mathrm{sgn}\,(y-y_{*})|y-y_{*}|^{ \frac{\beta}{\beta+1}}+\big{(}1+\mathcal{O}(\varepsilon)\big{)}(y-y_{*})\] \[S(y,T_{*}) =S(y_{*},T_{*})+\partial_{y}S(y_{*},T_{*})(y-y_{*})+\mathcal{O} \Big{(}|y-y_{*}|^{\frac{\beta+2}{\beta+1}}\Big{)}\]
_for \(y\) in a neighborhood of diameter \(\approx 1\) about a point \(y_{*}\) which is computable from the initial data._
The above theorem provides the first constructive proof of stable \(C^{\frac{\beta}{\beta+1}}\)-cusp type singularity formation for \(\beta\geq 1\). We refer to § 3.2 and Theorem 3.1 below for the precise assumptions on the initial data, and the precise statement of our main result.
### Prior results
It has been known for some time that solutions of the Euler equations and more general systems of hyperbolic PDE can form singularities in finite time from smooth initial data, see the seminal papers [14, 16, 18, 22, 24]. These classical results were non-constructive and asserted a breakdown of smoothness3 without a description of the actual mechanism of singularity formation. See [6, 10, 19] for further references.
Footnote 3: These arguments are based on an ODE comparison-principle (which necessitates a priori assumptions on the boundedness of the density and the continuity of the gradient of the solution) with a Riccati equation which blows up in finite time, thus yielding a proof by contradiction.
Recently, a number of constructive _shock formation_ results have been established [1, 2, 3, 4, 5, 7, 8, 15, 17, 20, 21, 23, 25]. These constructive results show that for an appropriate class of smooth initial data the _first_ gradient singularity is of shock-type and no other singularity can occur prior. The solution at this first singularity, i.e. the _pre-shock_, only possesses limited Holder regularity, which describes the type of cusp that forms. To date, only the \(C^{\frac{1}{3}}\) Holder class pre-shock (corresponding to \(\beta=\frac{1}{2}\) in the language of Theorem 1.1) has been extensively studied [1, 3, 4, 5, 7, 15, 17, 23], as this \(C^{\frac{1}{3}}\) cusp singularity emerges in a stable fashion from a large open set of smooth and _generic_ initial data.
To the best of our knowledge, for the compressible Euler dynamics there are no results establishing the formation of \(C^{\frac{\beta}{\beta+1}}\)-cusp type pre-shocks with \(\beta>\frac{1}{2}\), corresponding to Holder exponents _strictly larger_ than \(\frac{1}{3}\). In fact, the intuition based on the study of the Burgers equation4 only suggests the possible emergence of a \(C^{\frac{1}{2k+1}}\)-cusp, corresponding to \(\beta=\frac{1}{2k}\) with \(k\geq 1\) an integer. Moreover, one expects in these particular cases that the shock formation process is generically unstable, but has finite-codimension stability. This intuition was successfully implemented in the context of the 2D Euler equations with azimuthal symmetry to establish the finite time formation of a \(C^{\frac{1}{5}}\) pre-shock (corresponding to \(\beta=\frac{1}{4}\)) in [2]. The finite-codimension stable
formation of \(C^{\frac{1}{2k+1}}\)-cusp type pre-shocks (corresponding to \(\beta=\frac{1}{2k}\)) for all integers \(k\geq 2\) was established recently in [13].
### New Ideas
This paper makes use of the _differentiated Riemann variables_ (4.11) first introduced in [1] and later utilized in [23] to study the generic \(C^{\frac{1}{3}}\) pre-shock formation from \(C^{4}\) initial data. The smoothness of the data employed in [1, 23] permitted the use of a pointwise characteristic-based approach. On the other hand, the initial data required for the formation of \(C^{\frac{\beta}{\beta+1}}\) pre-shocks with \(\beta\geq 1\) have limited regularity and, in particular, such data is not even \(C^{2}\) (see § 3.2). As such, we have devised a novel approach to this analysis which consists of the following elements:
1. Both the dominant and subdominant differentiated Riemann variables, together with the entropy, are studied along the _fast acoustic characteristic_ (rather than the actual characteristic families which propagate each disturbance). This allows us to _effectively freeze_ both the dominant Riemann variable and the entropy in this flow, reducing the required bounds to only the subdominant Riemann variable (and its derivatives).
2. \(L^{p}\)-based energy estimates are then obtained for this system of variables, assuming limited regularity initial data in the Sobolev space \(W^{2,p}\). This energy method allows for a spatially-global analysis for functions whose second derivatives are not continuous, and produces bounds which are uniform in \(p\) and time, and hence allow us to pass to the limit as \(p\to\infty\).
3. We show that there exists an open set of initial data in the \(W^{2,p}\) topology for which \(C^{\frac{\beta}{\beta+1}}\) pre-shock formation occurs for \(\beta\geq 1\). This establishes that this new class of cusp-type first singularities is stable under perturbation of the initial data by small disturbances of class \(W^{2,\frac{\beta}{\beta-1}}(\mathbb{T})\) (or smoother).
### Riemann variables
The Riemann variables \(w\) and \(z\) are defined by
\[w:=u+\sigma\,,\qquad\text{and}\qquad z:=u-\sigma. \tag{1.6}\]
If we further adopt the notation \(k:=S\), then the Euler equations become
\[\partial_{t}w+\lambda_{3}\partial_{y}w =\tfrac{\alpha}{2\gamma}\sigma^{2}\partial_{y}k,\] \[\partial_{t}z+\lambda_{1}\partial_{y}z =\tfrac{\alpha}{2\gamma}\sigma^{2}\partial_{y}k,\] \[\partial_{t}k+\lambda_{2}\partial_{y}k =0, \tag{1.7}\]
where
\[\lambda_{1} :=u-\alpha\sigma=\tfrac{1-\alpha}{2}w+\tfrac{1+\alpha}{2}z,\] \[\lambda_{2} :=u=\tfrac{1}{2}w+\tfrac{1}{2}z,\] \[\lambda_{3} :=u+\alpha\sigma=\tfrac{1+\alpha}{2}w+\tfrac{1-\alpha}{2}z.\]
Define \(\psi,\phi,\) and \(\eta\) to be the flows of \(\lambda_{1},\lambda_{2},\) and \(\lambda_{3}\), respectively.
## 2. Motivation: Burgers' Equation
Our motivation comes from a simple solution to Burgers' equation,
\[w_{t}+ww_{y}=0, \tag{2.1}\]
which is a special case of (1.7) where \(\alpha=1\), \(z_{0}\equiv 0\), and \(k_{0}\) is constant. In addition to being a special case of the 1D isentropic Euler equations expressed in Riemann variables, Burgers' equation is the archetypal equation for shock formation and development.
If \(w\) is a solution of (2.1) and \(\eta\) is the flow of \(w\), i.e. \(\eta\) solves
\[\tfrac{d}{dt}\eta(x,t)=w(\eta(x,t),t),\qquad\eta(x,0)=x,\]
then \(w\) must remain constant along the flow lines of \(\eta\). From this it follows that if \(w(x,0)=w_{0}(x)\) then
\[\eta(x,t)=x+tw_{0}(x). \tag{2.2}\]
When \(w_{0}\) is \(C^{1}\) and \(w_{0}^{\prime}\) attains its global minimum at \(x_{*}\) the solution of Burgers' equation remains \(C^{1}\) smooth up to time
\[T_{*}=\left\{\begin{array}{ll}-\frac{1}{w_{0}^{\prime}(x_{*})}&w_{0}^{\prime}( x_{*})<0\\ +\infty&w_{0}^{\prime}(x_{*})\geq 0\end{array}\right.,\]
and in the case where \(w_{0}^{\prime}(x_{*})<0\) we have \(\eta_{x}(x_{*},T_{*})=0\) and \(\partial_{y}w(\eta(x_{*},T_{*}),T_{*})=-\infty\).
### Key example
Let \(\beta>0\) and let \(w\) be the solution of Burgers' equation (2.1) on \(\mathbb{R}\) with initial data
\[w_{0}(x)=-x+Cx|x|^{\frac{1}{\beta}}\]
for some \(C>0\). We compute that
\[w_{0}^{\prime}(x)=-1+(1+\tfrac{1}{\beta})C|x|^{\frac{1}{\beta}},\]
so \(w_{0}^{\prime}\) has a unique global minimum at \(x=0\), and it blows up at time \(T_{*}=1\). At the blowup time, we have
\[\eta(x,1)=Cx|x|^{\frac{1}{\beta}}.\]
Therefore, if \(y=\eta(x,1)\) then
\[x=\operatorname{sgn}\left(y\right)\bigl{(}\tfrac{|y|}{C}\bigr{)}^{\frac{\beta} {\beta+1}},\]
and
\[w(y,1)=w_{0}(x)=-\operatorname{sgn}\left(y\right)C^{-\frac{\beta}{\beta+1}}|y| ^{\frac{\beta}{\beta+1}}+y.\]
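This computation is easily checked numerically by transporting the data along the characteristics; a short Python sketch (with arbitrary illustrative choices of \(\beta\) and \(C\)) is:

```python
import numpy as np

beta, C = 2.0, 0.7
x = np.linspace(-1.0, 1.0, 2001)               # Lagrangian labels
w0 = -x + C * x * np.abs(x) ** (1.0 / beta)    # initial data of the key example

# Characteristics of Burgers: eta(x, t) = x + t*w0(x); here at t = T_* = 1
y = x + 1.0 * w0                               # equals C x |x|^{1/beta}
w_profile = w0                                 # w is constant along characteristics

# Closed-form profile at the blowup time
w_exact = -np.sign(y) * C ** (-beta / (beta + 1)) * np.abs(y) ** (beta / (beta + 1)) + y

print("max deviation:", np.max(np.abs(w_profile - w_exact)))   # ~ machine precision
```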
### Stability for \(\beta\geq 1\)
When \(\beta\geq 1\), the geometry of the shock profile \(w(\cdot,T_{*})\) obtained in § 2.1 is stable under perturbations in \(C^{1,\frac{1}{\beta}}\). Let \(\mathcal{E}\in C^{1,\frac{1}{\beta}}_{\mathrm{loc}}(\mathbb{R})\) be such that
* \(\mathcal{E}^{\prime}(0)<1\), and
* \(\left[\mathcal{E}^{\prime}\right]_{C^{0,\frac{1}{\beta}}}<(1+\frac{1}{\beta})C\),
and let
\[w_{0}(x)=-x+Cx|x|^{\frac{1}{\beta}}+\mathcal{E}(x).\]
Then
\[w_{0}^{\prime}(x) =-1+\mathcal{E}^{\prime}(0)+(1+\tfrac{1}{\beta})C|x|^{\frac{1}{\beta}}+(\mathcal{E}^{\prime}(x)-\mathcal{E}^{\prime}(0))\] \[\geq-1+\mathcal{E}^{\prime}(0)+\bigl{[}(1+\tfrac{1}{\beta})C-\left[\mathcal{E}^{\prime}\right]_{C^{0,\frac{1}{\beta}}}\bigr{]}|x|^{\frac{1}{\beta}}.\]
So \(w_{0}^{\prime}\) still has a unique global minimum at \(x=0\). Since \(\mathcal{E}^{\prime}(0)<1\), the minimum of \(w_{0}^{\prime}\) is negative and we have
\[T_{*}=\frac{1}{1-\mathcal{E}^{\prime}(0)}.\]
For all \(x\) we have
\[\mathcal{E}(x)=\mathcal{E}(0)+\mathcal{E}^{\prime}(0)x+h(x)x|x|^{\frac{1}{ \beta}}\]
where
\[h(x)=\frac{\int_{0}^{x}\mathcal{E}^{\prime}(x^{\prime})-\mathcal{E}^{\prime}(0 )\;dx^{\prime}}{x|x|^{\frac{1}{\beta}}}.\]
It is immediate that
\[|h(x)|\leq\tfrac{\beta}{1+\beta}[\mathcal{E}^{\prime}]_{C^{0,\frac{1}{\beta}} }<C\]
everywhere. If we let \(y=\eta(x,T_{*})\), then we have
\[y=x+T_{*}w_{0}(x)=T_{*}\mathcal{E}(0)+T_{*}(C+h(x))x|x|^{\frac{1}{\beta}}=:y_{*} +a(x)x|x|^{\frac{1}{\beta}}.\]
Since \(|h(x)|<C\) everywhere, we have \(a(x)>0\) everywhere, so \(\operatorname{sgn}\left(y-y_{*}\right)=\operatorname{sgn}\left(x\right)\) and
\[x=\operatorname{sgn}\left(y-y_{*}\right)\bigl{|}\frac{y-y_{*}}{a(x)}\bigr{|}^{ \frac{\beta}{\beta+1}}.\]
It now follows that
\[w(y,T_{*})=w_{0}(x) =-x+Cx|x|^{\frac{1}{\beta}}+\mathcal{E}(x)\] \[=\mathcal{E}(0)+(\mathcal{E}^{\prime}(0)-1)x+(C+h(x))\text{sgn} \left(x\right)\lvert x\rvert^{1+\frac{1}{\beta}}\] \[=\mathcal{E}(0)-\frac{(1-\mathcal{E}^{\prime}(0))}{\lvert a(x) \rvert^{\frac{\beta}{\beta+1}}}\text{sgn}\left(y-y_{*}\right)\lvert y-y_{*} \rvert^{\frac{\beta}{\beta+1}}+(C+h(x))\text{sgn}\left(y-y_{*}\right)\bigl{|} \frac{y-y_{*}}{a(x)}\bigr{|}\] \[=\mathcal{E}(0)-\frac{(1-\mathcal{E}^{\prime}(0))}{\lvert a(x) \rvert^{\frac{\beta}{\beta+1}}}\text{sgn}\left(y-y_{*}\right)\lvert y-y_{*} \rvert^{\frac{\beta}{\beta+1}}+(1-\mathcal{E}^{\prime}(0))(y-y_{*}).\]
We have proven the following:
**Proposition 2.1** (**Motivating Example: Stable Shock Formation for Burgers)**.: _If \(\beta\geq 1\), \(C>0\), and_
\[w_{0}=-x+Cx|x|^{\frac{1}{\beta}}+\mathcal{E}(x),\]
_where \(\mathcal{E}\in C^{1,\frac{1}{\beta}}_{loc}(\mathbb{R})\) with \(\mathcal{E}^{\prime}(0)<1\) and \(\left[\mathcal{E}^{\prime}\right]_{C^{0,\frac{1}{\beta}}}<(1+\frac{1}{\beta})C\) then Burgers' equation (2.1) has a unique \(C^{1}\) solution, \(w\), with \(w\bigr{|}_{t=0}=w_{0}\) on \(\mathbb{R}\times[0,T_{*})\), where \(T_{*}=\frac{1}{1-\mathcal{E}^{\prime}(0)}\). Moreover, \(w\) is continuous up to time \(T_{*}\) and at time \(T_{*}\) we have_
\[w(y,T_{*})=\mathcal{E}(0)-\operatorname{sgn}\left(y-y_{*}\right)b(y)\lvert y- y_{*}\rvert^{\frac{\beta}{\beta+1}}+(1-\mathcal{E}^{\prime}(0))(y-y_{*}) \tag{2.3}\]
_where \(y_{*}:=\frac{\mathcal{E}(0)}{1-\mathcal{E}^{\prime}(0)}\) and the function \(b\) satisfies_
\[\frac{\left(1-\mathcal{E}^{\prime}(0)\right)^{\frac{2\beta+1}{\beta+1}}}{\left(C+\frac{\beta}{\beta+1}\left[\mathcal{E}^{\prime}\right]_{C^{0,\frac{1}{\beta}}}\right)^{\frac{\beta}{\beta+1}}}<b(y)<\frac{\left(1-\mathcal{E}^{\prime}(0)\right)^{\frac{2\beta+1}{\beta+1}}}{\left(C-\frac{\beta}{\beta+1}\left[\mathcal{E}^{\prime}\right]_{C^{0,\frac{1}{\beta}}}\right)^{\frac{\beta}{\beta+1}}}\]
_for all \(y\in\mathbb{R}\)._
In this paper, we will generalize Proposition 2.1 by letting the parameter \(\alpha\) be any positive number (i.e. letting \(\gamma\) be any number greater than \(1\)) and by letting \(z_{0}\) and \(k_{0}-\fint_{\mathbb{T}}k_{0}\) be sufficiently small instead of identically zero. We will also work on the torus \(\mathbb{T}\) instead of \(\mathbb{R}\). See Theorem 3.1 below for details.
### Finite-Codimension Stability for \(0<\beta<1\), \(\beta\neq\frac{1}{2}\)
In the case where \(0<\beta<1\) and \(\beta\neq\frac{1}{2}\), the geometry of the shock profile obtained in § 2.1 is no longer stable under smooth perturbations. To see this, let \(0<\beta<1\) and let the perturbation now be \(\mathcal{E}(x)=\frac{\varepsilon}{2}x^{2}\varphi(x)\) where \(\varepsilon\neq 0\) is a small constant and \(\varphi\) is a smooth bump function that is identically 1 for all \(\lvert x\rvert\leq R\), for some \(R\gg 1\). Then one can check that \(w_{0}^{\prime}\) has a unique global minimum at the point
\[x_{*}=-\left\lvert\frac{\varepsilon\beta^{2}}{(\beta+1)C}\right\rvert^{\frac{ \beta}{1-\beta}},\]
and
\[w_{0}^{\prime}(x_{*})=-1-\frac{|\varepsilon|^{\frac{1}{1-\beta}}}{\big{(}(1+\frac{1}{\beta})C\big{)}^{\frac{\beta}{1-\beta}}}\biggl{[}\beta^{\frac{\beta}{1-\beta}}-\beta^{\frac{1}{1-\beta}}\biggr{]}<0.\]
So the solution \(w\) still blows up in finite time, but since \(x_{*}\neq 0\) one can now check that if \(y=\eta(x,T_{*})\) and \(y_{*}=\eta(x_{*},T_{*})\) we have
\[y-y_{*}\sim(x-x_{*})^{3}\]
in a sufficiently small neighborhood5 of \(x_{*}\) and therefore
Footnote 5: One can check that the implicit constants in the expression \(y-y_{*}\sim(x-x_{*})^{3}\) can be made \(\sim|\varepsilon|^{1-\frac{\beta}{1-\beta}}\), and therefore they blow up as \(\varepsilon\to 0\) when \(\beta>\frac{1}{2}\), remain close to \(1\) when \(\beta=\frac{1}{2}\), and vanish as \(\varepsilon\to 0\) when \(\beta<\frac{1}{2}\).
\[w(y,T_{*})-w(y_{*},T_{*})\sim(y-y_{*})^{\frac{1}{3}}\]
for \(|y-y_{*}|\) small enough. When \(\beta=\frac{1}{2}\), this is still the same type of shock profile as before, but for all other choices of \(\beta\) it is different. Perturbations of the form we have specified can be made arbitrarily small in all \(C^{N}\) norms by sending \(\varepsilon\to 0\), so for \(0<\beta<1,\beta\neq\frac{1}{2}\), the shock formation described in § 2.1 is unstable.
However, when \(0<\beta<1\), the example in § 2.1 is _finite-codimension-stable_ with respect to suitable perturbations, as we shall show next. Let \(1+\frac{1}{\beta}=n+\delta\), where \(n\in\mathbb{Z}\) and \(\delta\in(0,1]\). Since \(\beta<1\), we have \(n\geq 2\). Let \(\mathcal{E}\in C^{n,\delta}_{\text{loc}}(\mathbb{R})\) satisfy
* \(\mathcal{E}^{\prime}(0)<1\),
* \(\partial_{x}^{j}\mathcal{E}(0)=0\) for all \(2\leq j\leq n\),
* \([\partial_{x}^{n}\mathcal{E}]_{C^{0,\delta}}<C(1+\frac{1}{\beta})(1+\frac{1}{ \beta}-1)\cdots(1+\frac{1}{\beta}-(n-1))\),
and choose initial data
\[w_{0}(x)=-x+Cx|x|^{\frac{1}{\beta}}+\mathcal{E}(x).\]
We compute that
\[w_{0}^{\prime}(x) =-1+\mathcal{E}^{\prime}(0)+(1+\tfrac{1}{\beta})C|x|^{\frac{1}{ \beta}}+(\mathcal{E}^{\prime}(x)-\mathcal{E}^{\prime}(0))\] \[\geq-1+\mathcal{E}^{\prime}(0)+\bigg{[}(1+\tfrac{1}{\beta})C- \tfrac{[\partial_{x}^{n}\mathcal{E}]_{C^{0,\delta}}}{\tfrac{1}{\beta}(\tfrac{1 }{\beta}-1)\cdots(\tfrac{1}{\beta}+2-n)}\bigg{]}|x|^{\frac{1}{\beta}}.\]
So \(w_{0}^{\prime}\) has a unique global minimum at \(x=0\). Since \(\mathcal{E}^{\prime}(0)<1\), the minimum of \(w_{0}^{\prime}\) is negative and we have
\[T_{*}=\frac{1}{1-\mathcal{E}^{\prime}(0)}\]
as before. For all \(x\) we have
\[\mathcal{E}(x)=\mathcal{E}(0)+\mathcal{E}^{\prime}(0)x+h(x)x|x|^{\frac{1}{ \beta}}\]
where
\[h(x)=\frac{\int_{0}^{x}\int_{0}^{x_{1}}\cdots\int_{0}^{x_{n-1}}\partial_{x}^{ n}\mathcal{E}(x_{n})\,dx_{n}dx_{n-1}\ldots dx_{1}}{x|x|^{\frac{1}{\beta}}}.\]
It is immediate that
\[|h(x)|\leq\frac{[\partial_{x}^{n}\mathcal{E}]_{C^{0,\delta}}}{(1+\frac{1}{ \beta})(1+\frac{1}{\beta}-1)\cdots(1+\frac{1}{\beta}-(n-1))}<C\]
for all \(x\in\mathbb{R}\). Therefore, if \(y=\eta(x,T_{*})\) then
\[y=x+T_{*}w_{0}(x)=T_{*}\mathcal{E}(0)+T_{*}(C+h(x))x|x|^{\frac{1}{\beta}}=:y_{* }+a(x)x|x|^{\frac{1}{\beta}}.\]
Since \(|h(x)|<C\) everywhere, we have \(a(x)>0\) everywhere, so \(\operatorname{sgn}\left(y-y_{*}\right)=\operatorname{sgn}\left(x\right)\) and
\[x=\operatorname{sgn}\left(y-y_{*}\right)\big{|}\frac{y-y_{*}}{a(x)}\big{|}^{ \frac{\beta}{\beta+1}}.\]
It now follows that
\[w(y,T_{*})=w_{0}(x) =-x+Cx|x|^{\frac{1}{\beta}}+\mathcal{E}(x)\] \[=\mathcal{E}(0)+(\mathcal{E}^{\prime}(0)-1)x+(C+h(x))\text{sgn}\left(x\right)\lvert x\rvert^{1+\frac{1}{\beta}}\] \[=\mathcal{E}(0)-\tfrac{(1-\mathcal{E}^{\prime}(0))}{\lvert a(x)\rvert^{\frac{\beta}{\beta+1}}}\text{sgn}\left(y-y_{*}\right)\lvert y-y_{*}\rvert^{\frac{\beta}{\beta+1}}+(C+h(x))\text{sgn}\left(y-y_{*}\right)\bigl{\lvert}\tfrac{y-y_{*}}{a(x)}\bigr{\rvert}\] \[=\mathcal{E}(0)-\tfrac{(1-\mathcal{E}^{\prime}(0))}{\lvert a(x)\rvert^{\frac{\beta}{\beta+1}}}\text{sgn}\left(y-y_{*}\right)\lvert y-y_{*}\rvert^{\frac{\beta}{\beta+1}}+(1-\mathcal{E}^{\prime}(0))(y-y_{*}).\]
\[w(y,t)=(T-t)^{\beta}W\big{(}\tfrac{y-y_{0}}{(T-t)^{\beta+1}}\big{)} \tag{2.6}\]
where \(W\in C^{1}_{\text{loc}}(\mathbb{R};\mathbb{R})\) is the _similarity profile_, \(T\in\mathbb{R}\) is the fixed blowup time, and \(\beta>0\)6 is the _scaling exponent_. A function \(w\) of the form (2.6) solves Burgers' equation if and only if the similarity profile solves the _similarity equation_
Footnote 6: We are ignoring the \(\beta=0\) case, where the self-similar profile is given by \(W(x)=-x\). In this case, the profile doesn’t satisfy the matching condition \(W\sim|x|^{\frac{\beta}{\beta+1}}\) as \(x\to\pm\infty\), and so it is different. See [11] for more.
\[-\beta W+(1+\beta)xW_{x}+WW_{x}=0. \tag{2.7}\]
Consider the implicit equation
\[x=-W-CW|W|^{\frac{1}{\beta}} \tag{2.8}\]
where \(C>0\) is a constant.7 Since \(\frac{dx}{dW}=-1-C(1+\frac{1}{\beta})|W|^{\frac{1}{\beta}}\leq-1\) everywhere, it follows that \(W\to x\) is a \(C^{1}\) diffeomorphism of \(\mathbb{R}\) and therefore (2.8) defines a \(C^{1}\) similarity profile \(W\) on all of \(\mathbb{R}\) which is also strictly decreasing. In fact, \(W\) is locally \(C^{n,\delta}\), where \(n\in\mathbb{Z}^{+},\delta\in(0,1]\), and \(1+\frac{1}{\beta}=n+\delta\). It is also easy to check that this profile \(W\) solves the similarity equation (2.7).
Footnote 7: The previous literature up to this point has worked with the implicit equation \(x=-W-CW^{1+\frac{1}{\beta}}\), see e.g. [12, §11.2], [11, §2.4], [9]. However, this equation does not define a global solution for most values of \(\beta\). For example, if \(\beta=1\) then solution \(W\) is not single-valued and there is no solution for \(x>\frac{1}{4C}\). Our implicit equation remedies these issues, and gives the correct regularity which the similarity profile should have near \(x=0\). E.g., when \(\beta=1\) the similarity profile will be \(C^{1,1}\) but not \(C^{2}\).
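As an illustration (not part of the paper's argument), the profile defined implicitly by (2.8) can be computed by bracketed root-finding and checked against the similarity equation (2.7); a minimal Python sketch using SciPy, with arbitrary values of \(\beta\) and \(C\), is:

```python
import numpy as np
from scipy.optimize import brentq

# Solve the implicit equation (2.8), x = -W - C*W*|W|^{1/beta}, for W(x),
# and verify the similarity equation (2.7) numerically.
beta, C = 1.0, 0.5

def profile(x):
    f = lambda W: -W - C * W * np.abs(W) ** (1.0 / beta) - x
    # the map W -> x is strictly decreasing, so [-1-|x|, 1+|x|] brackets the root
    return brentq(f, -1.0 - abs(x), 1.0 + abs(x))

xs = np.linspace(-5.0, 5.0, 201)
W = np.array([profile(x) for x in xs])
Wx = np.gradient(W, xs)
residual = -beta * W + (1.0 + beta) * xs * Wx + W * Wx   # Eq. (2.7), should be ~0
print("max |residual| (finite-difference accuracy):", np.max(np.abs(residual)))
```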
Let \(W\) be the solution to (2.8). To determine the growth of \(W\) at infinity, we first note that \(x=-W(1+C|W|^{\frac{1}{\beta}})\), so \(\operatorname{sgn}\left(x\right)=-\operatorname{sgn}\left(W\right)\) everywhere. Therefore,
\[\frac{-\operatorname{sgn}\left(x\right)C^{-\frac{\beta}{\beta+1}}|x|^{\frac{\beta}{\beta+1}}}{W(x)}=\frac{C^{-\frac{\beta}{\beta+1}}|x|^{\frac{\beta}{\beta+1}}}{|W|}=\left|\frac{W(1+C|W|^{\frac{1}{\beta}})}{C|W|^{1+\frac{1}{\beta}}}\right|^{\frac{\beta}{\beta+1}}=\left|\frac{1}{C|W|^{\frac{1}{\beta}}}+1\right|^{\frac{\beta}{\beta+1}}.\]
Since \(|W|\to\infty\) as \(|x|\to\infty\), we have
\[\frac{-\operatorname{sgn}\left(x\right)C^{-\frac{\beta}{\beta+1}}|x|^{\frac{ \beta}{\beta+1}}}{W(x)}\to 1\qquad\text{ as }x\to\pm\infty. \tag{2.9}\]
From (2.9) it follows that if \(w\) is defined by (2.6) with our choice of \(W\) then
\[\lim_{t\to T^{-}}w(y,t)=-\operatorname{sgn}\left(y-y_{0}\right)C^{-\frac{\beta }{\beta+1}}|y-y_{0}|^{\frac{\beta}{\beta+1}}. \tag{2.10}\]
Let us now examine the behavior of \(W\) near \(x=0\). Since \(x=-W(1+C|W|^{\frac{1}{\beta}})\), we compute that in a neighborhood of \(x=0\) we have
\[-x+Cx|x|^{\frac{1}{\beta}} =W+CW|W|^{\frac{1}{\beta}}-CW|W|^{\frac{1}{\beta}}(1+C|W|^{\frac {1}{\beta}})^{\frac{1}{\beta}}-C^{2}W|W|^{\frac{2}{\beta}}(1+C|W|^{\frac{1}{ \beta}})^{\frac{1}{\beta}}\] \[=W-(1+\tfrac{1}{\beta})C^{2}W|W|^{\frac{2}{\beta}}+\mathcal{O} \big{(}(1+\tfrac{1}{\beta})C^{3}|W|^{1+\frac{3}{\beta}}\big{)}\] \[=W+\mathcal{O}\big{(}(1+\tfrac{1}{\beta})C^{2}|x|^{1+\frac{2}{ \beta}}\big{)}.\]
The last equation is true because \(W(0)=0\) and \(W\) is \(C^{1}\). Therefore, we conclude that
\[W=-x+Cx|x|^{\frac{1}{\beta}}+\mathcal{O}\big{(}(1+\tfrac{1}{\beta})C^{2}|x|^{ 1+\frac{2}{\beta}}\big{)} \tag{2.11}\]
in a neighborhood of zero. Notice that the similarity profile agrees with the initial data chosen in § 2.1 near 0 if we drop the highest order term in this expansion.
## 3. The result
### Local well-posedness
It follows from the standard well-posedness theory of the Euler equations (1.5) that if \(s>\frac{3}{2}\) and \(w_{0},z_{0},k_{0}\in H^{s}(\mathbb{T})\) then there exists a maximal time of existence \(T_{*}\in(0,+\infty]\), such that the following hold
* the equations (1.7) have a unique classical solution \((w,z,k)\in C^{1}(\mathbb{T}\times[0,T_{*});\mathbb{R}^{3})\) with initial data \((w_{0},z_{0},k_{0})\);
* the unique solution \((w,z,k)\) also satisfies \[(w,z,k)\in C([0,T_{*});H^{s}(\mathbb{T})^{3})\cap C^{1}([0,T_{*});H^{s-1}( \mathbb{T})^{3}).\] (3.1) Furthermore, if \(T_{*}<\infty\), then the following **Eulerian blowup criterion** holds: \[\int_{0}^{T_{*}}\Bigl{(}\|\partial_{y}w(\cdot,t)\|_{L^{\infty}}+\|\partial_{y }z(\cdot,t)\|_{L^{\infty}}+\|\partial_{y}k(\cdot,t)\|_{L^{\infty}}\Bigr{)}dt=+ \infty\,.\] (3.2)
Throughout the remainder of the paper, the initial data \(w_{0},z_{0},k_{0}\) will always be assumed to lie in \(W^{2,q}(\mathbb{T})\) for some \(1<q\leq\infty\). In particular, the classical \(H^{s}\) local-existence theory applies for a suitable value of \(s>\frac{3}{2}\). To see this, recall that if \(n\in\mathbb{Z},n\geq 2\), and \(1<q\leq\infty\), then
\[W^{n,q}(\mathbb{T})\subset H^{s}(\mathbb{T})\]
with continuous embedding for
\[s=s(n,q)=\left\{\begin{array}{ll}n-(\frac{1}{q}-\frac{1}{2})&q\leq 2\\ n&q\geq 2\end{array}\right.\quad.\]
For all \(n\geq 2\), \(1<q\leq\infty\), we have \(s>\frac{3}{2}\). So if \(w_{0},z_{0},k_{0}\in W^{2,q}(\mathbb{T})\) for some \(1<q\leq\infty\), then \(w_{0},z_{0},k_{0}\in H^{s}(\mathbb{T})\) for some \(s>\frac{3}{2}\) and our local well-posedness theory applies to the initial data \(w_{0},z_{0},k_{0}\). As such, the _blowup time_\(T_{*}\in(0,+\infty]\) will be the time at which the corresponding solution can no longer be continued as a solution in \(C_{t}H^{s}_{x}\cap C^{1}_{t}H^{s-1}_{x}\) for any \(s>\frac{3}{2}\).
### Assumptions on the initial data
Let \(\beta>0\), and let \(\overline{w}_{0}\) be a \(2\pi\)-periodic function such that
* \(\overline{w}_{0}(x)=-x+\frac{\beta}{\beta+1}x|x|^{\frac{1}{\beta}}\) for \(|x|\leq 1\),
* \(|\overline{w}_{0}(x)|<1\) for all \(x\in\mathbb{T}\),
* \(0\leq\overline{w}^{\prime}_{0}(x)\leq 1\) for \(1\leq|x|\leq\pi\).
For our theorem we take \(\beta\geq 1\), and the initial data \((w_{0},z_{0},k_{0})\) will be such that \(w_{0}\) is a small perturbation of \(1+\overline{w}_{0}\) in \(C^{1,\frac{1}{\beta}}\), \(z_{0}\) is a small perturbation of \(0\) in \(W^{2,p}\), and \(k_{0}\) is a small perturbation of a constant state in \(W^{2,p}\), where \(p=\frac{\beta}{\beta-1}\) is the Holder conjugate of \(\beta\).
We note that when \(\beta>1\) and \(p=\frac{\beta}{\beta-1}\), \(\overline{w}_{0}\in C^{1,\frac{1}{\beta}}(\mathbb{T})\setminus W^{2,p}(\mathbb{T})\). Since \(W^{2,p}(\mathbb{T})\) is not dense in \(C^{1,\frac{1}{\beta}}\) for \(\beta>1\), small enough perturbations of \(\overline{w}_{0}\) in \(C^{1,\frac{1}{\beta}}\) will not be in \(W^{2,p}\), so \(w_{0}\) will not be in \(W^{2,p}\). When \(\beta=1\) however, \(C^{1,\frac{1}{\beta}}=C^{1,1}=W^{2,\infty}\), and we will be able to assume that \((w_{0},z_{0},k_{0})\) lie in an open set in \(W^{2,\infty}(\mathbb{T})\). However, \(w_{0}\) will still not be \(C^{2}\), because in order for \(w_{0}\) to be sufficiently close to \(1+\overline{w}_{0}\) in \(W^{2,\infty}\), \(w_{0}^{\prime\prime}\) will have to have a jump discontinuity at 0.
### Notation
At the beginning of most sections we will introduce a size parameter \(\varepsilon>0\). Throughout the paper, the hypothesis that \(\varepsilon>0\) was chosen to be sufficiently small will be implicit in the statements of all of our lemmas and propositions. How small \(\varepsilon>0\) will need to be in order for our results to hold will depend on our choice of \(\gamma\) (hence \(\alpha\)) and on \(1-\|\overline{w}_{0}\|_{L^{\infty}}\), but doesn't depend on anything else.
In what follows, we will often identify functions on the torus \(\mathbb{T}\) with \(2\pi\)-periodic functions on \(\mathbb{R}\) when the notation is most convenient. This identification will allow us to write expressions like "\(|x|\leq 1\)". We will always use \(x\in\mathbb{T}\) to denote the Lagrangian label and \(y\in\mathbb{T}\) to denote the Eulerian variable.
We will write \(a\lesssim b\) to indicate that \(a\leq Cb\), where the constant \(C\) is independent of \(\varepsilon\), the spatial variables \(x,y\), time \(t\), and the exponent \(p\in(1,\infty]\) (see § 7). The constant \(C\) can, however, be dependent on \(\gamma\), and therefore also on \(\alpha\), and it can depend on \(1-\|\overline{w}_{0}\|_{L^{\infty}}\) (see § 5). Furthermore, \(C\) will always be independent of our choice of initial data \((w_{0},z_{0},k_{0})\) satisfying the hypotheses specified at the beginning of each section, and it will be independent of our specific choice of \(\overline{w}_{0}\) satisfying the hypotheses listed in § 3.2. We will write \(a\sim b\) to mean that \(a\lesssim b\lesssim a\), and we will further adopt the notation \(f=\mathcal{O}(g)\) to mean that \(|f|\lesssim|g|\).
Often below we will have functions \(f\) defined on \(\mathbb{T}\times[0,T_{*})\) and maps \(\Psi:\mathbb{T}\times[0,T_{*})\to\mathbb{T}\), and we will use the notation
\[f\circ\Psi(x,t):=f(\Psi(x,t),t).\]
We will write \(\Psi^{-1}\) to denote the function such that \(\Psi^{-1}\circ\Psi(x,t)=\Psi\circ\Psi^{-1}(x,t)=x\) for all \(t\) when such an inverse exists. For functions \(f\) defined on \(\mathbb{T}\times[0,T_{*})\) we will write
\[[f]_{C^{0,\delta}}:=\sup_{\begin{subarray}{c}y,y^{\prime}\in\mathbb{T}\\ y\neq y^{\prime}\end{subarray}}\frac{|f(y,t)-f(y^{\prime},t)|}{|y-y^{\prime}|^ {\delta}}\]
so that \([\cdot]_{C^{0,\delta}}\) only corresponds to Holder continuity in the spatial variable and \([f]_{C^{0,\delta}}\) is always implicitly a function of \(t\) if \(f\) varies in \(t\).
### Statement of the theorem
**Theorem 3.1** (Main Result).: _Let \(\alpha>0\), \(\beta\geq 1\), let \(p=\frac{\beta}{\beta-1}\in(1,\infty]\) be the Holder conjugate of \(\beta\), and let \(\overline{w}_{0}\) be defined as in § 3.2 for this choice of \(\beta\). There exists a sufficiently small \(\varepsilon_{0}\in(0,1]\), which only depends on \(\alpha\) and on \(\|\overline{w}_{0}\|_{L^{\infty}}\), such that for any \(0<\varepsilon\leq\varepsilon_{0}\) the following holds._
_Let \(w_{0},z_{0},k_{0}\) be functions on \(\mathbb{T}\) satisfying_
* \(\|w_{0}-(1+\overline{w}_{0})\|_{C^{1,\frac{1}{\beta}}}<\varepsilon\)_,_
* \(\|z_{0}\|_{W^{2,p}}<\varepsilon\)_,_
* \(\|k_{0}-\int_{\mathbb{T}}k_{0}\|_{W^{2,p}}<\varepsilon\)_,_
* \(w_{0}\in W^{2,q}(\mathbb{T})\) _for some_ \(1<q<\infty\)_._
_The maximal time of existence \(T_{*}\) for the solution \((w,z,k)\) of (1.7) is_
\[T_{*}=\tfrac{2}{1+\alpha}+\mathcal{O}(\varepsilon). \tag{3.3}\]
_For such solutions, \(w\sim 1\) on \(\mathbb{T}\times[0,T_{*}]\), and \(z,k,\partial_{y}z,\) and \(\partial_{y}k\) remain \(\mathcal{O}(\varepsilon)\) on \(\mathbb{T}\times[0,T_{*}]\) while \(\partial_{y}w\) diverges to \(-\infty\) as \(t\to T_{*}\). At the time of blowup, there exists a unique point \(y_{*}\in\mathbb{T}\) such that \(w(\cdot,T_{*})\) is \(C^{1}\) away from \(y_{*}\) but forms a preshock at \((y_{*},T_{*})\) with a profile given by_
\[w(y,T_{*})=w(y_{*},T_{*})-\big{(}(1+\tfrac{1}{\beta})^{\frac{\beta}{\beta+1}} +\mathcal{O}(\varepsilon)\big{)}\mathrm{sgn}\,(y-y_{*})|y-y_{*}|^{\frac{\beta }{\beta+1}}+(1+\mathcal{O}(\varepsilon))(y-y_{*}) \tag{3.4}\]
_for \(|y-y_{*}|<\frac{\beta}{\beta+1}+\mathcal{O}(\varepsilon)\). Additionally, we know that \(y_{*}=1+\mathcal{O}(\varepsilon)\)._
_The variables \(z\) and \(k\) remain in \(C^{1,\frac{1}{\beta+1}}(\mathbb{T})\) uniformly up to time \(T_{*}\) with_
\[\|z(\cdot,t)\|_{C^{1,\frac{1}{\beta+1}}}\lesssim\varepsilon,\qquad\|k(\cdot,t )\|_{C^{1,\frac{1}{\beta+1}}}\lesssim\varepsilon. \tag{3.5}\]
_For any fixed \(\gamma>1\), the implicit constants in all of the above statements are independent of our choice of \(\varepsilon\) sufficiently small and are independent of our choice of \((w_{0},z_{0},k_{0})\) satisfying our above hypotheses. The implicit constants do depend, however, on our choice of \(\gamma>1\)._
### Outline of the proof
We show that the classical solution \((w,z,k)\) of (1.7) with initial data satisfying some of our hypotheses only exists for a finite time \(T_{*}\), and we will characterize \(T_{*}\) as the time when the flow \(\eta\) of the fastest wave speed \(\lambda_{3}\) stops being a diffeomorphism because \(\eta_{x}(\cdot,T_{*})\) has a zero. For solutions with appropriate initial data, we will prove that \(w,z,k,\partial_{y}z\), and \(\partial_{y}k\) are all bounded on \(\mathbb{T}\times[0,T_{*}]\) while \(\partial_{y}w\) diverges to \(-\infty\) as \(t\) approaches \(T_{*}\). The key technical step of the paper will be obtaining uniform \(L^{p}\) energy estimates for the functions \(w\circ\eta,z\circ\eta,k\circ\eta\), together with their derivatives of order \(\leq 2\). Using these \(L^{p}\) energy estimates, we will show that the unique label \(x_{*}\in\mathbb{T}\) for which \(\eta_{x}(x_{*},T_{*})=0\) is \(x_{*}=0\). Lastly, we invert the map \(x\to\eta(x,T_{*})\) for \(x\) near 0 and arrive at the shock profile described in Theorem 3.1.
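The leading-order content of the theorem can also be sanity-checked numerically. The short script below is a minimal illustration (not part of the proof): it uses the model profile \(\overline{w}_{0}'(x)=-1+|x|^{1/\beta}\) for \(|x|\leq 1\) from SS 3.2, the leading-order behaviour of \(\eta_{x}\) established in SS 5 (see (5.5)), and the profile formula (3.4), dropping all \(\mathcal{O}(\varepsilon)\) corrections. The function names and the sample values of \(\alpha\), \(\beta\), \(y_{*}\), and \(w(y_{*},T_{*})\) are our own choices for illustration.

```python
import numpy as np

def eta_x_leading_order(x, t, alpha, beta):
    """Leading-order eta_x(x,t) = 1 + (1+alpha)/2 * t * wbar0'(x), dropping O(eps) terms."""
    wbar0_prime = -1.0 + np.abs(x) ** (1.0 / beta)   # model profile on |x| <= 1
    return 1.0 + 0.5 * (1.0 + alpha) * t * wbar0_prime

def preshock_profile(y, y_star, w_star, beta):
    """Leading-order cusp profile (3.4) near the preshock point y_star."""
    s = y - y_star
    return (w_star
            - (1.0 + 1.0 / beta) ** (beta / (beta + 1.0))
              * np.sign(s) * np.abs(s) ** (beta / (beta + 1.0))
            + s)

alpha, beta = 0.5, 1.0
T_star = 2.0 / (1.0 + alpha)          # leading-order blowup time, cf. (3.3)
x = np.linspace(-1.0, 1.0, 5)
print("eta_x(x, T_*) ~", eta_x_leading_order(x, T_star, alpha, beta))
# eta_x vanishes only at x = 0 at t = T_*, matching the unique preshock label x_* = 0.
y = np.linspace(-0.5, 0.5, 5)
print("w(y, T_*) ~", preshock_profile(y, y_star=0.0, w_star=1.0, beta=beta))
```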
## 4. Key identities
In this section, \((w,z,k)\) will be the solution of (1.7) on \(\mathbb{T}\times[0,T_{*})\) for given initial data \((w_{0},z_{0},k_{0})\in H^{s}(\mathbb{T})\) for some \(s>\frac{3}{2}\).
In general, if \(\varphi\in\mathbb{R}\) and \(\lambda:=u+\varphi\sigma=\frac{1+\varphi}{2}w+\frac{1-\varphi}{2}z\), then
\[\partial_{t}\sigma+\lambda\partial_{y}\sigma=-\sigma\big{(}\tfrac{\alpha- \varphi}{2}\partial_{y}w+\tfrac{\alpha+\varphi}{2}\partial_{y}z\big{)}. \tag{4.1}\]
Setting \(\varphi=-\alpha,0,\) and \(\alpha\) we get
\[\sigma\!\circ\!\psi =\sigma_{0}e^{-\alpha\int_{0}^{t}\partial_{y}w\circ\psi},\] \[\sigma\!\circ\!\phi =\sigma_{0}e^{-\alpha\int_{0}^{t}\partial_{y}\frac{w+z}{2}\circ \phi},\] \[\sigma\!\circ\!\eta =\sigma_{0}e^{-\alpha\int_{0}^{t}\partial_{y}z\circ\eta}. \tag{4.2}\]
Since
\[\psi_{x} =e^{\int_{0}^{t}\partial_{y}\lambda_{1}\circ\psi}, \phi_{x} =e^{\int_{0}^{t}\frac{1}{2}(\partial_{y}w+\partial_{y}z)\circ\phi}, \eta_{x} =e^{\int_{0}^{t}\partial_{y}\lambda_{3}\circ\eta} \tag{4.3}\]
it follows that
\[\psi_{x} =(\tfrac{\sigma_{0}}{\sigma\circ\psi})^{\frac{1-\alpha}{2\alpha} }e^{\frac{1+\alpha}{2}\int_{0}^{t}\partial_{y}z\circ\psi}, \tag{4.4}\] \[\phi_{x} =(\tfrac{\sigma_{0}}{\sigma\circ\phi})^{\frac{1}{\alpha}},\] (4.5) \[\eta_{x} =(\tfrac{\sigma_{0}}{\sigma\circ\eta})^{\frac{1-\alpha}{2\alpha} }e^{\frac{1+\alpha}{2}\int_{0}^{t}\partial_{y}w\circ\eta}. \tag{4.6}\]
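For concreteness, here is the one-line computation showing how, e.g., (4.6) follows from (4.2) and (4.3), using \(\lambda_{3}=\tfrac{1+\alpha}{2}w+\tfrac{1-\alpha}{2}z\) (the case \(\varphi=\alpha\) above); (4.4) and (4.5) are obtained in the same way from the first two identities in (4.2).

```latex
% From (4.3) with \partial_y\lambda_3=\tfrac{1+\alpha}{2}\partial_y w+\tfrac{1-\alpha}{2}\partial_y z,
% and from the third identity in (4.2), \int_0^t\partial_y z\circ\eta=\tfrac{1}{\alpha}\log\tfrac{\sigma_0}{\sigma\circ\eta}:
\eta_x
 = e^{\int_0^t\left(\frac{1+\alpha}{2}\partial_y w+\frac{1-\alpha}{2}\partial_y z\right)\circ\eta}
 = e^{\frac{1-\alpha}{2}\int_0^t\partial_y z\circ\eta}\,e^{\frac{1+\alpha}{2}\int_0^t\partial_y w\circ\eta}
 = \Big(\tfrac{\sigma_0}{\sigma\circ\eta}\Big)^{\frac{1-\alpha}{2\alpha}}
   e^{\frac{1+\alpha}{2}\int_0^t\partial_y w\circ\eta}.
```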
The evolution of \(k\) along each of the three characteristic flows is given by

\[\partial_{t}(k\!\circ\!\psi) =-\alpha(\sigma\partial_{y}k)\!\circ\!\psi, \tag{4.7}\] \[\partial_{t}(k\!\circ\!\phi) =0, \tag{4.8}\] \[\partial_{t}(k\!\circ\!\eta) =\alpha(\sigma\partial_{y}k)\!\circ\!\eta. \tag{4.9}\]
It follows from (4.5) and (4.8) that
\[\partial_{y}k\!\circ\!\phi=\sigma_{0}^{-\frac{1}{\alpha}}k_{0}^{\prime}\sigma ^{\frac{1}{\alpha}}\!\circ\!\phi. \tag{4.10}\]
Define
\[q^{w}:=\partial_{y}w-\tfrac{1}{2\gamma}\sigma\partial_{y}k,\qquad\text{and} \qquad q^{z}:=\partial_{y}z+\tfrac{1}{2\gamma}\sigma\partial_{y}k. \tag{4.11}\]
Taking \(\partial_{t}\) of \(\eta_{x}q^{w}\!\circ\!\eta\) and \(\psi_{x}q^{z}\!\circ\!\psi\) and then integrating gives us the Duhamel formulas
\[\eta_{x}q^{w}\!\circ\!\eta =e^{\frac{1}{4\gamma}k\circ\eta}\bigg{[}(w_{0}^{\prime}-\tfrac{1} {2\gamma}\sigma_{0}k_{0}^{\prime})e^{-\frac{1}{4\gamma}k_{0}}+\tfrac{\alpha} {4\gamma}\int_{0}^{t}\!e^{-\frac{1}{4\gamma}k\circ\eta}\eta_{x}(\sigma\partial _{y}kq^{z})\!\circ\!\eta\,ds\bigg{]}, \tag{4.12}\] \[\psi_{x}q^{z}\!\circ\!\psi =e^{\frac{1}{4\gamma}k\circ\psi}\bigg{[}(z_{0}^{\prime}+\tfrac{1} {2\gamma}\sigma_{0}k_{0}^{\prime})e^{-\frac{1}{4\gamma}k_{0}}-\tfrac{\alpha} {4\gamma}\int_{0}^{t}\!e^{-\frac{1}{4\gamma}k\circ\psi}\psi_{x}(\sigma\partial _{y}kq^{w})\!\circ\!\psi\,ds\bigg{]}. \tag{4.13}\]
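To indicate where (4.12) comes from, here is a sketch of the integration: granting the evolution identity \(\partial_{t}(\eta_{x}\,q^{w}\!\circ\!\eta)=\tfrac{1}{4\gamma}\eta_{x}\,\partial_{t}(k\circ\eta)\,(q^{w}+q^{z})\circ\eta\) (recorded as (6.9) below in the notation of SS 6), one multiplies by the integrating factor \(e^{-\frac{1}{4\gamma}k\circ\eta}\) and integrates in time; (4.13) is analogous. The letters \(Q,R,K\) below are shorthand introduced only for this sketch.

```latex
% Write Q := q^w\circ\eta, R := q^z\circ\eta, K := k\circ\eta.
% Granting \partial_t(\eta_x Q)=\tfrac{1}{4\gamma}\eta_x K_t\,(Q+R):
\partial_t\big(e^{-\frac{1}{4\gamma}K}\eta_x Q\big)
 = e^{-\frac{1}{4\gamma}K}\big(\partial_t(\eta_x Q)-\tfrac{1}{4\gamma}K_t\,\eta_x Q\big)
 = \tfrac{1}{4\gamma}e^{-\frac{1}{4\gamma}K}\eta_x K_t\,R
 = \tfrac{\alpha}{4\gamma}e^{-\frac{1}{4\gamma}K}\eta_x(\sigma\partial_y k\,q^z)\circ\eta,
% where the last step used K_t=\alpha(\sigma\partial_y k)\circ\eta from (4.9).
% Integrating on [0,t] and multiplying by e^{\frac{1}{4\gamma}K} recovers (4.12).
```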
## 5. Initial Estimates
In this section, \(\beta\geq 1\), \(\overline{w}_{0}\) will be a function satisfying the hypotheses described in SS 3.2, and \((w,z,k)\) will be the solution of (1.7) on \(\mathbb{T}\times[0,T_{*})\) for given initial data \((w_{0},z_{0},k_{0})\) satisfying
* \(\|w_{0}-(1+\overline{w}_{0})\|_{C^{1}}<\varepsilon\),
* \(\|z_{0}\|_{C^{1}}<\varepsilon\),
* \(\|k_{0}-\int_{\mathbb{T}}k_{0}\|_{C^{1}}<\varepsilon\),
* \(w_{0},z_{0},k_{0}\in W^{2,q}(\mathbb{T})\) for some \(1<q\leq\infty\).
The last assumption is simply there so that we can apply our local well-posedness theory and Lipschitz continuation criterion described in SS 3.1. All implicit constants will be independent of our choice of \((w_{0},z_{0},k_{0})\) satisfying these constraints and independent of our choice of \(\beta\geq 1\), but will depend on our choice of \(\alpha>0\).
**Lemma 5.1**.: _Suppose that \(T\in[0,\frac{4}{1+\alpha}\wedge T_{*}]\) and that for all \((y,t)\in\mathbb{T}\times[0,T]\) we have_
* \(w\sim 1\),
* \(|k|,|\partial_{y}k|,|z|,|\partial_{y}z|\lesssim\varepsilon\)_._
_Then if \(\varphi<\alpha\) and \(\Psi\) is defined to be the flow of \(\lambda:=u+\varphi\sigma\) we have_
* \(\eta_{x}\leq 3+\alpha\)
* \(\eta_{x}q^{w}\!\circ\!\eta=\overline{w}_{0}^{\prime}+\mathcal{O}(\varepsilon)\),
* \(\int_{0}^{t}|\partial_{y}w\!\circ\!\Psi(x,s)|\;ds\lesssim\frac{t}{\alpha-\varphi}\),
_for all \((x,t)\in\mathbb{T}\times[0,T]\). Note that the implicit constants in our conclusions are independent of our choice of \(\varphi<\alpha\) or \(T\in[0,\frac{4}{1+\alpha}\wedge T_{*}]\) and depend only on the initial data and the implicit constants in our hypotheses._
Proof of Lemma 5.1.: Fix \(\varphi<\alpha\) and define
\[g(x,t):=\eta^{-1}(\Psi(x,t),t).\]
We compute that
\[\partial_{t}g(x,t) =\partial_{t}\eta^{-1}(\Psi(x,t),t)+\partial_{y}\eta^{-1}(\Psi(x,t),t)\Psi_{t}(x,t)\] \[=\frac{-\eta_{t}(g(x,t),t)+\Psi_{t}(x,t)}{\eta_{x}(g(x,t),t)}\] \[=\frac{-\lambda_{3}\!\circ\!\eta(g(x,t),t)+\lambda\!\circ\!\Psi(x,t)}{\eta_{x}(g(x,t),t)}\] \[=\bigg{(}\frac{-\lambda_{3}\!\circ\!\eta+\lambda\!\circ\!\eta}{\eta_{x}}\bigg{)}(g(x,t),t)\] \[=-(\alpha-\varphi)\frac{\sigma\!\circ\!\eta}{\eta_{x}}(g(x,t),t). \tag{5.1}\]
Note that \(\partial_{t}g(x,t)<0\) everywhere. We also know that
\[\Psi(x,t) =x+\int_{0}^{t}\!\!\lambda\!\circ\!\Psi(x,\tau)\;d\tau,\quad \text{ and }\] \[\Psi(x,t) =\eta(g(x,t),t)=g(x,t)+\int_{0}^{t}\!\!\lambda_{3}\!\circ\!\eta(g (x,t),\tau)\;d\tau,\] so \[x-g(x,t) =\int_{0}^{t}\!\!\lambda_{3}\!\circ\!\eta(g(x,t),\tau)-\lambda \!\circ\!\Psi(x,\tau)\;d\tau.\]
Using our hypotheses, along with (4.12), we conclude that for all \((x,t)\in\mathbb{T}\times[0,T]\), we have

\[\sup_{[0,t]}|\eta_{x}q^{w}\!\circ\!\eta|\leq 1+\mathcal{O}(\varepsilon)+\mathcal{O}(\varepsilon)\sup_{[0,t]}\eta_{x}.\]
Since
\[\eta_{x} =1+\int_{0}^{t}\!\!\eta_{x}\partial_{y}\lambda_{3}\!\circ\!\eta\;ds\] \[=1+\int_{0}^{t}\!\!\eta_{x}(\tfrac{1+\alpha}{2}q^{w}\!\circ\! \eta+\mathcal{O}(\varepsilon))\;ds,\]
it follows that
\[\sup_{[0,t]}\eta_{x} \leq 1+\tfrac{1+\alpha}{2}t+\mathcal{O}(\varepsilon t)+\mathcal{O} (\varepsilon t)\sup_{[0,t]}\eta_{x}\] \[\implies\sup_{[0,t]}\eta_{x} \leq\frac{1+\tfrac{1+\alpha}{2}t+\mathcal{O}(\varepsilon t)}{1- \mathcal{O}(\varepsilon t)}\leq 3+\alpha.\]
The last inequality is true for \(\varepsilon>0\) taken to be small enough, since \(t\leq\frac{4}{1+\alpha}\). Plugging this into (4.12) and letting \(\varepsilon\) be sufficiently small gives us
\[q^{w}\!\circ\!\eta=\frac{\overline{w}_{0}^{\prime}+\mathcal{O}(\varepsilon) }{\eta_{x}}.\]
It follows that
\[q^{w}\!\circ\!\Psi(x,t)=-\tfrac{1}{\alpha-\varphi}\partial_{t}g(x,t)\frac{ \overline{w}_{0}^{\prime}(g(x,t))+\mathcal{O}(\varepsilon)}{\sigma\!\circ\! \Psi(x,t)}.\]
Since \(\partial_{t}g<0\), it follows that
\[|q^{w}\!\circ\!\Psi(x,t)|\lesssim-\tfrac{1}{\alpha-\varphi}\partial_{t}g(x,t).\]
So
\[\int_{0}^{t}\!\!|q^{w}\!\circ\!\Psi(x,\tau)|\;d\tau\lesssim\frac{x-g(x,t)}{ \alpha-\varphi}\lesssim\tfrac{t}{\alpha-\varphi}\]
Our result follows immediately from this inequality and our hypotheses.
**Lemma 5.2**.: _For all \((y,t)\in\mathbb{T}\times[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\) we have_
* \(\tfrac{1}{2}-\mathcal{O}(\varepsilon)\leq w\leq\tfrac{3}{2}+\mathcal{O}(\varepsilon)\)_,_
* \(|z|,|k|,|\partial_{y}k|\lesssim\varepsilon\)_._
_Further, for all \((x,t)\in\mathbb{T}\times[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\) we have \(\phi_{x}\sim 1\)._
Proof.: This follows from a simple bootstrap argument. Let \(T\in[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\) and assume that
* \(\tfrac{1}{4}\leq w\leq 2\) for all \((y,t)\in\mathbb{T}\times[0,T]\),
* \(|z|,|k|,|\partial_{y}k|\leq 1\) for all \((y,t)\in\mathbb{T}\times[0,T]\),
* \(4^{-\tfrac{1}{\alpha}}\leq\phi_{x}\leq 4^{\tfrac{1}{\alpha}}\) for all \((x,t)\in\mathbb{T}\times[0,T]\).
Then it follows from (4.8) that \(k\circ\phi=k_{0}=\mathcal{O}(\varepsilon)\) and \(\partial_{y}k\circ\phi=\phi_{x}^{-1}k_{0}^{\prime}=\mathcal{O}(\varepsilon)\) for all \((x,t)\in\mathbb{T}\times[0,T]\). Our bootstrap assumptions tell us that \(|\sigma|\lesssim 1\) for \((y,t)\in\mathbb{T}\times[0,T]\), so \(\sigma^{2}\partial_{y}k=\mathcal{O}(\varepsilon)\) for \((y,t)\in\mathbb{T}\times[0,T]\). It therefore follows from equations (1.5) that
\[w\!\circ\!\eta =w_{0}+\mathcal{O}(\varepsilon t)=1+\overline{w}_{0}+\mathcal{O} (\varepsilon),\] \[z\!\circ\!\psi =z_{0}+\mathcal{O}(\varepsilon t)=z_{0}+\mathcal{O}(\varepsilon),\] \[\text{so }\tfrac{1}{2}-\mathcal{O}(\varepsilon)\leq w\leq\tfrac{3}{2}+\mathcal{O}(\varepsilon),\] \[\text{and }z=\mathcal{O}(\varepsilon).\]
This implies that \(\tfrac{1}{4}-\mathcal{O}(\varepsilon)\leq\sigma\leq\tfrac{3}{4}+\mathcal{O}(\varepsilon)\) for \((y,t)\in\mathbb{T}\times[0,T]\) and it therefore follows from (4.5) that
\[(3+\mathcal{O}(\varepsilon))^{-\tfrac{1}{\alpha}}\leq\phi_{x}\leq(3+\mathcal{O}(\varepsilon))^{\tfrac{1}{\alpha}}.\]

For \(\varepsilon\) sufficiently small, these bounds are strictly stronger than the corresponding bootstrap assumptions, so the bootstrap closes and the stated estimates hold on all of \([0,\tfrac{4}{1+\alpha}\wedge T_{*}]\).
**Lemma 5.3**.: _For all \((x,t)\in\mathbb{T}\times[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\) we have_
* \(\eta_{x}q^{w}\circ\eta=\overline{w}_{0}^{\prime}+\mathcal{O}(\varepsilon)\)_,_
* \(\eta_{x}\leq 3+\alpha\)_,_
* \(\int_{0}^{t}|\partial_{y}w\circ\psi|\lesssim 1\)_,_
* \(\int_{0}^{t}|\partial_{y}w\circ\phi|\lesssim 1\)_,_
* \(|\partial_{y}z\circ\psi|\lesssim\varepsilon\)_,_
* \(\psi_{x}\sim 1\)_._
Proof of Lemma 5.3.: Another simple bootstrap argument. Let \(T\in[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\) and let us assume that
\[|q^{z}|\leq C\varepsilon\]
where \(C>1\) is a constant to be determined. Since \(\partial_{y}k=\mathcal{O}(\varepsilon)\) and \(\sigma\sim 1\) (see Lemma 5.2), it follows from the bootstrap assumption and (4.4) that \(\psi_{x}\sim 1\) provided that \(\varepsilon\) is small enough relative to \(C\).
The bootstrap assumption and the bounds from Lemma 5.2 allow us to use Lemma 5.1 to conclude that \(\int_{0}^{t}|q^{w}\!\circ\!\eta|\lesssim 1\). Since \(\psi_{x}\sim 1\) for \((x,t)\in\mathbb{T}\times[0,T]\), we now conclude from (4.13) that
\[|\psi_{x}q^{z}\!\circ\!\psi-z_{0}^{\prime}|\lesssim\varepsilon.\]
It therefore follows that if \(C>1\) is chosen large enough and \(\varepsilon\) is chosen small enough the bootstrap argument closes.
The rest of the inequalities are now immediate from Lemma 5.1.
**Proposition 5.4** (**Lagrangian blowup criterion)**.: _The blowup time \(T_{*}\) (defined by the Eulerian blowup criterion (3.2)) satisfies_
\[T_{*}=\tfrac{2}{1+\alpha}+\mathcal{O}(\varepsilon), \tag{5.2}\]
_and therefore the bounds from the previous lemmas hold up to time \(T_{*}\). Furthermore, the Eulerian blowup criterion (3.2) implies the Lagrangian blowup criterion_
\[\liminf_{t\to T_{*}}\Bigl{(}\min_{x\in\mathbb{T}}\eta_{x}(x,t)\Bigr{)}=0, \tag{5.3}\]
_and we have the bound_
\[\inf_{\begin{subarray}{c}1\leq|x|\leq\pi\\ 0\leq t\leq T_{*}\end{subarray}}\eta_{x}(x,t)>\tfrac{1}{2}. \tag{5.4}\]
_Lastly, we note that the Lagrangian blowup criterion (5.3) also implies the Eulerian blowup criterion (3.2), and so we may use (5.3) as the definition of \(T_{*}\)._
Proof of Proposition 5.4.: We know from Lemma 5.3 that for \((x,t)\in\mathbb{T}\times[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\) we have \(\eta_{x}q^{w}\circ\eta=\overline{w}_{0}^{\prime}+\mathcal{O}(\varepsilon)\), so it follows from Lemma 5.2 that
\[\eta_{x}=1+\int_{0}^{t}\!\!\eta_{x}\partial_{y}\lambda_{3}\!\circ\!\eta\;ds=1+ \tfrac{1+\alpha}{2}\!\int_{0}^{t}\!\!\eta_{x}q^{w}\!\circ\!\eta\;ds+\mathcal{ O}(\varepsilon)=1+\tfrac{1+\alpha}{2}t\overline{w}_{0}^{\prime}+\mathcal{O}(\varepsilon) \tag{5.5}\]
for \((x,t)\in\mathbb{T}\times[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\). It follows that
\[0\leq\eta_{x}(0,t)=1-\tfrac{1+\alpha}{2}t+\mathcal{O}(\varepsilon)\]
for \(t\in[0,\tfrac{4}{1+\alpha}\wedge T_{*}]\). Therefore, we must have \(T_{*}\leq\tfrac{2}{1+\alpha}+\mathcal{O}(\varepsilon)\).
Since \(T_{*}=\tfrac{4}{1+\alpha}\wedge T_{*}\), it follows that \(k,z,\partial_{y}k,\) and \(\partial_{y}z\) are \(\mathcal{O}(\varepsilon)\) and \(w\) is \(\mathcal{O}(1)\) up to time \(T_{*}\). Therefore, it follows from (3.2) that
\[\limsup_{t\to T_{*}}\|\partial_{y}w(\cdot,t)\|_{L^{\infty}}=\infty.\]
Since \(\partial_{y}w\!\circ\!\eta=(\overline{w}_{0}^{\prime}+\mathcal{O}(\varepsilon ))\eta_{x}^{-1}\), we conclude that
\[\infty=\limsup_{t\to T_{*}}\|\partial_{y}w(\cdot,t)\|_{L^{\infty}}=\limsup_{t \to T_{*}}\|\partial_{y}w\circ\eta(\cdot,t)\|_{L^{\infty}}\leq 2\limsup_{t \to T_{*}}\|\eta_{x}^{-1}(\cdot,t)\|_{L^{\infty}}\;,\]
and hence (5.3) holds.
Since
\[\eta_{x}\geq 1-\tfrac{1+\alpha}{2}t+\mathcal{O}(\varepsilon)\]
for all \((x,t)\in\mathbb{T}\times[0,T_{*})\), we conclude that \(T_{*}\geq\tfrac{2}{1+\alpha}+\mathcal{O}(\varepsilon)\), otherwise (5.3) would be violated.
To see why (5.4) is true, just note that \(\overline{w}_{0}^{\prime}(x)\geq 0\) for \(1\leq|x|\leq\pi\), so that \(\eta_{x}\geq 1+\mathcal{O}(\varepsilon)\) for \(1\leq|x|\leq\pi\).
Lastly, to see that (5.3) implies (3.2), we show that _not_ (3.2) implies _not_ (5.3). More precisely, if there exists \(0<M<\infty\) such that
\[\int_{0}^{T_{*}}\Bigl{(}\|\partial_{y}w(\cdot,t)\|_{L^{\infty}}+\|\partial_{y }z(\cdot,t)\|_{L^{\infty}}\Bigr{)}dt\leq M,\]
then (4.2) and the triangle inequality gives \(e^{-\alpha M}\leq\tfrac{\sigma\circ\eta}{\sigma_{0}}(x,t)\leq e^{\alpha M}\) for all \((x,t)\in\mathbb{T}\times[0,T_{*}]\), which may be combined with (4.6) to yield \(e^{-M}\leq\eta_{x}(x,t)\leq e^{M}\) for all \((x,t)\in\mathbb{T}\times[0,T_{*}]\). Thus, (5.3) fails, concluding the proof.
Let us now prove an important inequality for \(\eta_{x}q^{w}\circ\eta\) using (5.2). For \(\frac{1}{1+\alpha}\leq t\leq T_{*}\), (5.5) tells us that
\[\overline{w}^{\prime}_{0}=\tfrac{2}{1+\alpha}(\eta_{x}-1+\mathcal{O}(\varepsilon ))\tfrac{1}{t}.\]
Therefore, for \(\frac{1}{1+\alpha}\leq t\leq T_{*}\) we have
\[\eta_{x}q^{w}\circ\eta=\overline{w}^{\prime}_{0}+\mathcal{O}(\varepsilon) =\tfrac{1}{t}\tfrac{2}{1+\alpha}\eta_{x}+\tfrac{1}{t}\tfrac{2}{1+ \alpha}(-1+\mathcal{O}(\varepsilon))\] \[\leq 2\eta_{x}+\tfrac{1}{t}\tfrac{2}{1+\alpha}(-1+\mathcal{O}( \varepsilon))\] \[\leq 2\eta_{x}-1+\mathcal{O}(\varepsilon)\] \[\leq-\tfrac{1}{2}+2\eta_{x}.\]
For \(0\leq t\leq\frac{1}{1+\alpha}\), (5.5) gives
\[\eta_{x}=1+\tfrac{1+\alpha}{2}t\overline{w}^{\prime}_{0}+\mathcal{O}( \varepsilon)\geq 1-\tfrac{1+\alpha}{2}t+\mathcal{O}(\varepsilon)\geq\tfrac{1}{ 2}+\mathcal{O}(\varepsilon).\]
Therefore, for \(0\leq t\leq\frac{1}{1+\alpha}\) we get
\[\eta_{x}q^{w}\circ\eta=\overline{w}^{\prime}_{0}+\mathcal{O}(\varepsilon) \leq 1+\mathcal{O}(\varepsilon)=(1-4\eta_{x})+\mathcal{O}(\varepsilon)+4 \eta_{x}\leq-\tfrac{1}{2}+4\eta_{x}.\]
We have now proven that
\[\eta_{x}q^{w}\circ\eta\leq-\tfrac{1}{2}+4\eta_{x}\qquad\forall\;(x,t)\in \mathbb{T}\times[0,T_{*}). \tag{5.6}\]
### An identity for \(\partial_{y}^{2}k\)
Note that since \(\sigma\) and \(u\) are \(C^{1}\) on \(\mathbb{T}\times[0,T_{*})\), it follows from (4.5) and the fact that \(\phi_{t}=u\circ\phi\) that \(\phi\) is \(C^{2}\) on \(\mathbb{T}\times[0,T_{*})\). Since \(\phi_{x}\sim 1\), it follows that \(\phi^{-1}\) is \(C^{2}\) on \(\mathbb{T}\times[0,T_{*})\). It follows that \(k=k_{0}\circ\phi^{-1}\in W^{2,q}_{\text{loc}}(\mathbb{T}\times[0,T_{*}))\).
Differentiating (4.10) in \(x\) and using (4.5) gives us
\[(\partial_{y}^{2}k-\tfrac{1}{\alpha}\sigma^{-1}\partial_{y}\sigma\partial_{y} k)\circ\phi=\phi_{x}^{2}(k^{\prime\prime}_{0}-\tfrac{1}{\alpha}\sigma_{0}^{-1} \sigma_{0}^{\prime}k^{\prime}_{0}).\]
Therefore,
\[\partial_{y}^{2}k=\tfrac{1}{\alpha}\sigma^{-1}\partial_{y}\sigma\partial_{y}k +\left[\phi_{x}^{2}(k^{\prime\prime}_{0}-\tfrac{1}{\alpha}\sigma_{0}^{-1} \sigma_{0}^{\prime}k^{\prime}_{0})\right]\circ\phi^{-1}. \tag{5.7}\]
## 6. Identities along the 3-characteristic
In this section, \(\beta\geq 1\), \(\overline{w}_{0}\) will be a function satisfying the hypotheses described in SS 3.2, and \((w,z,k)\) will be the solution of (1.7) on \(\mathbb{T}\times[0,T_{*})\) for given initial data \((w_{0},z_{0},k_{0})\) satisfying
* \(\|w_{0}-(1+\overline{w}_{0})\|_{C^{1}}<\varepsilon\),
* \(\|z_{0}\|_{C^{1}}<\varepsilon\),
* \(\|k_{0}-\int_{\mathbb{T}}k_{0}\|_{C^{1}}<\varepsilon\),
* \(w_{0},z_{0},k_{0}\in W^{2,q}(\mathbb{T})\) for some \(1<q\leq\infty\).
All implicit constants will be independent of our choice of \((w_{0},z_{0},k_{0})\) satisfying these constraints and independent of our choice of \(\beta\geq 1\), but will depend on our choice of \(\alpha>0\).
Adopt the notation
\[W :=w\circ\eta, \mathring{W} :=q^{w}\circ\eta,\] \[Z :=z\circ\eta, \mathring{Z} :=q^{z}\circ\eta,\] \[K :=k\circ\eta, \mathring{K} :=\partial_{y}k\circ\eta,\] \[\Sigma :=\sigma\circ\eta, \mathring{\Sigma} :=\partial_{y}\sigma\circ\eta.\]
Using (1.7) and the equation \(\eta_{t}=\lambda_{3}\circ\eta\) gives us
\[K_{t} =\alpha\Sigma\mathring{K}, \tag{6.1}\] \[Z_{t} =\Sigma(2\alpha\mathring{Z}-\tfrac{1}{2\gamma}K_{t}) \tag{6.2}\]
\[\Sigma_{t} =-\Sigma(\alpha\mathring{Z}-\tfrac{1}{2\gamma}K_{t}) \tag{6.3}\] \[\eta_{xt} =\eta_{x}(\tfrac{1+\alpha}{2}\mathring{W}+\tfrac{1-\alpha}{2}\mathring{Z}+\tfrac{1}{2\gamma}K_{t}). \tag{6.4}\]
Taking \(\partial_{x}\) of (6.1) and using the chain rule gives us
\[K_{xt}=\eta_{x}(\alpha\mathring{\Sigma}\mathring{K}+\alpha\Sigma\partial_{y}^{2}k\!\circ\!\eta) \tag{6.5}\]
Since \(K_{xt}=\eta_{xt}\mathring{K}+\eta_{x}\mathring{K}_{t}\), it follows from (5.7), (6.4), and (6.5) that

\[\mathring{K}_{t}=\tfrac{K_{xt}-\eta_{xt}\mathring{K}}{\eta_{x}} =-\mathring{K}\mathring{Z}+\tfrac{1}{2\gamma}\Sigma\mathring{K}^{2}+\alpha\Sigma\big{[}\phi_{x}^{2}(k_{0}^{\prime\prime}-\tfrac{1}{\alpha}\sigma_{0}^{-1}\sigma_{0}^{\prime}k_{0}^{\prime})\big{]}\!\circ\!\phi^{-1}\!\circ\!\eta\] \[=\mathcal{O}(\varepsilon)+\mathcal{O}(k_{0}^{\prime\prime}\circ\phi^{-1}\!\circ\!\eta). \tag{6.6}\]
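For the reader's convenience, the cancellation behind the first equality in (6.6) is spelled out below. It uses (6.4), (6.5), (5.7), (6.1), and the identity \(\mathring{\Sigma}=\tfrac{1}{2}(\mathring{W}-\mathring{Z})+\tfrac{1}{2\gamma}\Sigma\mathring{K}\), which follows from \(\sigma=\tfrac{w-z}{2}\) (equivalently \(W=2\Sigma+Z\), as used in SS 8) together with the definitions (4.11).

```latex
% Expand K_{xt}/\eta_x via (6.5) and (5.7), and \eta_{xt}/\eta_x via (6.4) with K_t=\alpha\Sigma\mathring K from (6.1):
\mathring K_t
 =\frac{K_{xt}}{\eta_x}-\frac{\eta_{xt}}{\eta_x}\mathring K
 =(1+\alpha)\mathring\Sigma\mathring K
  +\alpha\Sigma\big[\phi_x^2\big(k_0''-\tfrac{1}{\alpha}\sigma_0^{-1}\sigma_0'k_0'\big)\big]\circ\phi^{-1}\circ\eta
  -\mathring K\Big(\tfrac{1+\alpha}{2}\mathring W+\tfrac{1-\alpha}{2}\mathring Z+\tfrac{\alpha}{2\gamma}\Sigma\mathring K\Big).
% Substituting \mathring\Sigma=\tfrac12(\mathring W-\mathring Z)+\tfrac{1}{2\gamma}\Sigma\mathring K, the \mathring K\mathring W terms cancel:
(1+\alpha)\mathring K\Big(\tfrac{\mathring W-\mathring Z}{2}+\tfrac{1}{2\gamma}\Sigma\mathring K\Big)
 -\tfrac{1+\alpha}{2}\mathring K\mathring W-\tfrac{1-\alpha}{2}\mathring K\mathring Z-\tfrac{\alpha}{2\gamma}\Sigma\mathring K^2
 =-\mathring K\mathring Z+\tfrac{1}{2\gamma}\Sigma\mathring K^2 .
```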
Therefore, taking \(\partial_{t}\) of (6.1) gives us
\[K_{tt}=\mathcal{O}(\varepsilon+k_{0}^{\prime\prime}\circ\phi^{-1}\!\circ\! \eta). \tag{6.7}\]
Taking \(\partial_{t}\) of (6.3) and using (6.7) yields
\[\Sigma_{tt}=-\alpha\Sigma\mathring{Z}_{t}+\mathcal{O}(\varepsilon+k_{0}^{\prime\prime}\circ\phi^{-1}\!\circ\!\eta). \tag{6.8}\]
Taking \(\partial_{t}\) of (4.12) gives us
\[\partial_{t}(\eta_{x}\mathring{W})=\tfrac{1}{4\gamma}\eta_{x}K_{t}(\mathring{W}+\mathring{Z}). \tag{6.9}\]
Taking \(\partial_{t}\) of (6.4) and using (6.7) and (6.9) produces
\[\eta_{xtt}=\eta_{x}\big{[}\tfrac{1+\alpha}{2}\mathring{W}(\tfrac{1-\alpha}{2}\mathring{Z}+\tfrac{3}{4\gamma}K_{t})+\tfrac{1-\alpha}{2}\mathring{Z}_{t}+\mathcal{O}(\varepsilon+k_{0}^{\prime\prime}\circ\phi^{-1}\!\circ\!\eta)\big{]}. \tag{6.10}\]
It is immediate that
\[\eta_{x}\mathring{Z}=Z_{x}+\tfrac{1}{2\gamma}\Sigma K_{x}.\]
Taking \(\partial_{t}\) of this equation and using (6.2) gives us
\[\partial_{t}(\eta_{x}\mathring{Z})=2\alpha\partial_{x}(\Sigma\mathring{Z})+\tfrac{1}{2\gamma}(\Sigma_{t}K_{x}-\Sigma_{x}K_{t}). \tag{6.11}\]
Since
\[\partial_{t}(\eta_{x}\mathring{Z})=\eta_{xt}\mathring{Z}+\eta_{x}\mathring{Z}_{t},\]
it follows from (6.4) and (6.11) that
\[2\alpha\Sigma\mathring{Z}_{x} =\eta_{x}\mathring{Z}_{t}-2\alpha\Sigma_{x}\mathring{Z}+\tfrac{1}{2\gamma}(\Sigma_{x}K_{t}-\Sigma_{t}K_{x})+\eta_{xt}\mathring{Z}\] \[=\eta_{x}\mathring{Z}_{t}-2\alpha\Sigma_{x}\mathring{Z}+\tfrac{1}{2\gamma}(\Sigma_{x}K_{t}-\Sigma_{t}K_{x})+\eta_{x}\mathring{Z}(\tfrac{1+\alpha}{2}\mathring{W}+\tfrac{1-\alpha}{2}\mathring{Z}+\tfrac{1}{2\gamma}K_{t}). \tag{6.12}\]
Taking \(\partial_{x}\) of (6.3) gives us
\[\Sigma_{tx}=-\Sigma_{x}(\alpha\mathring{Z}-\tfrac{1}{2\gamma}K_{t})-\Sigma(\alpha\mathring{Z}_{x}-\tfrac{1}{2\gamma}K_{tx}). \tag{6.13}\]
So it follows that taking \(\partial_{t}\) of (6.11) 8 and using (6.13), (6.3), (6.4), (6.10), (6.8), (6.5), (6.7), and (6.12) results in
Footnote 8: For those concerned that we may not have enough derivatives in \(L^{2}\) to do this, we can circumnavigate this issue by first taking data in \(U_{\varepsilon}\cap C^{\infty}\), deriving the estimates from § 7-8 in this case, and then passing the estimates to the limit to all solutions with data in \(V_{\varepsilon}\).
\[\eta_{x}\mathring{Z}_{tt}=2\alpha\Sigma\mathring{Z}_{xt}+\eta_{x}\mathring{Z}_{t}(-\mathring{W}+\mathcal{O}(\varepsilon))+\eta_{x}\big{(}\mathcal{O}(\varepsilon)\mathring{W}+\mathcal{O}(\varepsilon)+\mathcal{O}(\varepsilon k_{0}^{\prime\prime}\!\circ\!\phi^{-1}\!\circ\!\eta)\big{)}. \tag{6.14}\]
This identity is the main computational step for the energy estimates in the next section.
Similar computations to those used to obtain (6.1)-(6.4) allow us to write
\[\mathring{Z}_{t}=\big{(}-\partial_{y}\lambda_{3}q^{z}+2\alpha\sigma\partial_{y}^{2}z+\alpha(2\partial_{y}\sigma-\tfrac{1}{2\gamma}\sigma\partial_{y}k)\partial_{y}z+\tfrac{3\alpha}{2\gamma}\sigma\partial_{y}\sigma\partial_{y}k+\tfrac{\alpha}{\gamma}\sigma^{2}\partial_{y}^{2}k\big{)}\!\circ\!\eta.\]
This gives us
\[\mathring{Z}_{t}(\cdot,0) =-(\tfrac{1+\alpha}{2}w_{0}^{\prime}+\tfrac{1-\alpha}{2}z_{0}^{\prime})(z_{0}^{\prime}+\tfrac{1}{2\gamma}\sigma_{0}k_{0}^{\prime})\] \[\qquad+2\alpha\sigma_{0}z_{0}^{\prime\prime}+\alpha(2\sigma_{0}^{\prime}-\tfrac{1}{2\gamma}\sigma_{0}k_{0}^{\prime})z_{0}^{\prime}+\tfrac{3\alpha}{2\gamma}\sigma_{0}\sigma_{0}^{\prime}k_{0}^{\prime}+\tfrac{\alpha}{\gamma}\sigma_{0}^{2}k_{0}^{\prime\prime}. \tag{6.15}\]
So
\[|\mathring{Z}_{t}(x,0)|\lesssim|z_{0}^{\prime\prime}|+|k_{0}^{\prime\prime}|+\varepsilon. \tag{6.16}\]
This inequality will be utilized in the energy estimates below.
## 7. Energy estimates
In this section, \(1<p<\infty\), \(\beta\geq 1\), \(\overline{w}_{0}\) will be a function satisfying the hypotheses described in SS 3.2, and \((w,z,k)\) will be the solution of (1.7) on \(\mathbb{T}\times[0,T_{*})\) for given initial data \((w_{0},z_{0},k_{0})\) satisfying
* \(\|w_{0}-(1+\overline{w}_{0})\|_{C^{1}}<\varepsilon\),
* \(\|z_{0}\|_{C^{1}}<\varepsilon\),
* \(\|k_{0}-\int_{\mathbb{T}}k_{0}\|_{C^{1}}<\varepsilon\),
* \(w_{0}\in W^{2,q}(\mathbb{T})\) for some \(1<q<\infty\),
* \(z_{0},k_{0}\in W^{2,p}(\mathbb{T})\).
All implicit constants will be independent of our choice of \(p\in(1,\infty)\), independent of our choice of \((w_{0},z_{0},k_{0})\) satisfying these constraints, and independent of our choice of \(\beta\geq 1\), but will depend on our choice of \(\alpha>0\).
Let \(b>0\) be a parameter to be determined, and suppose that the quantity
\[E_{p}(t):=\int_{\mathbb{T}}\!\!\Sigma^{-bp}(x,t)\eta_{x}(x,t)|\mathring{Z}_{t}(x,t)|^{p}\,dx \tag{7.1}\]
defines a finite, differentiable function for \(t\in[0,T_{*})\). 9
Footnote 9: We will address this assumption below. See Remark 8.1
Using the identities from SS 6, we have
\[\dot{E}_{p} =-bp\int\tfrac{\Sigma_{t}}{\Sigma}\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}+\int\!\!\Sigma^{-bp}\eta_{xt}|\mathring{Z}_{t}|^{p}+p\int\!\!\Sigma^{-bp}\eta_{x}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathring{Z}_{tt}\] \[=\int\!\!\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}\big{[}\tfrac{1+\alpha}{2}\mathring{W}+\mathcal{O}((1+b)(1+p)\varepsilon)\big{]}+p\int\!\!\Sigma^{-bp}\eta_{x}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathring{Z}_{tt}\] \[=\int\!\!\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}\big{[}\tfrac{1+\alpha}{2}\mathring{W}+\mathcal{O}((1+b)(1+p)\varepsilon)\big{]}+2\alpha p\int\!\!\Sigma^{1-bp}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathring{Z}_{xt}\] \[+p\int\!\!\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}[-\mathring{W}+\mathcal{O}(\varepsilon)]+p\int\!\!\Sigma^{-bp}\eta_{x}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathcal{O}(\varepsilon)\big{[}\mathring{W}+\mathcal{O}(1)+\mathcal{O}(k_{0}^{\prime\prime}\!\circ\!\phi^{-1}\!\circ\!\eta)\big{]}\] \[=\int\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}\big{[}\big{(}\tfrac{1+\alpha}{2}-p\big{)}\mathring{W}+\mathcal{O}((1+p)(1+b)\varepsilon)\big{]}\] \[+2\alpha p\int\!\!\Sigma^{1-bp}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathring{Z}_{xt}\] \[+p\int\!\!\Sigma^{-bp}\eta_{x}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathcal{O}(\varepsilon)\big{[}\mathring{W}+\mathcal{O}(1)+\mathcal{O}(k_{0}^{\prime\prime}\!\circ\!\phi^{-1}\!\circ\!\eta)\big{]}.\]
Since
\[\Sigma^{1-bp}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1} \mathring{Z}_{xt} =\tfrac{1}{p}\Sigma^{1-bp}\partial_{x}\big{(}|\mathring{Z}_{t}|^ {p}\big{)}\] \[=\tfrac{1}{p}\partial_{x}\big{(}\Sigma^{1-bp}|\mathring{Z}_{t}|^ {p}\big{)}-\tfrac{1-bp}{p}\Sigma^{-bp}\Sigma_{x}|\mathring{Z}_{t}|^{p}\] \[=\tfrac{1}{p}\partial_{x}\big{(}\Sigma^{1-bp}|\mathring{Z}_{t}|^ {p}\big{)}-\tfrac{1-bp}{2p}\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}( \mathring{W}+\mathcal{O}(\varepsilon)),\]
it follows that
\[\tfrac{1}{p}\dot{E}_{p} =\int\!\!\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})\mathring{W}+\mathcal{O}((1+b)\varepsilon)\big{]}\] \[+\int\!\!\Sigma^{-bp}\eta_{x}\mathrm{sgn}\,(\mathring{Z}_{t})|\mathring{Z}_{t}|^{p-1}\mathcal{O}(\varepsilon)\big{[}\mathring{W}+\mathcal{O}(1)+\mathcal{O}(k_{0}^{\prime\prime}\!\circ\!\phi^{-1}\!\circ\!\eta)\big{]}\] \[=\int\!\!\Sigma^{-bp}\eta_{x}\mathring{W}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]}\] \[+\mathcal{O}((1+b)\varepsilon)E_{p}+\int\!\!\Sigma^{-bp}\eta_{x}|\mathring{Z}_{t}|^{p-1}\mathcal{O}(\varepsilon k_{0}^{\prime\prime}\!\circ\!\phi^{-1}\!\circ\!\eta)\] \[\leq\int\!\!\Sigma^{-bp}\eta_{x}\mathring{W}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]}\] \[+\mathcal{O}((1+b)\varepsilon)E_{p}+\tfrac{p-1}{p}\mathcal{O}(\varepsilon)E_{p}+\tfrac{\mathcal{O}(\varepsilon)}{p}\int\Sigma^{-bp}\eta_{x}|\mathcal{O}(k_{0}^{\prime\prime}\!\circ\!\phi^{-1}\!\circ\!\eta)|^{p}\] \[=\int\!\!\Sigma^{-bp}\eta_{x}\mathring{W}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]}\] \[+\mathcal{O}((1+b)\varepsilon)E_{p}+\mathcal{O}\big{(}\frac{\varepsilon}{p}\|\Sigma^{-b}\|_{L_{x}^{\infty}}^{p}\|k_{0}^{\prime\prime}\|_{L_{x}^{p}}^{p}\big{)}.\]
It's easy to check that
\[(\alpha b-2+\tfrac{3-\alpha}{2p})|\mathring{Z}_{t}|^{p}-\tfrac{1}{p}\varepsilon^{p}|\mathcal{O}(1)|^{p} \leq(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\] \[\leq(\alpha b-\tfrac{1+\alpha}{2p})|\mathring{Z}_{t}|^{p}+\tfrac{1}{p}\varepsilon^{p}|\mathcal{O}(1)|^{p},\]
so
\[\eta_{x}\mathring{W}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]}\] \[=\eta_{x}(\mathring{W})_{+}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]}\] \[\qquad-\eta_{x}(\mathring{W})_{-}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]}\] \[\leq\eta_{x}(\mathring{W})_{+}\big{[}(\alpha b-\tfrac{1+\alpha}{2p})|\mathring{Z}_{t}|^{p}+\tfrac{1}{p}\varepsilon^{p}|\mathcal{O}(1)|^{p}\big{]}\] \[\qquad-\eta_{x}(\mathring{W})_{-}\big{[}(\alpha b-2+\tfrac{3-\alpha}{2p})|\mathring{Z}_{t}|^{p}-\tfrac{1}{p}\varepsilon^{p}|\mathcal{O}(1)|^{p}\big{]}\] \[=\big{[}(\alpha b-\tfrac{1+\alpha}{2p})\mathbf{1}_{\{\mathring{W}>0\}}+(\alpha b-2+\tfrac{3-\alpha}{2p})\mathbf{1}_{\{\mathring{W}<0\}}\big{]}|\mathring{Z}_{t}|^{p}\eta_{x}\mathring{W}\] \[\qquad+\big{[}(\alpha b-\tfrac{1+\alpha}{2p})\mathbf{1}_{\{\mathring{W}>0\}}+(\alpha b-2+\tfrac{3-\alpha}{2p})\mathbf{1}_{\{\mathring{W}<0\}}\big{]}\tfrac{1}{p}\varepsilon^{p}|\mathcal{O}(1)|^{p}\eta_{x}|\mathring{W}|.\]
Now choose \(b\) to be \(b=\frac{2}{\alpha}+\frac{1}{2}\). The above inequality becomes
\[\eta_{x}\mathring{W}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]} \leq\big{[}\tfrac{4p-1+(p-1)\alpha}{2p}\mathbf{1}_{\{\mathring{W}>0\}}+\tfrac{3+(p-1)\alpha}{2p}\mathbf{1}_{\{\mathring{W}<0\}}\big{]}|\mathring{Z}_{t}|^{p}\eta_{x}\mathring{W}\] \[\quad+\tfrac{4p-1+(p-1)\alpha}{2p^{2}}\varepsilon^{p}|\mathcal{O}(1)|^{p}\eta_{x}|\mathring{W}|.\]
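The two constants appearing here are simply the evaluations of the previous brackets at \(b=\frac{2}{\alpha}+\frac{1}{2}\):

```latex
\alpha b-\tfrac{1+\alpha}{2p}=2+\tfrac{\alpha}{2}-\tfrac{1+\alpha}{2p}=\tfrac{4p-1+(p-1)\alpha}{2p},
\qquad
\alpha b-2+\tfrac{3-\alpha}{2p}=\tfrac{\alpha}{2}+\tfrac{3-\alpha}{2p}=\tfrac{3+(p-1)\alpha}{2p}.
```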
Using (5.6) and the fact that \(\eta_{x}\mathring{W}=\mathcal{O}(1)\), we get
\[\eta_{x}\mathring{W}\big{[}(\alpha b-1+\tfrac{1-\alpha}{2p})|\mathring{Z}_{t}|^{p}+\mathcal{O}(\varepsilon)|\mathring{Z}_{t}|^{p-1}\big{]} \leq-\tfrac{3+(p-1)\alpha}{4p}|\mathring{Z}_{t}|^{p}\] \[\quad+\tfrac{4(4p-1+(p-1)\alpha)}{2p}\eta_{x}|\mathring{Z}_{t}|^{p}+\tfrac{4p-1+(p-1)\alpha}{2p^{2}}\varepsilon^{p}|\mathcal{O}(1)|^{p}|\mathcal{O}(1)|.\]
Since \(b=\frac{2}{\alpha}+\frac{1}{2}=\mathcal{O}(1)\), we have \(\Sigma^{-b}=\mathcal{O}(1)\). Using this last inequality, we now have
\[\dot{E}_{p}+\tfrac{3+(p-1)\alpha}{4}\int\Sigma^{-bp}|\mathring{Z}_{t}|^{p}\] \[\leq\big{[}\tfrac{4(4p-1+(p-1)\alpha)}{2}+\mathcal{O}(\varepsilon)\big{]}E_{p}+\big{[}\tfrac{4p-1+(p-1)\alpha}{2p}+\varepsilon+\varepsilon\|k_{0}^{\prime\prime}\|_{L_{x}^{p}}^{p}\big{]}\varepsilon^{p}|\mathcal{O}(1)|^{p}|\mathcal{O}(1)|\]
\[\|\mathring{Z}_{x}\|_{L^{\infty}_{x}}+\|\mathring{Z}_{t}\|_{L^{\infty}_{x}}\lesssim\|z^{\prime\prime}_{0}\|_{L^{\infty}_{x}}+\|k^{\prime\prime}_{0}\|_{L^{\infty}_{x}}+\varepsilon. \tag{7.7}\]
## 8. Estimates along the 3-characteristic
Let \(\beta\geq 1\) and \(1<p\leq\infty\), let \(\overline{w}_{0}\) be a function satisfying the hypotheses described in SS 3.2, and let \(p^{\prime}\) be the Holder conjugate of \(p\), i.e. \(\frac{1}{p^{\prime}}+\frac{1}{p}=1\), with \(p^{\prime}=1\) when \(p=\infty\). Suppose that
* \(\|w_{0}-(1+\overline{w}_{0})\|_{C^{1}}<\varepsilon\),
* \([w^{\prime}_{0}]_{C^{0,\frac{1}{p^{\prime}}}}<2\),
* \(w_{0}\in W^{2,q}(\mathbb{T})\) for some \(1<q\leq\infty\),
* \(\|z_{0}\|_{W^{2,p}}<\varepsilon\),
* \(\|k_{0}-\fint_{\mathbb{T}}k_{0}\|_{W^{2,p}}<\varepsilon\).
Our energy estimates (7.6), (7.7) from the previous section let us conclude that
\[[\mathring{Z}]_{C^{0,\frac{1}{p^{\prime}}}}\leq\|\mathring{Z}_{x}\|_{L^{p}_{x}}\lesssim\varepsilon. \tag{8.1}\]
Recall that
\[\eta_{x}\mathring{W} =w^{\prime}_{0}+(e^{\frac{1}{4\gamma}(K-k_{0})}-1)w^{\prime}_{0}-\tfrac{e^{\frac{1}{4\gamma}(K-k_{0})}}{2\gamma}\sigma_{0}k^{\prime}_{0}+\tfrac{\alpha}{4\gamma}e^{\frac{1}{4\gamma}K}\int_{0}^{t}e^{-\tfrac{1}{4\gamma}K}\Sigma K_{x}\mathring{Z}\;ds,\] \[K_{x} =\Sigma^{\frac{1}{\alpha}}\eta_{x}(\tfrac{k^{\prime}_{0}}{\sigma_{0}^{\frac{1}{\alpha}}}\circ\phi^{-1}\circ\eta),\] \[\eta_{x} =1+\tfrac{1+\alpha}{2}\int_{0}^{t}\eta_{x}\mathring{W}\;ds+\tfrac{1-\alpha}{2}\int_{0}^{t}\eta_{x}\mathring{Z}\;ds+\tfrac{\alpha}{2\gamma}\int_{0}^{t}\Sigma K_{x}\;ds.\]
Using these equations and (8.1), it is immediate that
\[[\eta_{x}\mathring{W}-w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}} \lesssim\varepsilon[w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}}+\varepsilon+\varepsilon(\sup_{[0,t]}[K_{x}]_{C^{0,\frac{1}{p^{\prime}}}}),\] \[[K_{x}]_{C^{0,\frac{1}{p^{\prime}}}} \lesssim\varepsilon+\varepsilon[\eta_{x}]_{C^{0,\frac{1}{p^{\prime}}}}+[\tfrac{k_{0}^{\prime}}{\sigma_{0}^{\frac{1}{\alpha}}}\circ\phi^{-1}\circ\eta]_{C^{0,\frac{1}{p^{\prime}}}},\] \[[\eta_{x}]_{C^{0,\frac{1}{p^{\prime}}}} \leq t\sup_{[0,t]}\big{(}\tfrac{1+\alpha}{2}[\eta_{x}\mathring{W}-w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}}+\tfrac{1+\alpha}{2}[w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}}+\mathcal{O}(\varepsilon)[\eta_{x}]_{C^{0,\frac{1}{p^{\prime}}}}+\mathcal{O}(\varepsilon)+[K_{x}]_{C^{0,\frac{1}{p^{\prime}}}}\big{)}.\]
We know that
\[[\tfrac{k_{0}^{\prime}}{\sigma_{0}^{\frac{1}{\alpha}}}\circ\phi^{-1}\circ\eta]_{C^{0,\frac{1}{p^{\prime}}}}\leq\big{\|}\tfrac{\eta_{x}}{\phi_{x}}(\sigma_{0}^{-1/\alpha}k_{0}^{\prime\prime}-\tfrac{1}{\alpha}\sigma_{0}^{-1-1/\alpha}\sigma_{0}^{\prime}k_{0}^{\prime})\circ\phi^{-1}\circ\eta\big{\|}_{L_{x}^{p}}\lesssim\varepsilon, \tag{8.2}\]
so, using the fact that \(T_{*}=\tfrac{2}{1+\alpha}+\mathcal{O}(\varepsilon)\), a simple bootstrap argument lets us conclude that
\[[\eta_{x}\mathring{W}-w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}} =\mathcal{O}(\varepsilon(1+[w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}})), \tag{8.3}\] \[[K_{x}]_{C^{0,\frac{1}{p^{\prime}}}} =\mathcal{O}(\varepsilon), \tag{8.4}\] \[[\eta_{x}]_{C^{0,\frac{1}{p^{\prime}}}} \leq(1+\mathcal{O}(\varepsilon))[w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}}+\mathcal{O}(\varepsilon). \tag{8.5}\]
Since \(W_{x}=\eta_{x}\mathring{W}+\tfrac{1}{2\gamma}\Sigma K_{x}\) and \(Z_{x}=\eta_{x}\mathring{Z}-\tfrac{1}{2\gamma}\Sigma K_{x}\), we conclude that
\[[W_{x}-w_{0}^{\prime}]_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon) \qquad\text{ and }\qquad[Z_{x}]_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon).\]
We know that
\[\mathring{K}=\Sigma^{\frac{1}{\alpha}}(\tfrac{k_{0}^{\prime}}{\sigma_{0}^{\frac{1}{\alpha}}}\circ\phi^{-1}\circ\eta),\]
so it follows from (8.2) that
\[[\mathring{K}]_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon). \tag{8.6}\]
Using this along with (6.1) gives us
\[[K_{t}]_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon). \tag{8.7}\]
Using (8.1) and (8.7) in (6.2) and (6.3) gives us
\[[Z_{t}]_{C^{0,\frac{1}{p^{\prime}}}} =\mathcal{O}(\varepsilon), \tag{8.8}\] \[[\Sigma_{t}]_{C^{0,\frac{1}{p^{\prime}}}} =\mathcal{O}(\varepsilon). \tag{8.9}\]
Since \(W=2\Sigma+Z\), it follows that
\[[W_{t}]_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon). \tag{8.10}\]
Putting it all together, we have
\[\|W-w_{0}\|_{C^{1,\frac{1}{p^{\prime}}}},\|Z\|_{C^{1,\frac{1}{p^{\prime}}}},\|K\|_{C^{1,\frac{1}{p^{\prime}}}},\|W_{t}\|_{C^{0,\frac{1}{p^{\prime}}}},\|Z_{t}\|_{C^{0,\frac{1}{p^{\prime}}}},\|K_{t}\|_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon)\,,\] \[\|\Sigma_{t}\|_{C^{0,\frac{1}{p^{\prime}}}},\|\mathring{Z}\|_{C^{0,\frac{1}{p^{\prime}}}},\|\mathring{K}\|_{C^{0,\frac{1}{p^{\prime}}}}=\mathcal{O}(\varepsilon). \tag{8.11}\]
### Remarks
Note that while the hypotheses of Theorem 3.1 preclude \(w_{0}\) from being \(C^{2}\), the hypotheses stated at the beginnings of sections SS 5-8 allow for \(w_{0},z_{0},\) and \(k_{0}\) to all be \(C^{\infty}\). One can check that if we start with initial data \((w_{0},z_{0},k_{0})\) satisfying the hypotheses of SS 5-8, we can mollify \(w_{0},z_{0},\) and \(k_{0}\) to produce a sequence of smooth functions satisfying the hypotheses of SS 5-8. In particular, the estimates (8.11) will hold uniformly for this sequence, and will provide us with compactness which we can use to pass the Holder estimates to the limit. The fact that we can do this allows us to circumvent any concerns that the function \(E_{p}(t)\) used in SS 7 wasn't a-priori known to be differentiable or even finite.
We should also note that the hypothesis that \(\beta\geq 1\) wasn't utilized anywhere in SS 5-8, and the results of these sections still hold for other choices of \(\beta>0\), albeit with the possibility of \(\beta\)-dependence being
introduced into some of the implicit constants and the possibility of the range of sufficiently small \(\varepsilon>0\) being \(\beta\)-dependent. All implicit constants should be uniform in \(\beta\geq r\) for any \(r>0\), but without a lower bound on \(\beta\) we may not have uniform implicit constants.
## 9. Proving the theorem
Let \(\beta\geq 1\), and let \(p\in(1,\infty]\) be the Holder conjugate of \(\beta\). Suppose
* \(\|w_{0}-(1+\overline{w}_{0})\|_{C^{1,\frac{1}{\beta}}}<\varepsilon\),
* \(\|z_{0}\|_{W^{2,p}}<\varepsilon\),
* \(\|k_{0}-\int_{\mathbb{T}}k_{0}\|_{W^{2,p}}<\varepsilon\),
* \(w_{0}\in W^{2,q}\) for some \(1<q<\infty\).
Then we know that
\[[W_{x}-\overline{w}_{0}^{\prime}]_{C^{0,\frac{1}{\beta}}}=\mathcal{O}( \varepsilon),\]
so for \(|x|\leq 1\) we have
\[W_{x} =\overline{w}_{0}^{\prime}+(W_{x}-\overline{w}_{0}^{\prime})\] \[=-1+|x|^{\frac{1}{\beta}}+(W_{x}(0,t)+1)+\mathcal{O}(\varepsilon )|x|^{\frac{1}{\beta}}\] \[=W_{x}(0,t)+(1+\mathcal{O}(\varepsilon))|x|^{\frac{1}{\beta}}. \tag{9.1}\]
Therefore, for \(|x|\leq 1\) our Holder estimates from the previous section give us
\[\eta_{x}(x,t) =1+\int_{0}^{t}\partial_{x}(\lambda_{3}\!\circ\!\eta)(x,s)\;ds\] \[=1+\tfrac{1+\alpha}{2}\int_{0}^{t}W_{x}(x,s)\;ds+\frac{1-\alpha} {2}\int_{0}^{t}Z_{x}(x,s)\;ds\] \[=\eta_{x}(0,t)+\tfrac{1+\alpha}{2}\int_{0}^{t}(W_{x}(x,s)-W_{x}( 0,s))\;ds+\tfrac{1-\alpha}{2}\int_{0}^{t}(Z_{x}(x,s)-Z_{x}(0,s))\;ds\] \[=\eta_{x}(0,t)+(\tfrac{1+\alpha}{2}+\mathcal{O}(\varepsilon))|x| ^{\frac{1}{\beta}}t. \tag{9.2}\]
So \(x=0\) is the unique minimizer of \(\eta_{x}(\cdot,t)\) over \(|x|\leq 1\) for all \(t>0\). It now follows from (5.3) and (5.4) that \(\eta_{x}(\cdot,T_{*})\) has a unique zero at \(x=0\).
For the remainder of this section, let us restrict our attention to labels \(x\) with \(|x|\leq 1\). Let \(y=\eta(x,T_{*})\). We conclude from (9.2) and (5.2) that
\[y=\eta(0,T_{*})+\int_{0}^{x}\eta_{x}(x^{\prime},T_{*})\;dx^{\prime} =:y_{*}+\int_{0}^{x}\eta_{x}(x^{\prime},T_{*})-\eta_{x}(0,T_{*}) \;dx^{\prime}\] \[=y_{*}+\int_{0}^{x}(1+\mathcal{O}(\varepsilon))|x^{\prime}|^{ \frac{1}{\beta}}\;dx^{\prime}\] \[=y_{*}+(\tfrac{\beta}{\beta+1}+\mathcal{O}(\varepsilon))\mathrm{ sgn}\;(x)|x|^{1+\frac{1}{\beta}}.\]
It follows from this equation that \(\mathrm{sgn}\,(y-y_{*})=\mathrm{sgn}\,(x)\), and
\[x=\mathrm{sgn}\,(y-y_{*})\big{(}(1+\tfrac{1}{\beta})^{\frac{\beta}{\beta+1}}+ \mathcal{O}(\varepsilon)\big{)}|y-y_{*}|^{\frac{\beta}{\beta+1}}. \tag{9.3}\]
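Indeed, solving the preceding relation for \(x\) at leading order gives (9.3):

```latex
y-y_*=\big(\tfrac{\beta}{\beta+1}+\mathcal O(\varepsilon)\big)\,\mathrm{sgn}(x)\,|x|^{1+\frac{1}{\beta}}
\;\Longrightarrow\;
|x|=\Big(\tfrac{\beta+1}{\beta}\Big)^{\frac{\beta}{\beta+1}}\big(1+\mathcal O(\varepsilon)\big)|y-y_*|^{\frac{\beta}{\beta+1}},
\qquad
\Big(\tfrac{\beta+1}{\beta}\Big)^{\frac{\beta}{\beta+1}}=\big(1+\tfrac{1}{\beta}\big)^{\frac{\beta}{\beta+1}}.
```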
Therefore, (9.1) gives us
\[w(y,T_{*}) =W(x,T_{*})\] \[=W(0,T_{*})+W_{x}(0,T_{*})x+\int_{0}^{x}W_{x}(x^{\prime},T_{*})-W_ {x}(0,T_{*})\;dx^{\prime}\]
\[=w(y_{*},T_{*})+(-1+\mathcal{O}(\varepsilon))x+\int_{0}^{x}(1+ \mathcal{O}(\varepsilon))|x^{\prime}|^{\frac{1}{\beta}}\;dx^{\prime}\] \[=w(y_{*},T_{*})+(-1+\mathcal{O}(\varepsilon))x+(1+\mathcal{O}( \varepsilon))\tfrac{\beta}{\beta+1}\mathrm{sgn}\,(x)|x|^{1+\frac{1}{\beta}}\] \[=w(y_{*},T_{*})-\big{(}(1+\tfrac{1}{\beta})^{\frac{\beta}{\beta+1 }}+\mathcal{O}(\varepsilon)\big{)}\mathrm{sgn}\,(y-y_{*})|y-y_{*}|^{\frac{ \beta}{\beta+1}}+(1+\mathcal{O}(\varepsilon))(y-y_{*}).\]
We have just proven that (3.4) holds for all \(y\in\mathbb{T}\) such that \(|x|\leq 1\).
Let's now estimate \(y_{*}\) and the radius of the neighborhood around \(y_{*}\) for which \(|x|\leq 1\). (5.2) gives us
\[y_{*}=\eta(0,T_{*}) =\tfrac{1+\alpha}{2}\int_{0}^{T_{*}}W(0,t)\;dt+\tfrac{1-\alpha}{ 2}\int_{0}^{T_{*}}Z(0,t)\;dt\] \[=\tfrac{1+\alpha}{2}\int_{0}^{T_{*}}w_{0}(0)+\mathcal{O}( \varepsilon)\;dt+\mathcal{O}(\varepsilon)\] \[=\tfrac{1+\alpha}{2}T_{*}(1+\mathcal{O}(\varepsilon))\] \[=1+\mathcal{O}(\varepsilon).\]
We also compute that
\[\eta(1,T_{*})-y_{*} =\int_{0}^{1}\eta_{x}(x,T_{*})\;dx\] \[=\int_{0}^{1}1+\tfrac{1+\alpha}{2}T_{*}w_{0}^{\prime}(x)+\mathcal{O}(\varepsilon)\;dx\quad=1+(1+\mathcal{O}(\varepsilon))\int_{0}^{1}-1+|x|^{\frac{1}{\beta}}\;dx+\mathcal{O}(\varepsilon)\] \[=\tfrac{\beta}{\beta+1}+\mathcal{O}(\varepsilon).\]
Therefore, the neighborhood \(\{y\in\mathbb{T}:x\in[0,1]\}\) corresponds to \(\{y>y_{*}:|y-y_{*}|\leq r\}\) for some \(r=\tfrac{\beta}{\beta+1}+\mathcal{O}(\varepsilon)\). An analogous computation proves an analogous result for labels \(x\in[-1,0]\).
Let us now get the Holder regularity estimates for \(\partial_{y}z(\cdot,t)\) and \(\partial_{y}k(\cdot,t)\). Let \(y=\eta(x,t)\). Since \(\eta_{x}\leq 3+\alpha\) everywhere (see Lemma 5.1), we know that
\[|y_{2}-y_{1}|=\bigg{|}\int_{x_{1}}^{x_{2}}\eta_{x}(x,t)\;dx\bigg{|}\leq(3+ \alpha)|x_{2}-x_{1}| \tag{9.4}\]
for all \(x_{1},x_{2}\in\mathbb{T},t\in[0,T_{*}]\). We also know that \(\eta_{x}\geq\tfrac{1}{2}+\mathcal{O}(\varepsilon)\) for \(0\leq t\leq\tfrac{1}{1+\alpha}\) and that \(\eta_{x}>\tfrac{1}{2}\) for \(1\leq|x|\leq\pi\). For \(|x|\leq 1\) and \(\tfrac{1}{1+\alpha}\leq t\leq T_{*}\) (9.2) gives us
\[\eta_{x}\geq(\tfrac{1}{2}+\mathcal{O}(\varepsilon))|x|^{\frac{1}{\beta}}\]
for all \(|x|\leq 1,\tfrac{1}{1+\alpha}\leq t\leq T_{*}\). It follows that
\[\tfrac{1}{\eta_{x}}\leq\begin{cases}(2+\mathcal{O}(\varepsilon))|x|^{-\frac{ 1}{\beta}}&|x|\leq 1\text{ and }\tfrac{1}{1+\alpha}\leq t\leq T_{*}\\ 2+\mathcal{O}(\varepsilon)&\text{otherwise}\end{cases}.\]
If we define \(y_{0}=\eta(0,t)\), then for all \(\tfrac{1}{1+\alpha}\leq t\leq T_{*},\eta(-1,t)\leq y_{1}\leq y_{2}\leq\eta(1,t)\) we have
\[x_{2}-x_{1} =\int_{y_{1}}^{y_{2}}\frac{1}{\eta_{x}(x,t)}dy\] \[\leq(2+\mathcal{O}(\varepsilon))\int_{y_{1}}^{y_{2}}|x|^{-\frac{1 }{\beta}}\;dy\] \[\leq(2+\mathcal{O}(\varepsilon))(3+\alpha)^{\frac{1}{\beta}}\int_ {y_{1}}^{y_{2}}|y-y_{0}|^{-\frac{1}{\beta}}\;dy\]
\[\leq(1+\tfrac{1}{\beta})(2+\mathcal{O}(\varepsilon))(3+\alpha)^{\frac{1}{\beta}}\left\{\begin{array}{ll}|y_{2}-y_{0}|^{\frac{\beta}{\beta+1}}+|y_{0}-y_{1}|^{\frac{\beta}{\beta+1}}&\quad y_{1}<y_{0}\leq y_{2}\\ \left||y_{2}-y_{0}|^{\frac{\beta}{\beta+1}}-|y_{1}-y_{0}|^{\frac{\beta}{\beta+1}}\right|&\quad\text{otherwise}\end{array}\right.\] \[\leq 2^{\frac{1}{\beta+1}}(1+\tfrac{1}{\beta})(2+\mathcal{O}(\varepsilon))(3+\alpha)^{\frac{1}{\beta}}|y_{2}-y_{1}|^{\frac{\beta}{\beta+1}}.\]
It is straightforward from here to verify that
\[|x_{2}-x_{1}|\lesssim|y_{2}-y_{1}|^{\frac{\beta}{\beta+1}}\qquad\forall\,x_{1 },x_{2}\in\mathbb{T},t\in[0,T_{*}]. \tag{9.5}\]
Since
\[\partial_{y}z(y,t) =\mathring{Z}(x,t)-\tfrac{1}{2\gamma}\Sigma(x,t)\mathring{K}(x,t),\] \[\partial_{y}k(y,t) =\mathring{K}(x,t),\]
it now follows from (8.11) and (9.5) that
\[[\partial_{y}z(\cdot,t)]_{C^{0,\frac{1}{\beta+1}}}\lesssim\varepsilon,\qquad [\partial_{y}k(\cdot,t)]_{C^{0,\frac{1}{\beta+1}}}\lesssim\varepsilon.\]
This gives us the estimates (3.5) from Theorem 3.1.
**Acknowledgments.** S.S. was supported by NSF grant DMS-2007606 and the Department of Energy Advanced Simulation and Computing (ASC) Program. I.N. and V.V. were supported by the NSF CAREER grant DMS-1911413.
|
2306.09164 | Network Architecture Design toward Convergence of Mobile Applications
and Networks | With the quick proliferation of extended reality (XR) services, the mobile
communications networks are faced with gigantic challenges to meet the
diversified and challenging service requirements. A tight coordination or even
convergence of applications and mobile networks is highly motivated. In this
paper, a multi-domain (e.g. application layer, transport layer, the core
network, radio access network, user equipment) coordination scheme is first
proposed, which facilitates a tight coordination between applications and
networks based on the current 5G networks. Toward the convergence of
applications and networks, a network architectures with cross-domain joint
processing capability is further proposed for 6G mobile communications and
beyond. Both designs are able to provide more accurate information of the
quality of experience (QoE) and quality of service (QoS), thus paving the path
for the joint optimization of applications and networks. The benefits of the
QoE assisted scheduling are further investigated via simulations. A new
QoE-oriented fairness metric is further proposed, which is capable of ensuring
better fairness when different services are scheduled. Future research
directions and their standardization impacts are also identified. Toward
optimized end-to-end service provision, the paradigm shift from loosely coupled
to converged design of applications and wireless communication networks is
indispensable. | Shuangfeng Han, Zhiming Liu, Tao Sun, Xiaoyun Wang | 2023-06-15T14:42:24Z | http://arxiv.org/abs/2306.09164v1 | # Network Architecture Design toward Convergence of Mobile Applications and Networks
###### Abstract
**Abstract: With the quick proliferation of extended reality (XR) services, the mobile communications networks are faced with gigantic challenges to meet the diversified and challenging service requirements. A tight coordination or even convergence of applications and mobile networks is highly motivated. In this paper, a multi-domain (e.g. application layer, transport layer, the core network, radio access network, user equipment) coordination scheme is first proposed, which facilitates a tight coordination between applications and networks based on the current 5G networks. Toward the convergence of applications and networks, a network architectures with cross-domain joint processing capability is further proposed for 6G mobile communications and beyond. Both designs are able to provide more accurate information of the quality of experience (QoE) and quality of service (QoS), thus paving the path for the joint optimization of applications and networks. The benefits of the QoE assisted scheduling are further investigated via simulations. A new QoE-oriented fairness metric is further proposed, which is capable of ensuring better fairness when different services are scheduled. Future research directions and their standardization impacts are also identified. Toward optimized end-to-end service provision, the paradigm shift from loosely coupled to converged design of applications and wireless communication networks is indispensable.**
_Index terms_-Quality of experience, quality of service, network architecture, fairness index, scheduler, 6G
## I Introduction
One of the fundamental motivations for the design and operation of the wireless communication networks is to ensure the end-to-end mobile service requirements are satisfied efficiently. However, this is actually a challenging task. Firstly, there has long been a loose coordination between the over-the-top (OTT) companies and the mobile operators. The quality of service (QoS) [1] and quality of experience (QoE) [2,3] of the mobile services, which are of paramount significance to the OTTs, are in reality controlled and managed by the operators. The OTTs are unwilling to pay additional money to the operators for enhanced guarantee of the QoS and QoE. Rather, they would like to see updated versions of specifications in the standard bodies to make sure their services will be better supported in the wireless networks. For information security and privacy purpose, the OTTs may want to cipher their services and would not allow more exposure to the operators. However, without enough knowledge on the services characteristics (e.g. service priority, data packet profile, urgency, etc.), it is challenging for the operators to make a proper prediction and guarantee of QoE/QoS with optimized resource allocation, thus leading to a waste of network resources. Secondly, the infrastructure vendors generally have their private implementations of resource control and user scheduling algorithms, to which the mobile operators generally don't have sufficient control. This means even if the mobile operators have accurate mastery of users' QoE/QoS requirements, the eventual fulfillment of these requirements may be beyond control of the operators, especially when the radio resources are not sufficient enough during traffic peak period.
Despite the above challenges, the wireless communication industry has been working toward the performance improvement of extended reality (XR) and multimedia services for many years. For example, Service and System Aspect (SA) working group on "Services" (SA1) in 3GPP has studied the service requirements and key performance indicators (KPIs) of low latency services, interactive services, and tactile communications in 5G [4]. SA2 ("System Architecture and Services") has standardized new 5G quality of service identifiers (5QI) to support interactive services including XR, featured by low latency and high data rate [5]. SA4 ("Multimedia Codecs, Systems and Services") has been documenting relevant traffic characteristics and providing a survey of XR applications. Since the radio access network (RAN) side functions are extremely important to satisfy the end-to-end mobile applications, 3GPP has studied the traffic models, simulation and evaluation methodologies in Release 17 in RAN1 ("Physical layer"), and management and optimization of QoE in RAN3 ("Overall UTRAN architecture"). In Release 18, the study is extended to understanding key features (e.g. periodicity, jitter, delay) of XR services at the base station (BS), based on which the QoS parameters and resource allocation will be optimized.
The above studies have been focused on better understanding and characterizing the mobile applications, and defining more accurate QoS parameters. However, these works have been carried out under the framework of 5G network architecture, to which large modification or evolution are not allowed in the 5G era. There exists an inherent gap between the mobile applications and the networks, especially
when the applications need to adjust their data format to better match the fluctuations of wireless channels, because the applications generally don't have access to the related RAN information in time.
In order to resolve this contradiction, RAN capability exposure to the application layer has been investigated in the Internet Engineering Task Force (IETF) Application-Layer Traffic Optimization (ALTO) and Network Exposure Function (NEF), but with limited generality and flexibility. Mobile and Wireless Information Exposure (MoWIE), a framework to provide systematic information from networks to applications, was recently proposed [6]. However, there still seems a clear lack of tight coordination between the applications and networks. In particular, the applications may not know how its data would be eventually scheduled at the BS, and thus are unable to predict whether the QoE/QoS is satisfactory or not.
Toward a tight coordination or even convergence of applications and mobile networks in future 6G and beyond, a paradigm shift in network architecture design is highly motivated. The structure and contributions of the paper are as follows. The design methodologies are discussed in Section II. Then, a multi-domain coordination scheme is proposed, where the predicted QoE/QoS at the RAN is negotiated with the core network (CN) and the application layer. A network architecture with a cross-domain coordination center (CDCC) is further proposed, which collects and jointly processes the data from all the related domains and outputs control information to all the domains. In Section III, the benefits of QoE-assisted scheduling are investigated, where better throughput and fairness are demonstrated via simulations. In Section IV, some future research directions and their standardization impacts are identified. The paper is concluded in Section V.
## II Network Architecture Design toward Convergence of Applications and Networks
### _Design Methodologies_
The basic operation of wireless service provision is that the source-encoded service data from application layer will first go through the transport layer, then arrive at the CN, where the QoS parameters will be configured. Based on these QoS requirements, the real-time scheduler in the RAN will allocate proper wireless resources and configure suitable air interface technologies to transmit and receive the data packets. The CN will provide information to system operation, administration and management (OAM), which will be responsible for radio resource allocation between different BS or each network slice accordingly.
Essentially the QoS is an end-to-end service requirement, whose definition and provision should involve the application layer, transport layer, RAN, CN, mobile users, and even the OAM. However, this is not supported by the current network design methodology. Toward the end-to-end mobile service enhancements and a better tradeoff between fairness and efficiency, the interaction between the application layer and the network need be strengthened.
One approach to facilitating application and network convergence is conveying the high-dimensional application information to the network, including user experience indicators like QoE, and user profile, etc. These information traditionally are not available to the wireless networks and may have direct and important impacts on the system optimization. QoE is an important representative of the high-dimensional information, which can more intuitively and comprehensively evaluate the end-to-end service provision. Either predicted in the network or fed back from UE, QoE is able to facilitate a closed-loop wireless service provision. Additionally, if more accurate requirements of the services, especially those composed of multiple sub-services with distinguished features can be known to the CN/RAN, more efficient and reasonable resource allocation can be achieved.
Another approach is more exposure of the network information to the applications, rather than just the RAN capability. This information may include the QoE/QoS performance predicted by RAN, the (statistical ) buffer status for each UE, the scheduling results, the statistical channel condition, etc. Based on these information, the application layer and the transport layer may more efficiently adjust their schemes.
Ideally, the above two approaches need to be jointly taken into consideration when designing the future network architecture. In the following, we will present two designs toward better convergence of applications and networks.
### _Multi-domain Coordination Scheme_
As the first step forward, a multi-domain coordination scheme is proposed in Fig.1, with the following characteristics:
1) Based on the QoE/QoS requests from the applications, the CN is able to better understand the preferred QoE/QoS of each user or service slice and further determine the QoE/QoS parameters to improve on the fairness and efficiency. These QoE/QoS parameters will be transmitted to the RAN and set as performance targets during the user scheduling and wireless
resource allocation.
Another approach for the RAN to have access to the QoE information is that the user equipment (UE) estimates or predicts the QoE performance in the application layer and then feeds it back to the RAN in case the user experience is not satisfactory. Based on this QoE information, the RAN is able to make a better suggestion to the CN about QoE/QoS.
2) The achievable QoE/QoS of each user guaranteed by the RAN scheduler can be predicted in the RAN, based on the analysis of users' service requirements and the corresponding channel conditions. The achievable QoE/QoS will be fed back to the CN to refine the QoE/QoS parameters. The final QoE/QoS, decided at the CN, will be used at the RAN. Via this closed-loop design, QoE/QoS is not merely dependent on the services but also dependent on RAN capabilities and other users' service requirements and channel conditions. Besides, the RAN can also predict possible transport layer parameter adjustment suggestion and feed it back to the transport layer.
3) In case the wireless network (CN and RAN) cannot satisfy the mobile service requirements, the CN will inform the application server about the QoE/QoS suggestion for each user, and the application server will configure its source coding format (e.g., video definition and frame rate). With AI capabilities at the application server, AI-enabled source coding optimization is also possible. For example, if the field of view can be predicted in a virtual reality system, the source coding can be focused on the predicted area and thus the source coding and transmission can be much simpler [7]. Another example is that the videos captured in a surveillance system can be analyzed by the AI capability and only the analysis results (either pictures or mere text) are transmitted.
Intrinsically, the probability that the RAN fails to satisfy each user's service requirements could be significantly reduced following the design approach in Fig. 1. The involved interactions are multi-dimensional, between CN and RAN, CN and applications, applications and UEs, and even joint operation of CN, RAN, OAM, application, transport, and UEs. This ensures better fairness between users and services, and better efficiency of network resource utilization. Note that for latency-sensitive services the negotiation process needs to be minimized.
### _Network Architecture with CDCC_
The scheme in Fig. 1 serves as a starting point for the network architecture evolution toward better convergence of applications and networks. The drawback of this scheme is that each domain does not have enough real-time information about the other domains, and thus the final decision making is not optimal. Besides, the negotiation between different domains may lead to unsatisfactory latency. A further step forward would be implementing the CDCC in the network, which functions as the mind of the whole network and is responsible for information acquisition from all the domains, data processing, AI model training, inference, and control information generation for each domain. The logical architecture and the interfaces are shown in Fig. 2, and the practical deployment of the CDCC can be in a third-party platform, the CN, or the RAN.
Different from the coordination scheme illustrated in Fig.1, with the joint processing of all the related information in the CDCC, the control information to all the domains can be globally optimized. For example, the service data format will be input to the application layer, based on which the source coding will be re-configured.
Traditionally, the transport layer like Transmission Control Protocol (TCP) cannot distinguish whether a drop is because of congestion or other flaws in the wireless communication. Most of the time, the congestion control mechanism will reduce the sending rate unnecessarily, leading to performance degradation. Enabled by the new architecture, the transport layer parameters (e.g. for the congestion control) will be input to the transport layer, which can more accurately track the radio link quality and improve the transport layer throughput.
The achievable QoE/QoS parameters will be calculated in the CDCC based on the scheduling results (which user is scheduled and how much data is transmitted) from the RAN and other information input from the other domains. This information will be transmitted to the CN and RAN, where it will be utilized to guarantee the QoS flow quality in the CN and the MAC scheduling in the RAN.
The OAM control information can also be input to the OAM, which will adjust its resource allocation strategy accordingly. For example, when the QoE/QoS is still not satisfactory after rounds of MAC scheduling, the OAM may allocate more resources to this cell via load balancing. In the extreme case where even the intelligent load balancing cannot help, the CDCC needs to collaborate with applications like navigation to change the navigation route if the UE is on a vehicle.
The time granularity of parameter configuration in different domains is quite diversified, e.g., the application layer data format is changed on a timescale of seconds, the RAN function configuration and RAN resource management are conducted over one hundred milliseconds, while the user scheduling and resource allocation in the media access control (MAC) layer are per millisecond. Therefore, there are some practical limitations on the data collection from each domain. For lack of space, the design of the joint information processing in the CDCC, which is actually quite complicated, is not discussed in this paper.
### _Deployment of CDCC in the Third Party Platform_
When the CDCC is deployed in a third-party platform, it may be difficult to collect real-time data or information from all the domains, possibly with latency over several hundred microseconds. Therefore, the CDCC is not able to handle real-time inference or time-sensitive control signaling, particularly for the RAN. For fast coordination between services and network, the multi-domain coordination scheme of Fig. 1 may still be necessary in the architecture of Fig. 2. The key benefits of the CDCC lie in its capability of global optimization of non-time-sensitive parameters. For example, based on the characteristics of the mobile applications, the CDCC is able to optimize the cache policy for mobile service contents, the long-term resource allocation policy for each BS, and the long-term QoE/QoS targets of each user.
### _Deployment of CDCC in the CN_
When the CDCC is deployed in the CN, data collection from the application layer, transport layer, and the CN is simpler than in the third-party deployment described above. The initial QoE/QoS suggestion needs to be calculated in the RAN, because it would be challenging to feed back all the necessary RAN information to the CN in real time. The RAN makes a fast prediction about the achievable QoE/QoS of each user based on information like the wireless channel conditions, service load, and buffer size and feeds back the statistical information to the CDCC, which will make the final decision about QoE/QoS.
### _Deployment of CDCC in the RAN_
When the CDCC is deployed in the RAN, it is able to collect all the necessary information from the RAN, like the MAC scheduler results, and is thus able to make globally optimized predictions of the achievable QoE/QoS in a real-time manner.
## III **QoE Information Assisted Scheduling: A Bridge to Application and Network Convergence**
QoS has long been considered the most successful system quality strategy that objectively measures the performance of a video transmission system from the source to the destination. However, QoS does not capture influence factors related to the physical, emotional, and mental constitution of the mobile users, or context information such as their social and economic background and status. Consequently, QoS is unable to accurately reflect the true experience of XR services in mobile communication networks.
Thanks to its advantages, QoE has been widely recognized as a preferred strategy in the context of video and XR transmissions. QoE management schemes have been surveyed in [2], where QoE optimization of multimedia services is implemented in the CN or mobile edge, which only indirectly affects the resource allocation of the RAN. With the architectures proposed in Section II.B, the QoE of each user is jointly determined based on the information input from all the related domains, and its reliability and accuracy are greatly improved. In this section, we assume the CDCC is deployed in the RAN and is therefore capable of inference on QoE and QoS in a real-time or near-real-time manner (millisecond level). We will show the benefits brought by the QoE information assisted MAC scheduling in the RAN.
### _MAC Scheduler Enabled by the New Network Architecture_
Different from traditional MAC schedulers which generally utilize the QoS information (e.g. the latency and packet loss rate requirement) to allocate the wireless resources to each user, the proposed MAC scheduler BCQQ (Buffer, CQI, QoS, and QoE) utilizes the buffer condition, channel quality indicator (CQI), QoS parameters, and QoE information as input, as shown in Fig. 3.
During data collection period, the buffer information, especially its dynamic change, can be collected by the interface between Packet Data Convergence Protocol (PDCP) layer and MAC layer. The collection period is consistent with scheduling period, generally at millisecond level. CQI information is directly collected by the physical layer feedback. QoS requirement information like latency and packet loss rate requirement can be directly configured by the session management function (SMF) in CN, and utilized as input to the CDCC. The QoE information is calculated by the UE in the application layer and fed back to the BS through a new interface between UE and the BS. This QoE information goes directly from the application layer to the RAN and is utilized in the CDCC, which will generate the final QoE and QoS based on all the information input. The final QoE and QoS will serve as the performance target for the MAC scheduling.
Simply put, the basic principle of the QoE assisted scheduler is that for each transmit slot the user with the highest scheduling priority is selected. In calculating the priority of the \(i\)-th user, the following equation is adopted.
Fig. 3: The MAC scheduler enabled by the new network architecture
\[L_{i}(t)=\frac{\textit{Buffer}_{i}(t)}{\textit{Buffersize}_{i}}\cdot\frac{\left|\log\alpha_{i}\right|}{\beta_{i}}\cdot q_{i}(t)\cdot\textit{rate}_{i}(t) \tag{1}\]
In Eq. 1, \(\textit{Buffersize}_{i}\) represents the total buffer space allocated by the BS for each user. \(\textit{Buffer}_{i}(t)\) represents the occupied buffer at time \(t\). When the occupation ratio gets higher, the priority of this user needs to increase. \(\alpha_{i}\) and \(\beta_{i}\) represent the target packet loss rate and delay of the traffic, respectively. The priority increases with a lower \(\alpha_{i}\) and a lower \(\beta_{i}\). \(q_{i}(t)\) is the QoE of the \(i\)-th user, representing the target satisfaction degree of the mobile service requirements. Note that there are various modeling and evaluation methods for QoE; the choice of method is not discussed in this paper. The achievable \(\textit{rate}_{i}(t)\) is calculated based on the CQI feedback and represents the amount of data that the channel can support for the \(i\)-th user at time \(t\). The designed scheduler algorithm has the following features.
1. The algorithm takes into consideration the buffer occupation ratio, which represents the matching degree between the capability of the air interface to send data and the arriving service data volume. A higher buffer occupation ratio leads to a higher priority. Generally the buffer size for each user is limited. In case the buffer occupation ratio is high and the scheduling priority is still comparatively low, the CDCC needs to send a service adjustment request to the application layer and transport layer, which will reduce the service data volume accordingly. This can significantly reduce the likelihood of buffer overflow. Alternatively, the BS can allocate a dynamic buffer size for each user to alleviate the burden on the application and transport layers. The dynamic partition of memory size for data buffering and data processing is also an interesting research topic.
2. The essential QoS parameters like packet loss rate and delay are included in the algorithm, which are designed to satisfy the service requirements in the wireless communication network. The stricter the requirements on delay or packet loss rate, the higher the scheduling priority. However, traditional scheduling policies to meet the requirement of these QoS parameters will not necessarily lead to satisfactory QoE performance. QoE oriented scheduling is fundamentally motivated, where the QoE information serves as an efficient metric to further refine the QoS fulfillment scheduling policy.
3. The buffer occupation ratio, packet loss, service delay, achievable air interface rate, and QoE information are jointly considered. The real-time scheduler output (e.g., how much resource is allocated and how much data is transmitted for each user) will be input to the CDCC, which will make intelligent decisions on the application layer format, transport layer scheme, buffer size configuration, and QoE/QoS target for each user to maximize the end-to-end mobile service quality. This is not possible in the current 5G network architecture. For example, the scheduler can predict the buffer status in a future time window, and this information can be input to the transport layer, which will accordingly adjust its data transmit policy to better track the wireless channel and the buffer status. Another example is the source coding adjustment in the application layer. With the global optimization capability in the CDCC, the source coding of all the users' applications can be jointly configured to match the transport layer capability and the air interface capability, so as to maximize the end-to-end QoE/QoS performance.
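To make the per-slot selection rule concrete, the following sketch computes the priority of Eq. (1) for a set of users and picks the highest-priority one. This is a hypothetical illustration rather than the implementation used for the results below: the `User` container, the field names, and the numerical values are all our own, and the priority expression follows the reconstructed form of Eq. (1).

```python
import math
from dataclasses import dataclass

@dataclass
class User:
    buffer_occupied: float  # Buffer_i(t): data currently queued at the BS for this user
    buffer_size: float      # Buffersize_i: total buffer space allocated to this user
    alpha: float            # target packet loss rate of the traffic
    beta: float             # target delay of the traffic (ms)
    qoe: float              # q_i(t): QoE target-satisfaction degree from the CDCC
    rate: float             # rate_i(t): achievable rate derived from the CQI feedback

def bcqq_priority(u: User) -> float:
    """Priority L_i(t) of Eq. (1): buffer occupancy ratio times the QoS urgency
    term |log(alpha)|/beta, times the QoE and the achievable rate."""
    occupancy = u.buffer_occupied / u.buffer_size
    qos_urgency = abs(math.log(u.alpha)) / u.beta
    return occupancy * qos_urgency * u.qoe * u.rate

def schedule_slot(users):
    """Select the index of the user with the highest priority for this transmit slot."""
    return max(range(len(users)), key=lambda i: bcqq_priority(users[i]))

# Toy example: one FTP download user and one Live HD Video user.
users = [
    User(buffer_occupied=1e6, buffer_size=5e6, alpha=1e-6, beta=300.0, qoe=0.9, rate=50e6),
    User(buffer_occupied=4e6, buffer_size=5e6, alpha=1e-6, beta=150.0, qoe=0.6, rate=80e6),
]
print("scheduled user:", schedule_slot(users))
```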
### _Performance Evaluation_
The following simulation is given to show the performance of the proposed scheduler, with simulation parameters listed in Table I.
Five users (No. 1 to No. 5) are considered. The traffic type of users No. 1-No. 3 is FTP (File Transfer Protocol) download, with a \(10^{-6}\) packet loss rate, an acceptable delay of 300 ms, and a 0.5 Mb average download packet size. The traffic type of the other users (No. 4-No. 5) is Live HD Video, with a \(10^{-6}\) packet loss rate, an acceptable delay of 150 ms, a maximum packet size of 2 Mb, and a random packet arrival distribution.
To show the performance improvement from the above proposed QoE assisted BCQQ scheduling, we choose the classic scheduler MLWDF [1] and LEASCH [8] as references. MLWDF is an optimization of the proportional fairness algorithm, while LEASCH is a pure deep reinforcement learning algorithm for radio resource scheduling in the 5G MAC.
Fig. 4 shows the throughput performance comparison of the three schedulers. We can see that the arriving data volume of Live HD Video far exceeds that of FTP download. The simulation results show that the overall system throughput of all
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**User number** & 5 & **Cell number** & 1 \\ \hline
**Traffic I** & FTP Download & **BS number** & 1 \\ \hline
**Traffic II** & Live HD Video & **Cell radius** & 1 km \\ \hline
**Cell peak rate** & 6 Gbps & **Moving speed** & 3 km/h \\ \hline
**Buffer size** & 5 MB/UE & **Scheduling period** & 1 ms \\ \hline \end{tabular}
\end{table} TABLE I: Simulation Parameters
Fig. 4: Performance in multi-traffic scenario
the five users is 2378 Mbps with BCQQ, 2019 Mbps with LEASCH, and 1796 Mbps with MLWDF. The performance gains of BCQQ over LEASCH and MLWDF are 17.8% and 32.4%, respectively. Compared with LEASCH and MLWDF, the QoE-assisted BCQQ scheduler allocates more scheduling opportunities to Live HD Video users based on the QoE feedback, while ensuring the QoS requirements of the FTP users.
### _QoE Oriented Fairness Index_
Jain's fairness index (JFI) [9] has been widely utilized to quantify fairness among various fairness measures due to its highly intuitive interpretation.During a certain time window, the closer is the actual amount of each user's data has been scheduled, the better is the fairness, with JFI approaching to 1 when the scheduled data size is same for all users. However, JFI becomes inappropriate in scenarios with diversified service types, such as Live HD, game, voice, and instant messaging. The experience of high data rate applications like Live HD would be seriously frustrated if only absolute volume of service data is considered. To better reflect the QoE/QoS satisfaction of the scheduled users, in the following, we design a new fairness index based on users' QoE.
\[QoE\_FI=\frac{\left(\sum_{i=1}^{n}q_{i}\right)^{2}}{n\sum_{i=1}^{n}q_{i}^{2}}, \tag{2}\]
where \(q_{i}\) denotes the QoE of the \(i\)-th scheduled user and \(n\) is the number of users.
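To illustrate the difference between the two indices, the snippet below evaluates the Jain form over scheduled data volumes and over per-user QoE values. It assumes that \(QoE\_FI\) takes the Jain form applied to \(q_{i}\), as in the reconstructed Eq. (2); all numerical values are invented for illustration only.

```python
def jain_index(xs):
    """Jain-type fairness index over the per-user quantities xs."""
    n = len(xs)
    total = sum(xs)
    return total * total / (n * sum(x * x for x in xs)) if total else 0.0

# Mixed traffic: three FTP users and two Live HD Video users.
scheduled_volume = [2.0e9, 2.1e9, 1.9e9, 9.5e9, 9.8e9]  # bytes scheduled per user
per_user_qoe     = [0.92, 0.90, 0.93, 0.88, 0.86]        # QoE q_i of each user

print("JFI over data volume:", round(jain_index(scheduled_volume), 3))  # ~0.65, looks unfair
print("QoE_FI over QoE:     ", round(jain_index(per_user_qoe), 3))      # ~1.0, users equally satisfied
```

Here the volume-based JFI is low simply because Live HD Video needs far more data than FTP, whereas the QoE-based index correctly reports that all users are close to equally satisfied.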
CN and RAN to ensure the best end-to-end QoE/QoS performance need further investigations and specifications. In essence, the design issue is how to encode the source optimally if the end-to-end data path is known, which is similar to joint source-channel coding.
4) How to further optimize the transport layer to better match the transport schemes with the source coding and radio link fluctuations is also a very important research and standardization direction [10].
5) How to jointly optimize the parameters from different domains in the CDCC is fundamentally a challenging task. This may require a powerful AI model, akin to GPT-4, in charge of all the optimization issues.
6) The current research focus of AI-enabled RAN architecture is how to introduce AI platforms and functions in BSs to enhance the performance of traditional radio resource management, radio resource control, and user scheduling schemes. This will promote openness of the existing 4G and 5G BSs. However, these "add-on" AI features were not meant to change the air interface communication protocols. To fully unleash the potential of AI capabilities in wireless communications, the quest for AI-enabled design of the air interface for 5G [11] and future 6G communication networks [12] is well motivated.
## V Conclusions
In this paper, we have presented a design framework for the convergence of mobile applications and networks. The challenges faced by the industry are first analyzed. The design methodologies are then discussed, followed by the design of a multi-domain coordination scheme and a network architecture with a cross-domain coordination center. The benefits of QoE assisted scheduling are investigated, where better throughput and fairness are demonstrated via simulations. Some future research directions and their standardization impacts are identified.
|
2304.07730 | Detecting prethermal Floquet phases of Rydberg atom arrays | We study the prethermal Floquet phases of a two-dimensional (2D) Rydberg atom
array on a rectangular lattice in the presence of a periodic drive with large
drive amplitude. We derive an analytic, albeit perturbative, Floquet
Hamiltonian using Floquet perturbation theory (FPT) which charts out these
phases and shows that the transition between them can be accessed by tuning the
drive frequency. Using both numerical exact diagonalization on finite-size
arrays and analytical first-order Floquet Hamiltonian derived using FPT, we
show that these prethermal Floquet phases and the transitions between them can
be detected by studying the dynamics of equal-time density-density correlation
functions of the Rydberg atoms. Our analysis thus provides a simple way of
detecting these phases and associated transitions in this system; such a
detection can be achieved in standard experiments which we discuss. | Somsubhra Ghosh, Diptiman Sen, K. Sengupta | 2023-04-16T09:04:14Z | http://arxiv.org/abs/2304.07730v1 | # Detecting prethermal Floquet phases of Rydberg atom arrays
###### Abstract
We study the prethermal Floquet phases of a two-dimensional (2D) Rydberg atom array on a rectangular lattice in the presence of a periodic drive with large drive amplitude. We derive an analytic, albeit perturbative, Floquet Hamiltonian using Floquet perturbation theory (FPT) which charts out these phases and shows that the transition between them can be accessed by tuning the drive frequency. Using both numerical exact diagonalization on finite-size arrays and analytical first-order Floquet Hamiltonian derived using FPT, we show that these prethermal Floquet phases and the transitions between them can be detected by studying the dynamics of equal-time density-density correlation functions of the Rydberg atoms. Our analysis thus provides a simple way of detecting these phases and associated transitions in this system; such a detection can be achieved in standard experiments which we discuss.
## I Introduction
Quantum systems involving ultracold atoms in optical lattices have been the subject of intense theoretical and experimental studies in recent years [1; 2; 3; 4; 5; 6; 7; 8]. The reason for such interest in these systems stems from their ability to act as emulators of strongly correlated models. Moreover, they allow us to explore parameter regimes and quantum dynamics of the emulated models which are usually impossible to access in standard laboratory setups. A typical example is the emulation of the Bose-Hubbard model using \({}^{87}\)Rb atoms; this has led to detailed theoretical and experimental studies of both the superfluid-insulator transition and the non-equilibrium quantum dynamics of this model [1; 2; 9; 10; 11; 12; 13; 14]. Another such example is the emulation of the tilted Bose-Hubbard model, which has led to the realization of translation-symmetry-broken ground states in these systems [6; 7; 15; 16; 17].
More recently, experimental systems involving ultracold Rydberg atoms have been realized [18; 19; 20; 21]. These atoms experience a strong interatomic van der Waals interaction in their excited state, leading to the Rydberg blockade with a tunable blockade radius [22; 23]. An array of such atoms in a one-dimensional (1D) optical lattice, namely, a Rydberg chain, is known to host both Ising and non-Ising quantum critical points [24; 25]; the signature of the associated phase transition has been experimentally verified [20]. The non-equilibrium dynamics of such atoms has also been theoretically studied [26; 27; 28; 29; 30; 31; 32; 33]. Interestingly, the violation of the eigenstate thermalization hypothesis (ETH) [34; 35] for dynamics starting from a class of initial states has been experimentally observed in these systems [21]; this phenomenon has been explained by invoking the existence of an atypical set of athermal mid-spectrum quantum states, namely, quantum scars [28; 29; 30; 31]. The presence of such scars in the eigenspectrum of the Floquet Hamiltonian of a periodically driven Rydberg chain has also been predicted [32; 33].
A natural extension of the above-mentioned studies on Rydberg chains is to investigate higher-dimensional Rydberg atom arrays. Such arrays are expected to host a rich variety of quantum ground states that have no 1D analogs. For the 2D square arrays, such studies have predicted the presence of several translational symmetry broken ground states with definite density-wave orders; these ordered states are separated from the featureless disordered ground state via a second-order phase transition [36; 37]. In more complicated non-bipartite lattices such as the Kagome lattice, where the atoms are designed to occupy the links of the lattice (or equivalently sites of a ruby lattice), such atom arrays are predicted to host a spin-liquid quantum ground state over a wide parameter regime [38]. More recently, phases of Rydberg atoms on a 3D pyrochlore lattice have also been studied [39]. However, the non-equilibrium dynamics of such atom arrays has not been studied so far.
In this work, we shall study the prethermal Floquet phases of a periodically driven Rydberg atom array arranged as a rectangular lattice. The effective Hamiltonian of such an array can be described in terms of two states on a site with coordinate \(\vec{r}=(j_{x},j_{y})\). The first of these is the ground state of the atoms; we shall denote this by \(|g_{\vec{r}}\rangle\). The other state is the Rydberg excited state and is denoted by \(|e_{\vec{r}}\rangle\). Using these as the basis states at each site, we can write the effective Hamiltonian of these atoms as [18; 19; 20; 21]
\[H = \sum_{\vec{r}}\left(\Omega\sigma_{\vec{r}}^{x}-\frac{\Delta}{2} \sigma_{\vec{r}}^{z}\right)+\frac{1}{2}\sum_{\vec{r},\vec{r}^{\prime}}V(|\vec {r}-\vec{r}^{\prime}|)\hat{n}_{\vec{r}}\hat{n}_{\vec{r}^{\prime}}, \tag{1}\]
where \(\vec{\sigma}_{\vec{r}^{\prime}}\) are Pauli matrices in the space of states described above, \(\sigma_{\vec{r}}^{x}=|g_{\vec{r}}\rangle\langle e_{\vec{r}}|+|e_{\vec{r}} \rangle\langle g_{\vec{r}}|\), and \(\hat{n}_{\vec{r}}=(1+\sigma_{\vec{r}}^{z})/2\) is the Rydberg excitation density at the site \(\vec{r}\) of the lattice. Here \(\Omega>0\) denotes the coupling strength between the ground state and the Rydberg excited state, \(\Delta\) denotes the detuning which we shall assume to be uniform throughout the array, and \(V(|\vec{r}-\vec{r}^{\prime}|)=V_{0}/|\vec{r}-\vec{r}^{\prime}|^{6}\) denotes the van der Waals interaction between two Rydberg excitations with strength \(V_{0}\). In what follows, we shall drive the detuning parameter, \(\Delta\), of the model periodically with drive amplitude \(\Delta_{0}\) and frequency \(\omega_{D}=2\pi/T\), where \(T\) is the time period. For the square-pulse
protocol, the drive term is given by
\[\Delta(t) = \delta-\Delta_{0}\ \ \mbox{for}\ \ 0\leq t<T/2, \tag{2}\] \[= \delta+\Delta_{0}\ \ \mbox{for}\ \ T/2\leq t<T,\]
and \(\Delta(t+T)=\Delta(t)\), while for the cosine drive protocol, we have
\[\Delta(t) = \delta+\Delta_{0}\cos(\omega_{D}t). \tag{3}\]
In this work, we shall restrict ourselves to the regime where the drive amplitude is large: \(\Delta_{0}\gg\delta,\Omega,V_{0}\).
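For concreteness, a minimal sketch of the two drive protocols of Eqs. 2 and 3 is given below. The function and variable names are ours, and the parameter values are placeholders chosen in the large-amplitude regime \(\Delta_{0}\gg\delta,\Omega,V_{0}\).

```python
import numpy as np

def delta_square(t, T, delta, Delta0):
    """Square-pulse detuning of Eq. 2: delta - Delta0 for the first half period,
    delta + Delta0 for the second half, periodic with period T."""
    return np.where((t % T) < T / 2, delta - Delta0, delta + Delta0)

def delta_cosine(t, T, delta, Delta0):
    """Cosine protocol of Eq. 3 with omega_D = 2*pi/T."""
    return delta + Delta0 * np.cos(2 * np.pi * t / T)

# placeholder values (in units of Omega, hbar = 1): Delta0 >> delta
T, delta, Delta0 = 0.1, 2.0, 100.0
t = np.linspace(0.0, 2 * T, 9)
print(delta_square(t, T, delta, Delta0))
print(delta_cosine(t, T, delta, Delta0))
```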
The main results that we obtain in this work are as follows. First, using Floquet perturbation theory (FPT) which uses the inverse drive amplitude as the small parameter [40; 41; 42], we obtain an analytic, albeit perturbative, Floquet Hamiltonian for the driven system for both the square-pulse (Eq. 2) and the cosine (Eq. 3) protocols. Our analysis reveals several ordered Floquet phases with distinct density-wave orders which are separated from the disordered state by second-order critical points. We also show that tuning the drive frequency allows us to tune the system between these phases and through the critical points. Second, we complement our results obtained from the analytical Floquet Hamiltonian with those from numerical exact diagonalization (ED) starting from \(H\) (Eq. 1) with \(\Delta(t)\) given by Eq. 2. Our study reveals the existence of an exponentially long prethermal timescale in the large and intermediate drive amplitude regime; the properties of the driven system up to this timescale are well described by the analytic Floquet Hamiltonian. Third, using the exact evolution operator obtained from ED, we compute the density-density correlation function of Rydberg excitations of the driven atom array after \(n\) cycles of the drive. We show that such a correlator exhibits qualitatively distinct behavior in the density-wave ordered and the disordered Floquet phases; thus it serves as an experimentally relevant marker for the Floquet phases and the transitions between them. We demonstrate this explicitly for the transitions from the disordered to the star and checkerboard ordered Floquet phases (see Fig. 1); we find that the above-mentioned correlation function displays distinct long-time behaviors in the ordered and disordered phases as well as at the transition point between them. Thus it can be used to distinguish between these Floquet phases and also locate the transition between them. Finally, we discuss experiments which can test our theory and discuss possible extensions of our study in these systems.
The plan of the rest of the paper is as follows. In Sec. II, we derive the analytic Floquet Hamiltonian using FPT. This is followed by Sec. III, where we compare its prediction to numerical results obtained using ED and obtain the phase diagram of the driven system. Next, in Sec. IV, we compute the correlation function \(C\), study its behavior in different Floquet phases, and also discuss the stability of these phases. Finally, we chart out experiments which can verify our theory, discuss possible extensions of it in these systems, and conclude in Sec. V.
## II Analytic Floquet Hamiltonian
In this section, we shall derive the analytic Floquet Hamiltonian for both the cosine and the square pulse protocols using FPT. The details of the FPT method can be found in Refs. [40; 41; 42].
To obtain the Floquet Hamiltonian we first rewrite \(H(t)=H_{0}(t)+H_{1}\), where
\[H_{0}(t) = -\ \frac{\Delta(t)-\delta}{2}\sum_{\vec{r}}\sigma_{\vec{r}}^{z}, \tag{4}\] \[H_{1} = H_{1a}+H_{1b},\ \ \ H_{1a}\ =\ \sum_{\vec{r}}\Omega\sigma_{\vec{r}}^{x},\] \[H_{1b} = -\ \sum_{\vec{r}}\frac{\delta}{2}\sigma_{\vec{r}}^{z}\ +\ \frac{1}{2}\sum_{\vec{r},\vec{r}^{\prime}}V(|\vec{r}-\vec{r}^{\prime}|)\hat{n}_{\vec{r}}\hat{n}_{\vec{r}^{\prime}}.\]
This decomposition of \(H(t)\) is made such that the term with the largest amplitude, \(\Delta_{0}\), is included in \(H_{0}\). We have also separated the terms in \(H_{1}\) into those which commute (\(H_{1b}\)) with \(H_{0}\) and those which do not (\(H_{1a}\)).
Next, we construct the evolution operator \(U_{0}(t,0)={\cal T}_{t}\exp[-i\int_{0}^{t}H_{0}(t^{\prime})dt^{\prime}/\hbar]\) (where \({\cal T}_{t}\) is the time-ordering operator) corresponding to \(H_{0}(t)\). For the square-pulse protocol, this yields
\[U_{s}^{(0)}(t,0) = e^{-i\Delta_{0}t\sum_{\vec{r}}\sigma_{\vec{r}}^{z}/2\hbar}\ \ \mbox{for}\ \ 0\leq t<T/2, \tag{5}\] \[= e^{-i\Delta_{0}(T-t)\sum_{\vec{r}}\sigma_{\vec{r}}^{z}/2\hbar}\ \ \mbox{for}\ \ T/2\leq t<T,\]
while for the cosine protocol we obtain
\[U_{c}^{(0)}(t,0) = e^{i\Delta_{0}\sin(\omega_{D}t)\sum_{\vec{r}}\sigma_{\vec{r}}^{z}/( 2\hbar\omega_{D})}. \tag{6}\]
Note that for both protocols \(U_{0}(T,0)=I\) where \(I\) denotes the identity matrix; hence the zeroth-order Floquet Hamiltonian is \(H_{F}^{(0)}=0\).
To find the first-order terms, we use standard perturbation theory which yields [42]
\[U_{c(s)}^{(1)}(T,0) = -\ \frac{i}{\hbar}\int_{0}^{T}dtU_{c(s)}^{(0)\dagger}(t,0)H_{1}U_{ c(s)}^{(0)}(t,0). \tag{7}\]
To evaluate \(U_{c(s)}^{(1)}\), we first note that the terms in \(H_{1b}\) (Eq. 4) that commute with \(U_{c/s}^{(0)}\) can be evaluated simply. This yields
\[U_{c(s)}^{(1b)}(T,0) = -\frac{iT}{\hbar}H_{1b},\] \[H_{Fc(s)}^{(1b)} = \frac{i\hbar}{T}U_{c(s)}^{(1b)}(T,0)=H_{1b}. \tag{8}\]
To evaluate the contribution of \(H_{1a}\) to \(U_{c(s)}^{(1)}\), we first note that \(U_{c(s)}^{(0)}\) is diagonal in the Fock basis and can be written as
\[U_{s}^{(0)}(t,0) = \sum_{m}e^{-i\Delta_{0}tE_{m}/(2\hbar)}|m\rangle\langle m|\quad t\leq T/2 \tag{9}\] \[U_{c}^{(0)}(t,0) = \sum_{m}e^{i\Delta_{0}E_{m}\sin\omega_{D}t/(2\hbar\omega_{D})}|m\rangle\langle m|,\]
where \(|m\rangle\) denotes a Fock state with \(m\) Rydberg excitations (or equivalently \(m\) spin-up sites) and \(L^{2}-m\) atoms in their ground state, and \(E_{m}\) is the eigenvalue of \(\sum_{\vec{r}}\sigma_{\vec{r}}^{z}\) in the state \(|m\rangle\). We note that these states are degenerate since their energies do not depend on the positions of the Rydberg excitations. In this picture it is easy to see that \(H_{1a}\) changes the number of such Rydberg excitations in any state by \(\pm 1\); thus \(H_{1a}|m\rangle\sim|m+1\rangle+|m-1\rangle\). Furthermore, the energy differences between the states \(|m\rangle\) and \(|m\pm 1\rangle\) are given by \(\Delta E_{m}^{\pm}=E_{m}-E_{m\pm 1}=\mp 2\). Using this, and after some standard algebra detailed in Ref. [33], we find
\[H_{Fc}^{(1a)} = \Omega J_{0}\left(\frac{2\lambda}{\pi}\right)\sum_{\vec{r}}\sigma _{\vec{r}}^{x},\] \[H_{Fs}^{(1a)} = \Omega\frac{\sin\lambda}{\lambda}\sum_{\vec{r}}\left(\cos\lambda \sigma_{\vec{r}}^{x}-\sin\lambda\sigma_{\vec{r}}^{y}\right), \tag{10}\]
where \(\lambda=\Delta_{0}T/(4\hbar)\), and \(J_{0}\) denotes the zeroth-order Bessel function. The final first-order Floquet Hamiltonian is given by
\[H_{Fc(s)}^{(1)}=H_{Fc(s)}^{(1a)}+H_{Fc(s)}^{(1b)}. \tag{11}\]
The expressions of \(H_{F}^{(1)}\) for both the protocols suggest the existence of special drive frequencies at which, for a given drive amplitude \(\Delta_{0}\), \(H_{F}^{(1a)}\) vanishes. The frequencies correspond to \(\lambda=m\pi\), where \(m\) is a non-zero integer, for the square-pulse protocol and \(\lambda=\pi\eta_{m}/2\) for the cosine protocol, where \(\eta_{m}\) denotes the value of the \(m^{\rm th}\) zero of \(J_{0}\). They are given by
\[\omega_{m}^{*} = \frac{\Delta_{0}}{2m\hbar}\ \ \mbox{for square pulse protocol}, \tag{12}\] \[= \frac{\Delta_{0}}{\eta_{m}\hbar}\ \ \mbox{for cosine protocol}.\]
At these frequencies \([H_{F}^{(1)},\hat{n}_{\vec{r}}]=0\) leading to an approximate emergent conservation of \(\hat{n}_{\bf r}\). This conservation is approximate since it is not respected by higher order terms in the Floquet Hamiltonian.
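These special frequencies are easy to tabulate numerically. The sketch below (our own helper, assuming `scipy` is available for the Bessel-function zeros, and working in units where \(\hbar=1\)) evaluates Eq. 12 for the first few \(m\).

```python
import numpy as np
from scipy.special import jn_zeros

def special_frequencies(Delta0, m_max, hbar=1.0):
    """Drive frequencies omega*_m of Eq. 12 at which H_F^{(1a)} vanishes."""
    m = np.arange(1, m_max + 1)
    omega_square = Delta0 / (2 * m * hbar)   # lambda = m*pi for the square pulse
    eta = jn_zeros(0, m_max)                 # eta_m: m-th zero of J_0
    omega_cosine = Delta0 / (eta * hbar)     # lambda = pi*eta_m/2 for the cosine drive
    return omega_square, omega_cosine

Delta0 = 100.0  # drive amplitude in units of Omega
omega_sq, omega_cos = special_frequencies(Delta0, 3)
print("square pulse:", omega_sq)   # 50.0, 25.0, 16.67, ...
print("cosine drive:", omega_cos)  # ~41.6, ~18.1, ~11.6, ...
```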
To qualitatively understand the phases of \(H_{F}^{(1)}\), we now consider the regime where \(V_{0},\delta\ll\Delta_{0}\). In this regime, for \(\omega_{D}\gg\omega_{1}^{*}\), the ground state of \(H_{F}^{(1)}\) is expected to be similar to the disordered paramagnetic phase found in Ref. [36]. In contrast, at \(\omega_{D}=\omega_{1}^{*}\), the ground state of \(H_{F}^{(1)}\) constitutes a density-wave ordered state whose precise nature depends on the relative strengths of \(\delta/\Omega\) and \(V_{0}/\Omega\) [36]. Thus, as we tune the drive frequency towards \(\omega_{1}^{*}\), we expect to find a second-order phase transition between these phases. Also, in the regime of large \(\Delta_{0}\), the higher-order corrections to \(H_{F}^{(1)}\) are expected to be small; thus such phases should persist as long-lived prethermal phases of the driven system. We shall explore these phases in detail in Sec. III and their detection in Sec. IV.1.
Before ending this section, we note that the effect of having \(V_{0}\gg\delta,\Omega\) is to preclude Rydberg excitations on the neighboring sites of the lattice. In this regime, it is possible to obtain a slightly modified form of the Floquet Hamiltonian which supports similar phases. Such a prohibition can be implemented by using a local projection operator
\[P_{\vec{r}}=(1-\sigma_{\vec{r}}^{z})/2 \tag{13}\]
as shown in Ref. [18]. In this regime, the projected Hamiltonian is given by [24; 25]
\[H_{p}(t) = \sum_{\vec{r}}\left(\Omega\tilde{\sigma}_{\vec{r}}^{x}-\frac{ \Delta(t)}{2}\sigma_{\vec{r}}^{z}\right)+\frac{1}{2}\sum_{\vec{r},\vec{r}^{ \prime}}^{\prime}V(|\vec{r}-\vec{r}^{\prime}|)\hat{n}_{\vec{r}}\hat{n}_{\vec{ r}^{\prime}}\] \[\tilde{\sigma}_{\vec{r}}^{x} = P_{j_{x}-1,j_{y}}P_{j_{x},j_{y}-1}\sigma_{j_{x},j_{y}}^{x}P_{j_{x }+1,j_{y}}P_{j_{x},j_{y}+1}, \tag{14}\]
where \(\sum^{\prime}\) denotes a sum over sites where \(\vec{r}\) is not a nearest neighbor of \(\vec{r}^{\prime}\). Note that \(\tilde{\sigma}_{\vec{r}}^{x}\) can create a Rydberg excitation at site \(\vec{r}\) only if all its neighbors are in their ground states.
We can carry out an exactly similar perturbative analysis, charted out earlier in this section, starting from \(H_{p}(t)\). The computation involved is almost identical to that used to obtain \(H_{F}^{(1)}\) and we do not repeat it here. Such an analysis yields the Floquet Hamiltonians for the continuous and the square pulse protocols for the projected case.
\[H_{Fc}^{p(1)} = \Omega J_{0}\left(\frac{2\lambda}{\pi}\right)\sum_{\vec{r}}\tilde{\sigma}_{\vec{r}}^{x}+\frac{1}{2}\sum_{\vec{r},\vec{r}^{\prime}}^{\prime}V(|\vec{r}-\vec{r}^{\prime}|)\hat{n}_{\vec{r}}\hat{n}_{\vec{r}^{\prime}}\] \[-\frac{\delta}{2}\sum_{\vec{r}}\sigma_{\vec{r}}^{z}\] \[H_{Fs}^{p(1)} = \Omega\frac{\sin\lambda}{\lambda}\sum_{\vec{r}}\left(\cos\lambda\tilde{\sigma}_{\vec{r}}^{x}-\sin\lambda\tilde{\sigma}_{\vec{r}}^{y}\right) \tag{15}\] \[+\frac{1}{2}\sum_{\vec{r},\vec{r}^{\prime}}^{\prime}V(|\vec{r}-\vec{r}^{\prime}|)\hat{n}_{\vec{r}}\hat{n}_{\vec{r}^{\prime}}-\frac{\delta}{2}\sum_{\vec{r}}\sigma_{\vec{r}}^{z}.\]
We note that the phases of \(H_{F}^{p(1)}\) are qualitatively similar to those of \(H_{F}^{(1)}\); in particular, we can still tune the drive frequency towards \(\omega_{1}^{*}\) to obtain density-wave phases. The numerical advantage provided by \(H_{F}^{p(1)}\) comes from the fact that the dimension of its Hilbert space, \(\mathcal{D}_{p}\sim 1.503^{L^{2}}\)[45], grows more slowly with system size \(L^{2}\) than its counterpart \(\mathcal{D}\sim 2^{L^{2}}\) for \(H_{F}^{(1)}\). We shall use this fact in the numerical analysis of the Floquet phases in subsequent sections.
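The growth of the constrained Hilbert space can be checked by direct enumeration. The transfer-matrix sketch below counts nearest-neighbor-blockaded configurations on an open \(L\times L\) lattice (helper names are ours) and shows the slow approach of \(\mathcal{D}_{p}^{1/L^{2}}\) towards \(\approx 1.503\).

```python
import numpy as np
from itertools import product

def row_states(Lx):
    """All single-row configurations with no two adjacent Rydberg excitations."""
    return [s for s in product((0, 1), repeat=Lx)
            if all(not (a and b) for a, b in zip(s, s[1:]))]

def constrained_dimension(Lx, Ly):
    """Number of blockaded configurations (no nearest-neighbour excitations,
    open boundaries) obtained from a row-to-row transfer matrix."""
    rows = row_states(Lx)
    compat = np.array([[all(not (a and b) for a, b in zip(r1, r2)) for r2 in rows]
                       for r1 in rows], dtype=np.int64)
    v = np.ones(len(rows), dtype=np.int64)
    for _ in range(Ly - 1):
        v = compat @ v
    return int(v.sum())

for L in (2, 3, 4, 5):
    D = constrained_dimension(L, L)
    print(f"L={L}: D_p={D}, D_p^(1/L^2)={D ** (1.0 / L**2):.3f}")
```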
## III Prethermal Floquet phases
In this section, we study the Floquet phases using both the analytically obtained Floquet Hamiltonian (Eq. 15)
and the Floquet eigenstates obtained from exact numerical diagonalization of \(U(T,0)\). The numerical results presented in this section will be obtained for the square-pulse protocol with large \(V_{0}\). We shall also use the constraint that two neighboring sites cannot be simultaneously occupied by Rydberg excitations: \(\hat{n}_{\bf r}\hat{n}_{{\bf r}^{\prime}}=0\) if \({\bf r}\) and \({\bf r}^{\prime}\) are nearest neighbors. This approximation, which becomes accurate at large \(V_{0}\), allows us to access larger system sizes; the validity of this approximation will be discussed in detail in Sec. V.
For the square-pulse protocol and within the constrained subspace mentioned above, we can write the evolution operator as \(U(T,0)=U_{+}(T,T/2)U_{-}(T/2,0)\) where
\[U_{-}(t,0) = \exp[-iH_{p}(\Delta_{-})t/\hbar],\] \[U_{+}(t,T/2) = \exp[-iH_{p}(\Delta_{+})(t-T/2)/\hbar], \tag{16}\]
where \(\Delta_{\pm}=\delta\pm\Delta_{0}\) (Eq. 2).
To obtain the exact Floquet eigenstates, we first numerically diagonalize \(H_{p\pm}\equiv H_{p}(\Delta_{\pm})\). This allows us to obtain their eigenvalues and corresponding eigenvectors: \(H_{p\pm}|q_{\pm}\rangle=E_{q_{\pm}}|q_{\pm}\rangle\). Using these we can write the evolution operator as
\[U(T,0) = \sum_{q_{+},q_{+}^{\prime}}{\cal U}_{q_{+}^{\prime}q_{+}}|q_{+}^ {\prime}\rangle\langle q_{+}|,\] \[{\cal U}_{q_{+}^{\prime}q_{+}} = \sum_{p_{-}}e^{i(E_{q_{+}^{\prime}}+E_{p_{-}})T/(2\hbar)}c_{q_{+}^ {\prime}p_{-}}^{*}c_{q_{+}p_{-}}, \tag{17}\]
where \(c_{\alpha\beta}=\langle\beta|\alpha\rangle\) are the overlap coefficients between eigenstates of \(H_{+}\) and \(H_{-}\). Using the matrix elements \({\cal U}_{q_{+}^{\prime}q_{+}}\) (Eq. 17), we can numerically diagonalize \(U(T,0)\) to obtain its eigenvalues and eigenfunctions
\[U(T,0)|m\rangle = \Lambda_{m}(T)|m\rangle,\quad\Lambda_{m}(T)=e^{i\theta_{m}(T)}, \tag{18}\]
where the form of the eigenvalues \(\Lambda_{m}\) follows from the unitary nature of the evolution operator. We note that the \(|m\rangle\)'s are also eigenstates of the exact Floquet Hamiltonian \(H_{F}\) within the constrained subspace; their eigenvalues which correspond to the Floquet quasienergies are given by
\[\epsilon_{m} = \arccos[\mbox{Re}\Lambda_{m}(T)]\hbar/T. \tag{19}\]
In the limit of high drive frequency where \(T\) is sufficiently small, all the eigenvalues fall within the first Floquet Brillouin zone: \(-\pi\hbar/T\leq\epsilon_{m}\leq\pi\hbar/T\). In this case, one can meaningfully order the quasienergies \(\epsilon_{m}\); the lowest \(\epsilon_{m}\) corresponds to the Floquet ground state and characterizes the Floquet phase. For lower drive frequencies, the quasienergies are no longer restricted within the first Floquet Brillouin zone; in this regime, they can be folded back using the standard reduced zone scheme [42]. However, this makes it impossible to order them by their magnitude. In this section, we shall work in the high drive frequency regime where the eigenvalues can be ordered.
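The construction of Eqs. 16-19 can be summarized by the short numerical sketch below. It assumes that the Hamiltonian matrices `H_plus` and `H_minus` have already been built (only a trivial two-level toy is shown as a sanity check here, not the constrained lattice problem), uses `scipy.linalg.expm` for the half-period propagators, and extracts the quasienergies from the phases of the unit-modulus eigenvalues, which is equivalent to Eq. 19 up to the sign convention.

```python
import numpy as np
from scipy.linalg import expm

def floquet_spectrum(H_plus, H_minus, T, hbar=1.0):
    """U(T,0) = U_+(T,T/2) U_-(T/2,0) for the square-pulse drive, and the
    quasienergies folded into the first Floquet Brillouin zone (-pi*hbar/T, pi*hbar/T]."""
    U = expm(-1j * H_plus * (T / 2) / hbar) @ expm(-1j * H_minus * (T / 2) / hbar)
    evals, evecs = np.linalg.eig(U)          # eigenvalues Lambda_m on the unit circle
    quasi = -np.angle(evals) * hbar / T      # Lambda_m = exp(-i eps_m T / hbar)
    order = np.argsort(quasi)
    return quasi[order], evecs[:, order]

# two-level sanity check: H_pm = Omega*sigma_x - (delta +/- Delta0)/2 * sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
delta, Delta0, Omega = 2.0, 100.0, 1.0
T = 4 * np.pi / Delta0   # lambda = Delta0*T/4 = pi, the first special frequency
Hp = Omega * sx - (delta + Delta0) / 2 * sz
Hm = Omega * sx - (delta - Delta0) / 2 * sz
print(floquet_spectrum(Hp, Hm, T)[0])
```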
To characterize the properties of these eigenstates, we now define the order parameters corresponding to various ordered phases of this model [36; 37]. These ordered states, namely, the star, the striated, and the checkerboard states, are schematically sketched in Figs. 1 (a), (b), and (c), respectively. To characterize such orders, we label the sites of an \(L_{x}\times L_{y}\) lattice by an integer
\[j = (j_{x}-1)+(j_{y}-1)L_{x} \tag{20}\]
where \(1\leq j_{y}\leq L_{y}\) is the row index of the array and \(1\leq j_{x}\leq L_{x}\) is the \(x\) coordinate of the site. We then define an operator
\[\hat{O}_{c} = \frac{1}{L}\sum_{j}(-1)^{j+[j/L_{x}]}(\hat{n}_{j}-1/2), \tag{21}\]
where \([x]\) denotes the largest integer smaller than or equal to \(x\) and \(L=L_{x}L_{y}\). It is straightforward to see that \(\langle\hat{O}_{c}\rangle=1/2\) for the checkerboard phase, \(1/4\) for the striated phase and \(0\) for the star phase. In contrast, the operator \(\hat{O}_{s}\) defined as
\[\hat{O}_{s} = \frac{1}{L}\sum_{j}(-1)^{j+[j/(2L_{x})]}(\hat{n}_{j}-1/2) \tag{22}\]
vanishes for the checkerboard and striated phases; for the star phase \(\langle\hat{O}_{s}\rangle=1/4\). A representative plot showing the behavior of \(O_{s}\) across the transition from the paramagnetic to the star phase is shown in Fig. 1 (d); such a plot indicates a transition around \(\lambda\simeq 3\pi/4\) for \(V_{0}=25\Omega\), \(\delta=2\Omega\) and \(\Delta_{0}=100\Omega\). The behavior of \(O_{c}\) for transitions from the disordered to the checkerboard or striated phases is similar.
Figure 1: Schematic representations of the (a) star, (b) striated, and (c) checkerboard phases. The red circles indicate sites with Rydberg excitations while the white ones represent atoms in their ground state. (d) Plot of \(O_{s}\) obtained using exact eigenstates of \(U(T,0)\) as a function \(\lambda=\Delta_{0}T/(4\hbar)\), where \(T\) is the time period of a square pulse (Eq. 2) for \(V_{0}=25\Omega\). The other parameters are \(\delta=2\Omega\), \(\Delta_{0}=100\Omega\), \(L_{x}=6\) and \(L_{y}=4\).
The above considerations prompt us to define an operator given by
\[\hat{O}_{1} = \hat{O}_{c}+4\hat{O}_{s} \tag{23}\]
which allows us to distinguish between all three phases: \(\langle\hat{O}_{1}\rangle=1/2\) (\(1/4\)) for checkerboard (striated) phase and 1 for the star phase. We note that all of these order parameters vanish in the paramagnetic phase leading to \(\langle\hat{O}_{1}\rangle=0\); this allows us to use \(\hat{O}_{1}\) to identify the Floquet phases of \(H_{F}\).
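Since these order parameters are diagonal in the Fock basis, they are easy to evaluate for classical reference configurations. The sketch below implements Eqs. 21-23 in a 0-indexed convention; the specific checkerboard, striated, and star patterns are our own guesses, chosen to be consistent with the quoted order-parameter values and the sketches in Fig. 1, and should be treated as illustrative.

```python
import numpy as np

def order_params(n, Lx, Ly):
    """O_c, O_s and O_1 = O_c + 4*O_s (Eqs. 21-23) for a Fock configuration
    n[j] in {0, 1}, with j = j_x + j_y*Lx (0-indexed, Lx even)."""
    L = Lx * Ly
    j = np.arange(L)
    Oc = np.sum((-1.0) ** (j + j // Lx) * (n - 0.5)) / L
    Os = np.sum((-1.0) ** (j + j // (2 * Lx)) * (n - 0.5)) / L
    return Oc, Os, Oc + 4.0 * Os

Lx = Ly = 4
j = np.arange(Lx * Ly)
jx, jy = j % Lx, j // Lx

checkerboard = ((jx + jy) % 2 == 0).astype(float)
striated     = ((jx % 2 == 0) & (jy % 2 == 0)).astype(float)
star         = (((jy % 4 == 0) & (jx % 2 == 0)) |
                ((jy % 4 == 2) & (jx % 2 == 1))).astype(float)

for name, cfg in [("checkerboard", checkerboard), ("striated", striated), ("star", star)]:
    print(name, np.round(order_params(cfg, Lx, Ly), 3))
# expected: checkerboard -> (0.5, 0, 0.5), striated -> (0.25, 0, 0.25), star -> (0, 0.25, 1.0)
```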
Next, to study the Floquet phases, we plot the expectation value
\[O_{0} = \langle m_{0}|\hat{O}_{1}|m_{0}\rangle, \tag{24}\]
where \(|m_{0}\rangle\) is the eigenstate corresponding to the lowest \(\epsilon_{m}\). The left panel of Fig. 2 shows the plot of \(O_{0}\) obtained using \(H_{Fs}^{p(1)}\) (Eq. 15) as a function of \(V_{0}/\Omega\) and \(\lambda=\Delta_{0}T/(4\hbar)\) in the large drive amplitude regime (\(\Delta_{0}=100\Omega\)). The right panel of Fig. 2 shows a similar plot obtained using exact diagonalization of \(U(T,0)\) as discussed earlier in this section; we find the two plots to be qualitatively similar for all \(V_{0}/\delta\), when \(V_{0},\delta\ll\Delta_{0}\). This demonstrates the validity of the FPT in this regime.
In the high drive frequency regime where \(\lambda\ll 1\), the plot reflects the presence of the paramagnetic phase (dark blue region) for which \(O_{0}=0\) for \(V_{0}\gg\delta=0.75\Omega\). In contrast, for \(V_{0}\sim\delta\), we find a smooth interpolation between the disordered and the checkerboard phase (light blue region). Such an interpolation is an artifact of using the constrained Hilbert space in our numerics; understandably, this approximation holds only for \(V_{0}\gg\delta\). The presence of the disordered phase at \(V_{0}\gg\delta\) and \(\lambda\ll 1\) is consistent with the result of the first-order Magnus expansion for which the Floquet Hamiltonian is just the time averaged value of \(H(t)\) and is given by Eq. 1 with \(\Delta\to\delta\). This model is known to have a paramagnetic phase for \(V_{0}\gg\delta\) in this regime [36].
For \(V_{0}\gg\delta\), we also find clear second-order transitions from the paramagnetic to the star phase as the drive time period \(T\) is varied. This transition occurs around \(\lambda\simeq\pi,2\pi\) where \(\Omega_{\rm eff}=\Omega\sin\lambda/\lambda\ll\delta,V_{0}\) (Eq. 12). For drive frequencies corresponding to \(\lambda\simeq\pi,2\pi\) and \(V_{0}\sim\delta\), we find the checkerboard phase (white region). By increasing \(V_{0}\) and keeping \(\lambda\simeq n\pi\) where \(n\) is an integer, we find a transition from the checkerboard to the star phase. This transition is expected to be first order since it is a transition between two phases with distinct classical orders. The disordered phase is absent since \(\Omega_{\rm eff}\simeq 0\) for these drive frequencies.
For lower drive frequencies, where the Floquet quasienergies are no longer restricted within the lowest Floquet Brillouin zone, we cannot order the eigenvalues. We note, however, that such ordered states still exist at the special drive frequencies given by \(\lambda=n\pi\) as eigenstates of the Floquet Hamiltonian up to a prethermal timescale. Furthermore, their presence leaves a detectable signature in the correlation functions of the system, as we shall discuss in the next section.
## IV Detection and stability of the Floquet phases
In this section, we shall first discuss the properties of the correlation functions using which we can detect the Floquet phases. This will be followed by a study of stability of these Floquet phases and the extent of the prethermal regime as a function of the drive amplitude. Throughout this section we shall work within the projected Hilbert space as discussed earlier.
Figure 2: (a) Plot of \(O_{0}\) obtained using eigenstates of \(H_{Fs}^{p(1)}\) as a function of \(V_{0}\) and \(\lambda=\Delta_{0}T/(4\hbar)\), where \(T\) is the time period of a square pulse (Eq. 2). The dark red region corresponds to the star order for which \(O_{0}\simeq 1\), the white region corresponds to the checkerboard order for which \(O_{0}\simeq 1/2\), and the light blue denotes the striated order for which \(O_{0}\simeq 1/4\). The dark blue region represents a disordered phase for which \(O_{0}=0\). For these plots, \(\delta=0.75\Omega\), \(\Delta_{0}=100\Omega\), \(L_{x}=6\) and \(L_{y}=4\). (b) Same as (a) but obtained using exact numerical diagonalization of \(U(T,0)\) within the projected Hilbert space.
Figure 3: Plot of \(C_{3}(nT)\) across the transition from the disordered to the star phase. The plot shows \(C_{3}(nT)\) as a function of the number of drive cycles \(n\) for (a) \(\lambda=0.05\), (b) \(\pi\), and (c) \(2.35\). The inset in panel (c) shows the details of the long-time oscillations near the transition on a shorter time scale. Panel (d) shows the fluctuation of the value of \(C_{3}\) about its mean value as a function of \(\lambda\). For all plots \(V_{0}=25\Omega\), \(\delta=2\Omega\), and \(\Delta_{0}=100\Omega\). See text for details.
### Correlation functions
In this subsection, we show that the Floquet phases and the transitions from the disordered paramagnetic to the star Floquet phases can be detected via a study of correlation functions. A similar detection scheme, as we shall discuss, is expected to hold for transitions from the disordered to other ordered states. This is of primary importance since, unlike equilibrium ground states, these phases do not correspond to standard energy eigenstates of a many-body system and cannot be accessed via standard thermodynamic measurements in experiments.
To this end, we compute the equal-time density-density correlation function of the driven system given by
\[C_{3}(nT) = \sum_{\vec{r}\vec{a}}\langle\psi(nT)|\hat{n}_{\vec{r}}\hat{n}_{ \vec{r}+\vec{a}}|\psi(nT)\rangle, \tag{25}\]
where \(\vec{a}\) is chosen so that \(\vec{r}\) and \(\vec{r}+\vec{a}\) form third-nearest neighboring sites of a 2D rectangular lattice. Note that the nearest-neighbor density-density correlation is identically zero within the projected Hilbert space and the next-nearest neighbor correlation is zero in the star phase and close to zero in the disordered phase; thus \(C_{3}\) represents the most local correlation function with appreciable dynamical fluctuation.
The plot of \(C_{3}\) is shown for three representative values of \(\lambda\) in Figs. 3(a), (b) and (c). In Fig. 3(a), \(C_{3}\) is plotted as a function of \(n\), the number of drive cycles, for \(\lambda=0.05\) and \(V_{0}=25\Omega\), starting from an initial Fock state \(|\psi_{0}\rangle\) with star order (sketched in Fig. 1) so that \(\langle\psi_{0}|C_{3}|\psi_{0}\rangle=1/4\). The plot shows a rapid decay of the correlator towards its diagonal ensemble value \(\sim 0.1\), which is in accordance with the prediction of ETH. The oscillations around this value are a consequence of the finite system size. In contrast, for \(\lambda=\pi\), as shown in Fig. 3(b), \(C_{3}\) remains almost constant, showing very small oscillations around the initial value. This is a consequence of the fact that \(|\psi_{0}\rangle\) is almost exactly an eigenstate of the exact Floquet Hamiltonian. We note that this will happen as long as \(|\psi_{0}\rangle\) is a near-exact eigenstate of \(U\) or, equivalently, of \(H_{F}\); it need not necessarily be its lowest-lying eigenstate. In between, near the transition at \(\lambda=2.35\), we find that \(C_{3}\) shows long-time oscillatory behavior which is distinct from its counterparts shown in Figs. 3(a) and (b). In particular, the oscillation amplitudes are larger than their counterparts in the ordered phase; also, they are much longer-lived than what is found in the disordered phase. This shows that \(C_{3}\) can distinguish between the Floquet phases and provides a straightforward tool for their detection. The fluctuation of \(C_{3}\) around its mean value at long times is computed as
\[\sigma = \sqrt{\frac{1}{n_{f}-n_{i}}\sum_{n=n_{i}}^{n_{f}}C_{3}^{2}(nT)- \mu^{2}}\] \[\mu = \frac{1}{n_{f}-n_{i}}\sum_{n=n_{i}}^{n_{f}}C_{3}(nT). \tag{26}\]
with \(n_{i}=3001\) and \(n_{f}=10000\).
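As an illustration of how these quantities can be evaluated, the sketch below computes the correlator for a classical Fock configuration and the long-time statistics of Eq. 26. The helper names are ours; periodic boundaries are assumed, and the normalization (sum over the four displacements \((\pm 2,0)\) and \((0,\pm 2)\) divided by \(2L_{x}L_{y}\)) is chosen so that a perfect checkerboard gives 1 and a perfect star configuration gives 1/4, matching the values quoted in the text, although the paper's exact convention may differ.

```python
import numpy as np

def c3(n_cfg, Lx, Ly):
    """Third-nearest-neighbour density-density correlator in the spirit of Eq. 25."""
    n = np.asarray(n_cfg, dtype=float).reshape(Ly, Lx)
    shifted = sum(np.roll(n, s, axis=ax) for s in (2, -2) for ax in (0, 1))
    return float((n * shifted).sum()) / (2 * Lx * Ly)

def c3_statistics(series, n_i=3001, n_f=10000):
    """Long-time mean mu and fluctuation sigma of C3(nT), as in Eq. 26."""
    window = np.asarray(series[n_i:n_f], dtype=float)
    return window.mean(), window.std()

# quick check on the 6x4 lattice used in the paper
Lx, Ly = 6, 4
j = np.arange(Lx * Ly)
jx, jy = j % Lx, j // Lx
checkerboard = ((jx + jy) % 2 == 0).astype(float)
star = (((jy % 4 == 0) & (jx % 2 == 0)) | ((jy % 4 == 2) & (jx % 2 == 1))).astype(float)
print(c3(checkerboard, Lx, Ly), c3(star, Lx, Ly))   # -> 1.0  0.25
```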
A plot of \(\sigma\) as a function of \(\lambda\) is shown in Fig. 3(d). We find that \(\sigma\) shows a clear peak at the transition point \(\lambda=\lambda_{c}\simeq 2.3\), indicating a sharp increase in the fluctuation of \(C_{3}\) at the transition. It therefore serves as a distinguishing feature of the transition from the disordered to the star ordered phase.
A similar signature in the behavior of \(C_{3}(nT)\) is also noticed when there is a transition between the checkerboard phase and the disordered phase at a lower value of the interaction potential \(V_{0}\). This is shown in Fig. 4 for \(\delta=\Omega\) and \(V_{0}=1.5\Omega\), where we choose a checkerboard ordered state as our initial state \(|\psi_{0}\rangle\), so that \(\langle\psi_{0}|C_{3}|\psi_{0}\rangle=1\). Away from the transition (Figs. 4(a) and (b)), the fluctuation of \(C_{3}\) is comparatively small, whereas close to the transition point (Fig. 4(c)) it peaks considerably. In particular, when \(\lambda=\pi\) (Fig. 4(b)), the checkerboard state is almost an eigenstate of the exact evolution operator, which is why the quantum fluctuations dip to zero. In Fig. 4(d), we plot \(\sigma\), the average fluctuation of \(C_{3}(nT)\), as a function of \(\lambda\). The average is computed after the initial transient dynamics have settled down. It shows a peak around \(\lambda=2.75\). We later show in Fig. 6 that there is a phase transition from the disordered phase to the checkerboard phase precisely at this point even when the full Hilbert space is used instead of the projected subspace.
Before ending this section, we note that such correlators are expected to show qualitatively similar behaviors across transitions from a disordered to any other ordered Floquet phase provided that we start from an initial Fock state which characterizes the order. However, it is not expected to provide a signature of a transition between two ordered phases; in this case, typically both the ordered phases exist as eigenstates of the Floquet Hamiltonian across the transition and the correlators do not evolve dynamically in either of the phases provided that the initial state is one of the ordered states.
### Stability of the Floquet phases
In this section, we discuss the stability of such Floquet phases and provide an estimate of the prethermal timescale over which such phases are expected to exist. To this end, we first note that the behavior of a driven ergodic system is expected to be described by a local Floquet Hamiltonian only up to a finite, prethermal, timescale \(t_{p}\), where \(\tau=t_{p}/T\). For \(n>\tau\), the system is expected to heat up to infinite temperature and can no longer be described by a local Floquet Hamiltonian [43]. However, it is known in the context of the Magnus expansion that \(t_{p}\sim\exp[c\omega_{D}]\) (where \(c\) is a constant of order 1 which depends on the system details) in the high drive frequency limit [44]. It can thus be large, leading to a long prethermal time over which the Floquet phases are expected to be stable.
To estimate the prethermal timescale \(\tau\) for the driven
Rydberg system, we plot \(C_{3}(nT)\) as a function of the number of drive cycles \(n\) at \(\lambda=\pi\) for several representative values of \(\Delta_{0}/\Omega\). The value of \(C_{3}(nT)\) obtained from \(H_{Fs}^{p(1)}\) is a constant and equals \(0.25\) for \(\lambda=\pi\); at large \(\Delta_{0}\), such a constant value is also found for \(C_{3}\) obtained using ED, as can be seen from Fig. 3 (b). To characterize the difference between the results obtained using ED and those from \(H_{Fs}^{p(1)}\), we therefore study the deviation of \(C_{3}(nT)\) from its constant value.
The result of such a study is shown in Fig. 5. In Figs. 5(a), (b) and (c), we plot \(C_{3}(nT)\), obtained using ED, as a function of \(n\) for \(\lambda=\pi\). Fig. 5(a) shows such a plot for a low drive amplitude \(\Delta_{0}=0.9\Omega\); we find that \(C_{3}(nT)\) deviates from its initial value within the first few drive cycles. The time taken to achieve this deviation increases with increasing \(\Delta_{0}\) (Fig. 5(b) where \(\Delta_{0}=1.25\Omega\)) and around \(\Delta_{0}=1.45\Omega\), \(C_{3}(nT)\) remains fixed at its constant value predicted by \(H_{Fs}^{p(1)}\) for \(n\gg 1500\) drive cycles.
From these plots, we can obtain a qualitative estimate of \(\tau\). Here we choose \(\tau\) to be the smallest number of drive cycles at which \(C_{3}(nT)\simeq 0.2\). The choice of \(C_{3}(nT)\simeq 0.2\) as the cut-off is motivated by the fact that the infinite-temperature ensemble average of this correlator is close to \(0.17\). A plot of \(\tau\) as a function of \(\Delta_{0}\) with \(\lambda=\pi\) is shown in Fig. 5(d). We find that \(\tau\) shows a steep rise around \(\Delta_{0}\simeq\Omega,\delta\). This allows us to conclude that the Floquet phases are stable for a very long timescale as long as we are in the regime \(\Delta_{0}\gg\Omega,\delta\).
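The extraction of \(\tau\) from a computed \(C_{3}(nT)\) series then amounts to a simple threshold search; the helper below is a hypothetical illustration of this criterion.

```python
def prethermal_cycles(c3_series, cutoff=0.2):
    """Smallest drive cycle n at which C3(nT) first drops to the cut-off
    (~0.2 here, chosen close to the infinite-temperature value ~0.17)."""
    for n, value in enumerate(c3_series):
        if value <= cutoff:
            return n
    return len(c3_series)  # no thermalisation observed within the simulated window
```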
## V Discussion
In this work, we have identified the Floquet phases of a periodically driven Rydberg atom array. Such phases can be tuned as a function of the drive frequency; our analysis identifies special drive frequencies which satisfy \(\Delta_{0}T/\hbar=4m\pi\), where \(m\) is a positive integer, for the square pulse protocol and \(\Delta_{0}T/\hbar=2\pi\eta_{m}\) for the cosine protocol. At these drive frequencies, one finds density-wave ordered Floquet phases. In the high drive amplitude regime, we find a large prethermal timescale for which these Floquet phases are stable and are accurately described by the analytical first-order Floquet Hamiltonian \(H_{F}^{(1)}\) derived in Sec. II. We note here that although we have carried out all the numerics using the square-pulse protocol, the results of Sec. II strongly suggest that analogous phenomena exist for continuous drive protocols; the expression for the special frequencies for the cosine protocol is given by Eq. 12. We have also presented a method to detect these Floquet phases and the transitions between them via measurement of the equal-time density-density correlation function \(C_{3}(nT)\). We note that the Floquet phases, unlike their thermodynamic counterparts, are not readily accessible in experiments; our results therefore provide a useful experimental tool for the detection of these phases.
In the previous sections, we have used the approximation of large \(V_{0}\) for obtaining these phases. This is not an essential feature of our analysis, as can be seen by comparing Eqs. 11 and 15. The first of these (Eq. 11) gives the Floquet Hamiltonian without any additional approximation on \(V_{0}\), while the second is derived in the large \(V_{0}\) regime. Both of these Floquet Hamiltonians provide identical expressions for the special frequencies \(\omega_{m}^{*}\) (Eq. 12). The reason for choosing the latter when it comes to exact numerics is that it has a smaller Hilbert space, which allows access to larger system sizes for carrying out ED. To ascertain this fact, we show a comparison in Fig. 6 between the Floquet phases obtained by applying ED to \(H_{Fs}^{(1)}\) keeping the full Hilbert space and those obtained by diagonalizing \(U\) within the constrained Hilbert space. The result of the phase diagram obtained from \(H_{Fs}^{(1)}\) is
Figure 5: Plot of \(C_{3}(nT)\) as a function of \(n\) at \(\lambda=\pi\) for (a) \(\Delta_{0}/\Omega=0.9\), (b) \(1.25\), and (c) \(1.45\). (d) Plot of \(\tau\), measured as the minimal number of cycles after which \(C_{3}\simeq 0.2\), as a function of \(\Delta_{0}\) showing an exponential growth of \(\tau\) at large \(\Delta_{0}\). For all plots \(V_{0}=25\Omega\) and \(\delta=2\Omega\). The red dotted lines in panels (a), (b) and (c) indicate the line \(C_{3}=0.2\). See text for details.
Figure 4: Plot of \(C_{3}(nT)\) across the transition from the disordered to the checkerboard phase. The plot shows \(C_{3}(nT)\) as a function of the number of drive cycles \(n\) for (a) \(\lambda=0.025\), (b) \(\pi\), and (c) \(2.75\). Panel (d) shows the fluctuation of \(C_{3}\) about its mean value as a function of \(\lambda\). For all plots \(V_{0}=1.5\Omega\), \(\delta=\Omega\), and \(\Delta_{0}=100\Omega\). See text for details.
given in Fig. 6(a). Fig. 6(b) shows the Floquet phases obtained using the projected subspace for \(N=16\) sites; the phase diagram is similar to the one obtained for \(N=24\) sites (Fig. 2(b)). A comparison of this phase diagram with the one shown in Fig. 6(a) shows that they differ qualitatively only for \(V_{0}\leq\delta,\Omega\) and \(\Delta_{0}T/\hbar\ll\pi\); the spurious interpolating behavior obtained using the projected Hilbert space does not appear in this regime and is replaced by the disordered phase in the exact phase diagram. However, in other regimes, there is excellent agreement between the two phase diagrams including around \(V_{0}\simeq\delta\) when \(\lambda\geq\pi\). The last feature owes its existence to the reduction of \(\Omega_{\rm eff}\sim\sin\lambda/\lambda\) in this regime which is equivalent to an effective increase in \(V_{0}\).
Finally, we discuss experiments which can test our theory. We propose a standard experimental setup involving Rydberg atoms in a rectangular array where the detuning of these atoms is changed periodically with time according to either a square pulse or a cosine protocol. We predict the existence of special frequencies where the system should exhibit a star ordered Floquet phase at large \(V_{0}\). Such a phase would leave its imprint on the time evolution of the correlation function \(C_{3}\) starting from an initial Fock state with star order; in the ordered phase \(C_{3}\) will be very nearly time independent. The transition between the star and the disordered phase can be achieved by tuning the drive frequency; such a transition will be reflected in the behavior of \(C_{3}\) as discussed in Sec. IV.1.
In conclusion, we have discussed the Floquet phases of Rydberg atoms arranged in a rectangular array. We have provided a way of experimentally detecting these phases and the transitions between them via measurements of equal-time correlation functions; moreover, we have identified the high drive amplitude regime where such phases are stable over a long prethermal time scale. Within this time scale, their properties can be described by the first-order Floquet Hamiltonian obtained using FPT.
## VI Acknowledgements
S.G. acknowledges CSIR, India for support through Project No. 09/080(1133)/2019-EMR-I. D.S. thanks SERB, India for support through project JBR/2020/000043. K.S. thanks SERB, India for support through project JCB/2021/000030.
|
2308.01094 | Scaling Data Science Solutions with Semantics and Machine Learning:
Bosch Case | Industry 4.0 and Internet of Things (IoT) technologies unlock unprecedented
amount of data from factory production, posing big data challenges in volume
and variety. In that context, distributed computing solutions such as cloud
systems are leveraged to parallelise the data processing and reduce computation
time. As the cloud systems become increasingly popular, there is increased
demand that more users that were originally not cloud experts (such as data
scientists, domain experts) deploy their solutions on the cloud systems.
However, it is non-trivial to address both the high demand for cloud system
users and the excessive time required to train them. To this end, we propose
SemCloud, a semantics-enhanced cloud system, that couples cloud system with
semantic technologies and machine learning. SemCloud relies on domain
ontologies and mappings for data integration, and parallelises the semantic
data integration and data analysis on distributed computing nodes. Furthermore,
SemCloud adopts adaptive Datalog rules and machine learning for automated
resource configuration, allowing non-cloud experts to use the cloud system. The
system has been evaluated in industrial use case with millions of data,
thousands of repeated runs, and domain users, showing promising results. | Baifan Zhou, Nikolay Nikolov, Zhuoxun Zheng, Xianghui Luo, Ognjen Savkovic, Dumitru Roman, Ahmet Soylu, Evgeny Kharlamov | 2023-08-02T11:58:30Z | http://arxiv.org/abs/2308.01094v1 | # Scaling Data Science Solutions with Semantics and Machine Learning: Bosch Case
###### Abstract
Industry 4.0 and Internet of Things (IoT) technologies have unlocked unprecedented amounts of data from factory production, posing big data challenges in volume and variety. In that context, distributed computing solutions such as cloud systems are leveraged to parallelise the data processing and reduce computation time. As cloud systems become increasingly popular, there is increased demand that more users who were originally not cloud experts (such as data scientists and domain experts) deploy their solutions on cloud systems. However, it is non-trivial to address both the high demand for cloud system users and the excessive time required to train them. To this end, we propose SemCloud, a semantics-enhanced cloud system that couples a cloud system with semantic technologies and machine learning. SemCloud relies on domain ontologies and mappings for data integration, and parallelises the semantic data integration and data analysis on distributed computing nodes. Furthermore, SemCloud adopts adaptive Datalog rules and machine learning for automated resource configuration, allowing non-cloud experts to use the cloud system. The system has been evaluated in an industrial use case with millions of data records, thousands of repeated runs, and domain users, showing promising results.
Keywords: ontology engineering, knowledge graph, semantic ETL, machine learning, cloud computing, welding quality monitoring, Industry 4.0, rule-based reasoning, Datalog
## 1 Introduction
Background.Industry 4.0 [1] aims at highly automated smart factories that rely on IoT technology [2], spanning across data acquisition, communication, information processing and actuation. This has unlocked unprecedented amounts
of data that are generated by production systems [3] and, thus, drastically increased the demand for data-driven analytical solutions and cloud technology. We illustrate a common industrial scenario of development and deployment of data-driven solutions on the cloud with a Bosch welding case7 of quality monitoring in Fig. 1: the data from a production environment such as welding machines (a) first have to be acquired in different formats, e.g., CSV, JSON, XML (b); then they should be integrated into a uniform format (c); after that, the project team (including welding experts, data scientists, managers, etc.) wants to run data analysis on cloud infrastructures on top of the large data volumes from many factories (d); after data analysis, these users need to discuss and log the results (e); the whole process involves iterative and cross-domain communication between the stakeholders (f).
Footnote 7: Automated welding is an impactful manufacturing process that is involved in the production of millions of cars annually, deployed world-wide at many factories. Data-driven analytics solutions for welding can greatly help in reducing the cost and waste in production quality. Errors in production can only be resolved by destroying newly produced cars in samples.
**Challenges.** From the scenario, we see that scaling data science solutions poses challenges related to dealing with the high data _volume_, _variety_, and more _users_, namely enabling non-cloud experts to leverage cloud systems. Indeed, industries equipped with IoT technologies produce huge volumes of production data. In the Bosch case, one factory alone produces more than 1.9 million welding records per month. The data generated by different software versions, locations, customers have a variety of data formats, feature names, available features, etc. Meanwhile, many users that are not cloud experts, such as domain experts and data scientists, want to deploy the data science solutions on the cloud. In a standard implementation of the workflow in Fig. 1, the project team requires extensive assistance from cloud experts, whenever they want to deploy solutions or make small changes to their solutions deployed on the cloud. To facilitate the adoption of cloud systems for more projects and users, one can equip all projects
Figure 1: Data analytics development cycle exemplified on the Bosch case of welding condition monitoring. In industrial data science projects, many users are non-cloud experts (e.g., welding experts, ML experts) and want to scale their solutions on the cloud.
with some cloud experts, or launch training programs about cloud technology for all users. Both require careful planning to balance time, cost, and benefits.
**Our Approach.** To address these scalability challenges in terms of data volume, data variety, and democratising cloud systems, we propose SemCloud: a semantics-enhanced cloud system, that scales semantic data integration and data analysis on the cloud with distributed computing, and allows non-cloud experts to deploy their solutions. Our system is motivated by a use case at Bosch aiming at scaling data science solutions in welding condition monitoring (Sect. 2). SemCloud consists of semantic artefacts such as domain ontologies, mappings, adaptive Datalog rules (Sect. 3) and _machine learning_ (ML) that learns the parameters in the adaptive Datalog rules.
In particular, the semantic data integration (extract-transform-load, ETL) (Sect. 3.2) maps diverse data sources to a unified data model and transforms them to uniform data formats. To allow distributed ETL, SemCloud slices the integrated data according to domain-specific data semantics (machine equipment identifiers in the Bosch case), separating the data into computationally independent subsets. SemCloud then parallelises the ETL and analysis of the data slices on distributed computing nodes (Sect. 3.3). Furthermore, SemCloud adopts a semantic abstraction and a graphical user interface (GUI) to democratise cloud deployment, improving transparency and usability for non-cloud experts. These include a cloud ontology that allows encoding ETL pipelines in knowledge graphs (Sect. 3.4), and a set of adaptive Datalog rules (Sect. 3.5) for automatically finding optimal resource configurations. These rules are adaptive because some of their predicates are functions learnt with machine learning (ML) (Sect. 3.6).
We note that the existing work on this topic addressed the cloud deployment issues only to a limited extent [4, 5], whereby they either only focus on the formal description of cloud, or on the limited adaptability of cloud systems. SemCloud exploits and significantly extends our previous works on ML in the context of Industry 4.0 [6, 7], and container-based big data pipelines [8] (Fig. 4) by enhancing the framework with semantic artefacts and modules for specifying container-based pipelines, including pipeline step templates for containerisation and management of inter-step communication and data transmission (Sect. 3.3).
We evaluated (Sect. 4) SemCloud extensively: the cloud deployment was evaluated to verify SemCloud's performance in reducing computational time, with an industrial dataset of about 3.1 million welding spots; the rule parameter learning and inference were evaluated based on 3562 repeated runs of the system.
## 2 Motivating Use Case: Welding Quality Monitoring
In this section we discuss our motivating use case in more detail, explain why scaling data science solutions to large data sets and more users is critical, and discuss requirements for the cloud system.
**Condition monitoring for automated welding.** Condition monitoring refers to a group of technologies for monitoring condition parameters in production machinery to identify potential developing faults [9]. The use case addresses
one type of condition monitoring, quality monitoring (another type is machine health monitoring), for resistance spot welding at Bosch, which is a fully automated welding process that is essential for producing high-quality car bodies globally in the automotive industry. During the welding process (Figure 1a), two welding gun electrodes press two or three worksheets (car body parts) with force, an electric current flows through the electrodes and worksheets, generating a large amount of heat due to electric resistance. The materials in a small area between the two worksheets melt and then congeal after cooling, forming a weld nugget (or welding spot) connecting the worksheets. The core of quality monitoring is to measure, estimate or predict some categorical or numerical quality indicators. The diameter (Figure 1a) of welding spots is typically used as the quality indicator of a single welding act according to industrial standards [9, 10]. Conventional practice adopts destructive methods to tear the welded car bodies apart, although they can only be applied to a small sample of car bodies, and the destroyed samples are waste and cannot be reused. Bosch is developing data-driven solutions to predict the welding quality, to reduce the waste and improve the coverage of quality monitoring.
**Bosch big data.** Welding condition monitoring faces big data challenges of variety and volume. In terms of _data variety_, Bosch has many data sources of different locations and conditions (Figure 2a). The production data alone are collected from at least four locations and three original equipment manufacturers (OEMs). These data differ in semantics and formats because of software versioning, customer customisation, as well as sensor and equipment discrepancy based on the concrete needs in the location. For example, they may be stored in various formats such as CSV, JSON, XML, etc., and may have different names for the same variables, have some variables missing in one source but present in another, or data may be measured with different sampling rate, etc.
In terms of _data volume_, data science models need a reasonably large amount of data to make the training meaningful and representative for the given data science tasks. For simplicity, we consider a representative example, whereby we assume one month of data is meaningful, which was confirmed by data scientists at Bosch. In an example automobile factory responsible for manufacturing chassis (Figure 2b), there are 3 running production lines with a total of 45 welding machines. Each welding machine is responsible for a number of types of welding spots on the car bodies, with pre-designed welding programs. These machines
Figure 2: (a) The data variety issue. (b) The data volume issue exemplified with the production data.
perform welding operations at different speeds, ranging from one welding spot per second to one spot per several minutes. The data related to one single welding spot consist of several protocols or databases. After integration, these data become a set of relational tables with 263 attributes, and a simplified estimation gives that one factory produces 64.8k spots every day and 1.944 million spots per month, which account for the production of about 432 cars. Considering an average of 125 KB of data for one welding operation gives an estimated data volume meaningful for training of 243 GB (the real amount varies and can be larger, e.g., it was estimated as 389.32 GB in one real case; here we adopt a simplification of similar magnitude).
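For concreteness, the stated figures are mutually consistent if one assumes an average rate of roughly one welding spot per machine per minute (an averaging assumption on our part; the text only gives a range of speeds):

\[45\ \text{machines}\times 60\times 24\ \text{spots/(machine}\cdot\text{day)}=64{,}800\ \text{spots/day},\qquad 64{,}800\times 30=1.944\times 10^{6}\ \text{spots/month},\]
\[1.944\times 10^{6}\ \text{spots}\times 125\ \text{KB/spot}\approx 2.43\times 10^{8}\ \text{KB}=243\ \text{GB}.\]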
**Cloud deployment requirements.** Considering the challenges, the welding quality monitoring system should give quality estimations/predictions without excessive response time, even though the data volume is large. In addition, the data come from various sources with diverse formats. Moreover, industrial data science projects involve many users that are non-cloud experts (Fig. 1). They should be equipped with tools that help them deploy their data science solutions without extensive cloud expertise. The cloud infrastructure has resources for computing, memory, storage, networking, etc., which need to be configured for optimised performance. Based on this information, we derive the following requirements for the system:
* _R1, Scalability on Data Volume:_ The system should be able to reduce the computational time significantly when processing large data volumes.
* _R2, Scalability on Data Variety:_ The system should be able to handle data variety, integrating heterogeneous data to uniform data formats.
* _R3, Scalability on Users:_ The system should improve the _transparency_ of the cloud system, automate resource configuration, and allow good _usability_ for users, especially non-cloud experts.
## 3 SemCloud: Semantics-Enhanced Cloud System
To address the challenges and requirements, we propose our SemCloud system. We first give an architectural overview (Fig. 3) and then elaborate on the components.
### Architectural Overview
The architecture of SemCloud is shown in Fig. 3. The _Data Analysis Workflow Layer_ adopts a common workflow: data acquisition, data preparation, data analysis, results logging and interpretation; the raw data are first acquired, then prepared for data analysis, and, finally, the analysis results are generated, including models, predictions and human interpretation. In the data preparation stage, we employ _Semantic Data Integration_ (Fig. 3.1), which relies on domain ontologies and semantic mappings to transform diverse data sources into uniform data formats. SemCloud scales the data analysis workflow to the cloud with the _Distributed Computing_ module (Fig. 3.2), which includes the distributed ETL, distributed data analysis, and deployment orchestration that allocates cloud resources to the previous
two modules. _Adaptive Rule-based Resource Configuration_ (Fig. 3) provides a cloud ontology and GUI for the users to encode ETL pipelines in KGs, which contain resource configuration information that is automatically reasoned by a set of adaptive Datalog rules. These rules consist of aggregation operations and parameterised functions, where the parameters in the functions are learned via ML.
### Semantic Data Integration
To accommodate the diverse data sources/formats and convert all data to uniform data formats [6, 7], we employ domain ontologies as the data models and semantic mappings (Data-to-DO, data to domain ontology) that map diverse data sources to the data models. In particular, the domain ontologies capture the knowledge of manufacturing processes, data, and assets. The welding ontology is in OWL 2 QL, with 1542 axioms, which define 84 classes, 123 object properties and 246 datatype properties. The classes capture concepts such as welding operations, welding machines, welding products (spots), welding programs, sensor measurements, monitoring status, control parameters, etc. The diverse data sources have discrepancies in data formats (e.g., CSV, JSON, XML), feature names, feature composition (some features exist in some sources but not in others), etc. All features in the different data sources of the same welding process are mapped (one-to-one mapping for each data source) to object properties (for foreign keys) or datatype properties (for attributes) in the same domain ontology. All features are renamed, and data formats are unified in one of the selected formats, usually CSV (or a relational database).
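The following minimal sketch illustrates the renaming step of such a mapping in Python; the source identifiers, raw feature names and unified property names are purely illustrative and are not the actual Bosch schemas or ontology terms.

```python
import json

# Hypothetical per-source mappings from raw feature names to unified names
# corresponding to datatype properties of the domain ontology.
SOURCE_MAPPINGS = {
    "factory_A_csv":  {"I_weld": "weldingCurrent", "U_weld": "weldingVoltage", "prog": "weldingProgramID"},
    "factory_B_json": {"current": "weldingCurrent", "voltage": "weldingVoltage", "program_id": "weldingProgramID"},
}

def integrate_record(raw_record: dict, source_id: str) -> dict:
    """Rename the features of one raw record to the unified data model;
    features unknown to the mapping are dropped, missing ones stay absent."""
    mapping = SOURCE_MAPPINGS[source_id]
    return {unified: raw_record[raw] for raw, unified in mapping.items() if raw in raw_record}

# Two heterogeneous records end up in the same uniform schema.
rec_a = integrate_record({"I_weld": 10.2, "U_weld": 2.1, "prog": 7}, "factory_A_csv")
rec_b = integrate_record(json.loads('{"current": 9.8, "voltage": 2.0, "program_id": 7}'), "factory_B_json")
print(rec_a, rec_b)
```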
Figure 3: Architectural overview of SemCloud including (1) semantic data integration; (2) distributed computing; (3) adaptive rule-based reasoning; each of which consists of a set of semantic artefacts (barrels) or modules (boxes). The Ⓡ marks indicate the requirements that the artefacts or modules are intended to address.
### Distributed ETL and Data Analysis
**Distributed ETL.** To enable distributed ETL, we need to find a strategy that makes the ETL parallelisable, treat data streams with different updating frequencies, and handle data dependencies. SemCloud achieves this by breaking down the ETL into pipelines of four steps: _retrieve_, _slice_, _prepare_, and _store_ (Fig. 4). Data retrieval constitutes the process of retrieving data from databases or online streams present in different factories and can normally be handled by a single computing node. These data are then split into subsets by the step _slice_ to achieve parallel processing according to data semantics that make the splitting meaningful and each subset independently processable. In the welding use case, each subset only belongs to one welding machine, because the data analysis of welding quality monitoring of one machine can be safely assumed to be observable or predictable with data from this particular machine, without considering other machines. In this way, the datasets are separated into subsets that are computationally independent. We then deploy the ETL stage and the subsequent two stages on the cloud system that has resources for computing, storage and networking, to reduce the overall computational time.
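A minimal sketch of the _slice_ step, assuming the integrated data are available as a table with a welding-machine identifier column (the column name is our own placeholder):

```python
import pandas as pd

def slice_by_machine(df: pd.DataFrame, machine_col: str = "machineID") -> dict:
    """Split the integrated welding data into computationally independent
    subsets, one per welding machine, so that each slice can be prepared
    and analysed on a separate computing node."""
    return {machine: group.copy() for machine, group in df.groupby(machine_col)}

# Example: slices = slice_by_machine(integrated_df); each value can then be
# dispatched to its own 'prepare' container.
```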
An important strategy here is the use of hierarchically dependent parallel pipelines. We consider two types of data streams: the less frequently updated one with usually smaller volume, and the (more) frequently updated one with usually larger volume. (1) The former has only three ETL steps: retrieve, prepare and store, because it requires the resources (computing, storage, network) of one single cloud node and does not need slicing to parallelise. The intermediate results of this ETL pipeline are stored in a database using in-memory storage for fast query access. In the welding case, the metadata and reference curves follow this ETL pipeline. (2) The latter has four ETL steps because it involves the application of slicing for parallelising. The results of this ETL pipeline are stored on dedicated cloud storage. In the welding case, the processing of feedback curves and the main protocol requires more resources and is implemented through this pipeline. The ETL pipelines of these two data streams are dependent because the
Figure 4: (a) Dependent parallel ETL pipelines that break down ETL into four steps: _Retrieve_, _Slice_, _Prepare_, _Store_; (b) The cloud deployment of one step in the ETL pipeline as a container, where the MOM is responsible for the communication between steps of the ETL pipeline. (c) Zoom in one step, we see the three parts: _Input Processing_, _Output Processing_ and _Pipeline Step Task_.
_prepare_ step of the frequent data stream must pull intermediate results of the less frequent data stream.
**Distributed data analysis.** The key to distributed data analysis is to make assumptions about what computation is parallelisable and to split the computations into independent computing tasks. Here the target of data analysis is to predict the welding quality quantified by quality indicators such as spot diameters or Q-Values [11, 7]. The tasks include classification (good or bad quality), regression (diameter values [12] or Q-Values), and forecasting [13] (predicting quality in the future). In practice, the latter two are preferred by domain experts because they provide more insights than a simple classification.
Both classic methods (feature engineering with, e.g., linear regression) and deep learning (LSTM networks) are employed. We developed and tested various ML models [11, 13]. We used model performance for tuning the hyper-parameters and considered both model performance and adoption difficulty for selecting the best models [7]. These models take input features such as sensor measurements, monitoring status and control parameters and predict the quality indicators. The training was done with various regimes [14]: the ground-truth training data included simulation data, lab data, and historical production data; the validation data were subsets of the training data for selecting hyper-parameters; the test data came from either the same welding machines or different machines (testing transferability). According to domain knowledge, we assume the interplay between welding machines to be only marginally significant and that it is safe to predict the welding quality of one welding machine only by using information of this welding machine. This assumption has been verified and obtained a prediction error of about 2% [11]. Thus, the data analysis on data of each welding machine can be performed independently if each subset contains all data of one machine.
**Cloud Deployment Orchestration.** To orchestrate the distributed computation, SemCloud encapsulates ETL steps or data analysis as containers and runs the containers independently and in parallel, allowing for deploying multiple instances of the same ETL step or data analysis [8]. Each instance is implemented by a template composed of three main parts: _Input Processing_, _Pipeline Step Task_, and _Output Processing_ (Figure 3(c)). The _Input Processing_ fetches data from remote sources and moves the data to the step workspace. The _Pipeline Step Task_ wraps custom code to process the fetched data. The _Output Processing_ component delivers the processed data to a specific destination, notifies that they are available for the next steps, and clears up temporary and input data from the step workspace. Configuration and attributes of a pipeline step can be expressed as parameters and injected at deployment time. The communication between the steps is handled by Message-oriented Middleware (MOM) [15] (Figure 3(b)), so that the consecutive steps do not need to run simultaneously for interaction, ensuring temporal decoupling. None of the sequential steps needs to know about the existence of other steps or their scaling configuration, thus achieving space decoupling. Therefore, it is possible to assign more instances to bottlenecked pipeline steps that are more computationally heavy and reduce the overall processing time.
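The sketch below gives one possible shape of such a step template in Python; the MOM client interface (consume/publish), queue names and message format are placeholders rather than SemCloud's actual APIs.

```python
import json

class PipelineStep:
    """Minimal sketch of the three-part template: Input Processing,
    Pipeline Step Task, Output Processing."""

    def __init__(self, mom_client, in_queue: str, out_queue: str):
        self.mom, self.in_queue, self.out_queue = mom_client, in_queue, out_queue

    def input_processing(self) -> str:
        # Wait for the previous step's notification and obtain a reference to
        # its output data; temporal decoupling means the producer need not be
        # running at this moment.
        return json.loads(self.mom.consume(self.in_queue))["data_ref"]

    def pipeline_step_task(self, data_ref: str) -> str:
        # Custom code (e.g., the 'slice' or 'prepare' logic) is wrapped here.
        return data_ref

    def output_processing(self, result_ref: str) -> None:
        # Notify the next step that processed data are available; a real step
        # would also clear temporary and input data from its workspace.
        self.mom.publish(self.out_queue, json.dumps({"data_ref": result_ref}))

    def run_once(self) -> None:
        self.output_processing(self.pipeline_step_task(self.input_processing()))
```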
### ETL Pipeline Generation
**Cloud Ontology.** SemCloud provides users a GUI to construct ETL pipelines and encode them into knowledge graphs, based on a SemCloud ontology (Figure 5a). The ontology _SemCloud_ is written in OWL 2, and consists of 20 classes and 165 axioms. It has three main classes: _DataEntity_, _Task_, and _Requirement_. _DataEntity_ refers to any dataset to be processed; _Task_ has sub-classes that represent the four types of tasks in the data preparation: _retrieve_, _slice_, _prepare_, and _store_; and _Requirement_ describes the requirements for computing, storage and networking resources.
**ETL Pipeline Generation in KGs.** We now illustrate the generation of ETL pipelines in knowledge graphs with the example in Figure 5b. The data for welding condition monitoring have multiple levels of updating frequencies, which should be accommodated by the ETL pipelines. For example, data that are generated for each welding operation are updated after each welding operation, and thus are updated very frequently (about one second for one operation). For these data, the users construct an ETL pipeline p1 with four layers (via GUI). Firstly, data are "retrieved" from the welding factories. Thus, the layer l1 is of type _RetrieveLayer_, and has the task t1 of type _Retrieve_. The task t1 has an IO handler io, which has an output d1 of type _DataEntity_. Then the data are read in by a task t2 of type _Slice_, and "sliced" into smaller pieces d2, d3. These slices are input to different computing nodes to do tasks t3 and t4 of type _Prepare_. Finally, all prepared data entities are stored by t5 of type _Store_.
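A fragment of the pipeline p1 above could, for instance, be materialised as RDF triples with rdflib as sketched below; the namespace IRI and the object-property names (hasLayer, hasTask, hasIOHandler, hasOutput) are our own illustrative choices and may differ from the published SemCloud ontology.

```python
from rdflib import Graph, Namespace, RDF

SC = Namespace("http://example.org/semcloud#")  # hypothetical namespace
g = Graph()
g.bind("sc", SC)

# Pipeline p1 with a Retrieve layer l1, task t1 and its output data entity d1.
g.add((SC.p1, RDF.type, SC.ETLPipeline))
g.add((SC.l1, RDF.type, SC.RetrieveLayer))
g.add((SC.t1, RDF.type, SC.Retrieve))
g.add((SC.d1, RDF.type, SC.DataEntity))
g.add((SC.p1, SC.hasLayer, SC.l1))       # property names are illustrative
g.add((SC.l1, SC.hasTask, SC.t1))
g.add((SC.t1, SC.hasIOHandler, SC.io))
g.add((SC.io, SC.hasOutput, SC.d1))

print(g.serialize(format="turtle"))
```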
### Adaptive Datalog Rules Inference for Resource Configuration
Obtaining an optimised cloud configuration is not a trivial task. Cloud experts typically try different configurations by testing the system with various settings and use the system performance under these test settings as heuristics to manually decide the cloud configurations. In SemCloud, we design a set of declarative
Figure 5: (a) Schematic illustration of the SemCloud ontology. (b) Partial illustration of a KG for ETL-Pipeline.
adaptive rules written using logic programming to make the cloud configurations explicit, automated, and less error-prone, where the system optimisation is done with the help of external functions learned with ML, such that the rules can also be used by non-cloud experts.
To this end, SemCloud uses adaptive rules in Datalog with aggregation and calls to external predicates learned by ML (they are adaptive because the function parameters are learned, see Sect. 3.6). In particular, we consider non-recursive rules of the form \(A\gets B_{1},\ldots,B_{n}\) where \(A\) is the head of the rule (the consequence of the rule application) and \(B_{1},\ldots,B_{n}\) are either predicates that are joined or aggregate functions that filter the results. For the theory of Datalog we refer to [16, 17, 18, 19]. In the following we provide some example rules and explain their logic.
We have six different Datalog programs (sets of rules) that run independently and that are divided into three steps: (i) graph extraction rules that populate rule predicates by extracting information from the ETL-pipeline KGs (e.g., \(\mathit{rule}_{0}\)); (ii) resource estimation rules that estimate the resource consumption for the given pipeline if there is only one computing node (assuming infinitely large nodes, e.g., \(\mathit{rule}_{2}\)); (iii) resource configuration rules that find the optimal resource allocation for distributed computing of the given pipeline (e.g., \(\mathit{rule}_{3}\)).
**Graph extraction rules.** These rules populate the predicates so that these predicates will be used for the resource estimation and configuration. The \(\mathit{rule}_{0}\) exemplifies populating the predicate subgraph1 that is related to the ETL pipeline p.
subgraph1(p,n,v,ms,mp,ssl,spr,sst) \(\leftarrow\) ETLPipeline(p), hasInputData(p,d), hasVolume(d,v), hasNoRecords(d,n), hasEstSliceMemory(p,ms), hasEstPrepareMemory(p,mp), hasEstSliceStorage(p,ssl), hasEstPrepareStorage(p,spr), hasEstStoreStorage(p,sst) (\(\mathit{rule}_{0}\)) Similarly, we have rule \(\mathit{rule}_{1}\) that creates subgraph2(p,n,v,ms,mp,ts,tp,nc,ns,mrs,mrp,mode).
**Resource estimation rules.** These rules are used to estimate required resources assuming one computing node. For example, \(\mathit{rule}_{2}\) estimates the required slice memory (ms), prepare memory (mp), slice storage (ssl), prepare storage (spr), and the store storage (sst). The rule then stores these estimations in the predicate estimated_resource.
estimated_resource(p,ms,mp,ssl,spr,sst) \(\leftarrow\) subgraph1(p,n,v,ms,mp,ssl,spr,sst), ms=@func_ms(n,v), mp=#avg{@func_mp(n,v,ms,i):range(i)}, spr=#avg{@func_spr(n,v,ssl,i):range(i)}, ssl=@func_ssl(n,v), sst=@func_sst(n,v,ssl,spr) (\(\mathit{rule}_{2}\)) where @func_ms, @func_ssl, @func_sst, etc. are parameterised ML functions whose parameters are learnt in the _rule parameter learning_ (Sect. 3.6). In the implementation, those are defined as external built-in functions that are called
in the grounding phase of the program and then are replaced by concrete values [16, 17]. We also have other resource estimation rules that estimate other resources, such as CPU consumption.
**Resource configuration rules.** These rules find the optimal cloud configurations based on the estimated cloud resources. \(rule_{3}\) is an example for deciding the slicing strategy and the storage strategy, and finding the optimal resource configuration such as the chunk size (nc), slice size (ns), and memory reservations for _slice_ (mrs) and for _prepare_ (mrp). In essence, \(rule_{3}\) stipulates that if the maximum of the estimated slice memory (ms) and prepare memory (mp) is greater than a given threshold (c1*nm), and the maximum of the estimated slice storage (ssl), prepare storage (spr), and store storage (sst) is smaller than (or equal to) another threshold (c2*ns), then the chosen strategy for the given pipeline is _slicing_ (thus nc and ns are computed) and _fast storage_ (fs), where the thresholds are calculated from cloud attributes.
configured_resource(p,nc,ns,fs,mrs,mrp) \(\leftarrow\) subgraph2(p,n,v,ms,mp,ts,tp,nc,ns,mrs,mrp,mode), estimated_resource(p,ms,mp,ssl,spr,sst), CloudAttributes(c,c1,c2,c3,nm,ns,fs,cs), #max{ms,mp} > (c1*nm), #max{ssl,spr,sst} <= (c2*ns), nc = @func_fs_1(n,v,ts,tp), ns = @func_fs_2(n,v,ts,tp), mrs = #min{ms, #max{@func_ss(n,v,nc,ns), c3*ms}}, mrp = #min{mp, #max{@func_pn(n,v,nc,ns), c3*mp}} (\(\mathit{rule}_{3}\))
### Rule Parameter Learning with Machine Learning
The functions in the adaptive rules are in the form of ML models. The functions in the _resource estimation rules_ are selected as the best model resulting from training three ML methods on the pilot running statistics. These three ML methods are _Polynomial Regression (PolyR)_ (Eq. 1), _Multilayer Perceptron (MLP)_ (Eq. 2), and _K-Nearest Neighbours (KNN)_ (Eq. 3). We selected these three methods because they are representative classic ML methods suitable for the scale of the pilot running statistics. PolyR transforms the input features (\(x_{i},i\in\{1,2,...,n\}\), where \(n\) is the number of input features) into a series of polynomial vectors (\([1,x_{i},x_{i}^{2},...x_{i}^{m}]\), where \(m\) is the highest degree), and then constructs a predictor by multiplying with a weight matrix (\(\mathbf{W}\in\mathbb{R}^{m\times n}\)). MLP consists of multiple layers of perceptrons, where each perceptron applies the _ReLU_ function to the weighted sum of all neuron outputs of the previous layer plus the bias terms, \(\mathbf{W}^{(l-1)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l-1)}\). For a given data point \(i\) whose output feature \(y_{i}\) is to be predicted, KNN finds the k samples (\(s\), consisting of pairs of input \(\mathbf{x}_{s}\) and output \(y_{s}\)) that are most similar to \(i\) (the k nearest neighbours \(\mathcal{N}_{k}\)) in the training data, and uses a weighted sum (with weights given by the reciprocal of the distance \(d(s,i)\)) of the output features \(y_{s}\) in \(\mathcal{N}_{k}\) as the estimation.
The _resource configuration rules_ are trained with the same three ML methods and with optimisation techniques such as Bayesian optimisation or grid search. For example, the functions @func_fs_1 and @func_fs_2 that find the optimal chunk size (nc) and slice size (ns) are trained by finding the arguments (nc, ns) that give the minimal total computing time (\(t_{\text{total}}\))
\[\texttt{nc},\texttt{ns}=\arg\min_{\texttt{nc},\texttt{ns}}\ t_{\texttt{total}}=\arg\min_{\texttt{nc},\texttt{ns}}f(\texttt{v},\texttt{n},\texttt{nc},\texttt{ns},t_{\texttt{slice}},t_{\texttt{prepare}}),\] where, in the notation of Eqs. 1-3, the inputs are expanded as \(\mathbf{x}_{i}=[1,x_{i},x_{i}^{2},\ldots,x_{i}^{m}]^{\mathsf{T}}\) for PolyR and \(\mathbf{h}^{(0)}=\mathbf{x}=[x_{1},x_{2},\ldots,x_{n}]^{\mathsf{T}}\) for the MLP.
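As an illustration of how such parameterised functions can be fitted, the sketch below trains the three regressors with the hyper-parameters reported in Sect. 4.2 (PolyR degree 4, MLP hidden layers of 10 and 9 neurons, KNN with 2 distance-weighted neighbours) using scikit-learn; the input and target arrays are synthetic placeholders standing in for the pilot running statistics.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

# Placeholder pilot statistics: inputs such as [no. of records n, volume v],
# target such as the estimated slice memory ms.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = 0.5 * X[:, 0] + X[:, 1] ** 2

models = {
    "PolyR": make_pipeline(PolynomialFeatures(degree=4), LinearRegression()),
    "MLP":   MLPRegressor(hidden_layer_sizes=(10, 9), max_iter=2000, random_state=0),
    "KNN":   KNeighborsRegressor(n_neighbors=2, weights="distance"),
}
for name, model in models.items():
    model.fit(X, y)  # the fitted model plays the role of, e.g., @func_ms(n, v)
    print(name, model.predict(X[:3]))
```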
system was deployed in a node that meets the total requirements for the experimental data to monitor the resource usage.
It can be seen that the memory usage of the computing instance for the legacy solution increases monotonically with the processed input data volume, while SemCloud requires almost zero increase in memory allocation, which means SemCloud can deploy the ETL process on many computing instances with no extra memory demand. Figure 6b shows SemCloud requires slightly more CPU power. This is expected and understandable, because the distributed deployment consumes more computing power per unit of time, but decreases the overall computing time. The latter is confirmed by Figure 6c: as the input data volume increases, the reduced computing time brought by SemCloud becomes increasingly significant (R1).
### Evaluation of Rule Parameter Learning and Inference
**Pilot Running Statistics**. To verify the scalability and accuracy of the rule parameter learning and inference, we gather pilot running statistics, train and test the ML functions in SemCloud. We run SemCloud repeatedly 3562 times with different sizes of subsets of the welding dataset in Sect. 4.1 and gather pilot running statistics. These statistics include data information, e.g., input data size, different configurations, e.g., slice size, and recorded resource consumption e.g., memory consumption, mCPU consumption.
**Experiment Setting.** We split the pilot running statistics so that 80% are for rule parameter training and 20% for rule testing and inference. Three ML models are trained and tested: _PolyR_, _MLP_, and _KNN_. We adopt a grid search strategy for hyper-parameter tuning. The final hyper-parameters are: PolyR, degree 4; MLP, 2 hidden layers with 10 and 9 neurons; KNN, 2 neighbours.
Figure 6: Performance comparison by (a) memory usage, (b) total CPU usage (integrated over time), (c) consumed time: SemCloud significantly reduces the memory usage and consumed time (by about 50%), and uses slightly more total CPU, compared to the _legacy_ solution (without SemCloud). X-axis: processed input data volume.
**Performance Metrics.** We use several performance metrics: _normalised mean absolute error_ (_nmae_) [23] to measure prediction accuracy, the minimal training data amount (Min. \(|\mathcal{D}_{train}|\)) for yielding satisfactory results, the optimisation time (Opt. time), the learning time and the inference time. Intuitively, _nmae_ reflects the scale-independent average prediction error. It is computed as the _mean absolute error_ normalised by the mean value of the configuration \(\bar{c}\): _nmae_\(=mae/\bar{c}\). Here, _mae_ is the mean absolute error between the ground truth configuration \(c\) and the predicted configuration \(\hat{c}\): \(mae=\frac{1}{|\mathcal{D}_{test}|}\sum_{\mathcal{D}_{test}}|c-\hat{c}|\), and \(\bar{c}=\sum_{\mathcal{D}_{test}}c/|\mathcal{D}_{test}|\). We normalise \(mae\) because its scale depends on the variable for which it is calculated. When divided by the mean value of the variable, it becomes _nmae_, which is scale-independent.
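For completeness, a direct implementation of the metric (a straightforward reading of the formulas above):

```python
import numpy as np

def nmae(c_true: np.ndarray, c_pred: np.ndarray) -> float:
    """Normalised mean absolute error: mae divided by the mean of the
    ground-truth configuration values, making the error scale-independent."""
    mae = np.mean(np.abs(c_true - c_pred))
    return float(mae / np.mean(c_true))
```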
**Learning and Inference Results.** The optimisation results for the _slice size_ show that we can configure it to find a "sweet spot" that minimises the total consumed time (Figure 7a). The performance of rule learning and inference is shown in Table 1 (R3). The learning time is the time it takes to train the models. The inference time includes the model inference time and the rule reasoning time. It can be seen that PolyR has the best prediction accuracy, requires the least training data, and consumes the least time. Therefore, PolyR generates the best results and is selected for the use case. We presume the reason is that PolyR works better with small amounts of relatively simple data (3562 repeated running statistics). We can also see that MLP is not very stable (Figure 7b). This is due to the random initialisation effect of MLP.
## 5 Discussion on General Impact and Related Works
**Uptake for the cloud community.** SemCloud is an attempt for democratising cloud systems for non-cloud experts. We hope to inspire research and a broad range of users who are pursuing scaling data science solutions on the cloud, but
\begin{table}
\begin{tabular}{l c c c} \hline Metric & PolyR & MLP & KNN \\ \hline _nmae_ & 0.0671 & 0.0947 & 0.0818 \\ Min. \(|\mathcal{D}_{train}|\) & 7.42\% & 50.97\% & 10.00\% \\ Opt. time & 1.12s & 174.32s & 7.25s \\ Learning time & 20.82ms & 120.31ms & 27.52ms \\ Inference time & \(<\)1.00ms & \(<\)1.00ms & \(<\)5.00ms \\ \hline \end{tabular}
\end{table}
Table 1: Parameter learning and reasoning results, recorded on Intel Core i7-10710U.
Figure 7: (a) Optimisation to find the best slice size for the least time, when chunk size is fixed. (b) Comparing ML methods to find the minimal training data
are impeded by the long training time for acquiring cloud expertise. Providing a dynamically scalable (on a step and pipeline level), general-purpose solution for big data pipelines on heterogeneous resources that a broad audience can use is an open research topic [24, 25]. The currently available multitude of tools for big data pipeline management only partially addresses these issues as shown in [26]. Data pipeline management approaches in literature such as [27, 28, 29] also partially address the issue but either specifically support a knowledge domain (scientific workflows, ad-hoc languages), fail to address issues related to individual step scalability, or are not well-suited for dynamic long-running jobs. Other works that touch that topic [30, 31, 32, 33] also do not address the automatic resource allocation issue and are not designed for non-cloud experts. Our approach tackles these issues and the presented principles should be easy to reproduce.
**Uptake in terms of semantic technology.** We open-source our cloud ontology [34]. We hope this ontology can facilitate research of semantic technology in the scalability challenges, that the tenets of explicit, transparent, and shared knowledge can advance in the practice in academia and industry. We developed it as we did not find a suitable ontology for our challenges. Past works about cloud ontologies focus more on describing the different layers and components [4, 5], services [35], functional or non-functional features and the interaction between the layers [36]. They cover the cloud tasks and resource allocation, but to a limited extent. There exist other works about the resource management topic [37, 38, 39], but they do not provide mechanism or reasoning for adaptive and automatic resource configuration. Works about cloud reasoning are focused on other aspects like security attacks [40], minimising sources like computing nodes [41], computational requirements [42], service placement [43], verifying policy properties [44], deploying semantic reasoning on the cloud [45]. There is insufficient discussion on helping users to automate the cloud resource configuration.
**Uptake by stakeholders and benefits.** Semantic technologies play an increasingly important role in modern industrial practice. Ontologies, as a good way for formal description of knowledge, offer unambiguous "lingua franca" for cross-domain communication. They can help users to perform tasks of a remote domain that otherwise would be error-prone, time-consuming and cognitively demanding. We incorporated rule-based reasoning, falling in the category of symbolic reasoning, with machine learning, which is a type of sub-symbolic reasoning. The combination takes benefits from both: SemCloud becomes more agnostic of cloud infrastructure and adapts to the resource conditions, thus exploiting explicit domain knowledge via semantics and learning implicit relationship via ML.
In addition, we tested SemCloud with users of various backgrounds (welding experts, data scientists, semantic experts). SemCloud could improve their working efficiency. Before using SemCloud, users who are non-cloud experts had a very limited understanding of the cloud system and did not use it. Through SemCloud, these users could obtain a better understanding of the cloud system, start using it, and rely on SemCloud to automatically configure the resource allocation. We tested the GUI with the users to collect feedback for improving the usability and expanding the functionalities.
**Lessons Learnt on costs and risks.** The main costs for development of such systems comprise the early development time for the semantic infrastructure that mediates between the cloud resources, data analysis solutions and users. Naturally, these costs vary depending on the specific project. It was manageable in our case, but should be carefully evaluated for each project individually. The key lessons learnt for reducing costs is that a good cross-domain communication framework is essential, where experts of different backgrounds can speak a common language and reduce misunderstanding and communication time. A possible and important risk is that the assumption could be wrong as to whether and to what extent the ETL and data analysis can be parallelised. It is recommended to verify the assumption early to avoid further costs.
## 6 Conclusion and Outlook
**Conclusion.** This work presents our SemCloud system motivated by a Bosch use case of welding monitoring, for addressing the scalability challenges in terms of _data volume_, _variety_, and more _users_. SemCloud provides semantic abstraction that mediates between the users, ETL and data analysis, as well as cloud infrastructure. The scalability in terms of data variety is addressed by semantic data integration, data volume by distributed ETL and data analysis, and scalability to more users by adaptive Datalog rule-based resource configuration. These Datalog rules are adaptive because they have parameterised functions learnt from pilot running statistics via ML, a combination of symbolic and sub-symbolic approaches. We evaluated SemCloud extensively through cloud deployment with large volume of industrial data, and rule learning from thousands of pilot runs of the system, showing very promising results.
**Outlook.** SemCloud is under the umbrella of Neuro-Symbolic AI for Industry 4.0 at Bosch [46], which aims at enhancing manufacturing with both symbolic AI (such as semantic technologies [47]) for improving transparency [48] and ML for prediction power [49]. Bosch is developing user-friendly cloud technology in the framework of the EU project DataCloud with many EU partners [50]. SemCloud is partially developed with production end-users and currently deployed in a Bosch evaluation environment. We plan to push it into production to test with more users and collect feedback, and to work together with EU partners to transfer the knowledge and experience to other manufacturing domains and increase its wide adoption. We also plan to develop formal theories for knowledge representation and reasoning for cloud technology and automatic resource configuration, e.g., better modelling frameworks, advanced reasoning rules, and deeper integration of symbolic and sub-symbolic reasoning.
**Acknowledgements.** The work was partially supported by the European Commission funded projects DataCloud (101016835), enRichMyData (101070284), Graph-Massivizer (101093202), Dome 4.0 (953163), OntoCommons (958371), and the Norwegian Research Council funded projects (237898, 323325, 309691, 309834, and 308817). |
2306.06257 | Vector Summaries of Persistence Diagrams for Permutation-based
Hypothesis Testing | Over the past decade, the techniques of topological data analysis (TDA) have
grown into prominence to describe the shape of data. In recent years, there has
been increasing interest in developing statistical methods and in particular
hypothesis testing procedures for TDA. Under the statistical perspective,
persistence diagrams -- the central multi-scale topological descriptors of data
provided by TDA -- are viewed as random observations sampled from some
population or process. In this context, one of the earliest works on hypothesis
testing focuses on the two-group permutation-based approach where the
associated loss function is defined in terms of within-group pairwise
bottleneck or Wasserstein distances between persistence diagrams (Robinson and
Turner, 2017). However, in situations where persistence diagrams are large in
size and number, the permutation test in question gets computationally more
costly to apply. To address this limitation, we instead consider pairwise
distances between vectorized functional summaries of persistence diagrams for
the loss function. In the present work, we explore the utility of the Betti
function in this regard, which is one of the simplest function summaries of
persistence diagrams. We introduce an alternative vectorization method for the
Betti function based on integration and prove stability results with respect to
the Wasserstein distance. Moreover, we propose a new shuffling technique of
group labels to increase the power of the test. Through several experimental
studies, on both synthetic and real data, we show that the vectorized Betti
function leads to competitive results compared to the baseline method involving
the Wasserstein distances for the permutation test. | Umar Islambekov, Hasani Pathirana | 2023-06-09T21:01:41Z | http://arxiv.org/abs/2306.06257v1 | # Vector Summaries of Persistence Diagrams for Permutation-based Hypothesis Testing
###### Abstract
Over the past decade, the techniques of topological data analysis (TDA) have grown into prominence to describe the shape of data. In recent years, there has been increasing interest in developing statistical methods and in particular hypothesis testing procedures for TDA. Under the statistical perspective, persistence diagrams - the central multi-scale topological descriptors of data provided by TDA - are viewed as random observations sampled from some population or process. In this context, one of the earliest works on hypothesis testing focuses on the two-group permutation-based approach where the associated loss function is defined in terms of within-group pairwise bottleneck or Wasserstein distances between persistence diagrams [1]. However, in situations where persistence diagrams are large in size and number, the permutation test in question gets computationally more costly to apply. To address this limitation, we instead consider pairwise distances between vectorized functional summaries of persistence diagrams for the loss function. In the present work, we explore the utility of the Betti function in this regard, which is one of the simplest function summaries of persistence diagrams. We introduce an alternative vectorization method for the Betti function based on integration and prove stability results with respect to the Wasserstein distance. Moreover, we propose a new shuffling technique of group labels to increase the power of the test. Through several experimental studies, on both synthetic and real data, we show that the vectorized Betti function leads to competitive results compared to the baseline method involving the Wasserstein distances for the permutation test.
Topological data analysis, Persistent homology, Persistence diagram, Betti function, Hypothesis testing
## 1 Introduction
Topological data analysis (TDA) provides tools to describe the shape structure underlying data [2, 3]. One such widely used tool is the theory of _persistent homology_, which uses algebraic techniques to quantify the global topology and local geometry of data [4, 5, 6]. A standard topological descriptor provided by persistent homology is a _persistence diagram_ (PD), which is commonly computed via an increasing family of combinatorial objects, called simplicial complexes, built on top of data points.
In applications, PDs, viewed as signatures of the data encoding its global topology and local geometry, are utilized within statistical or machine learning frameworks [7]. A direct way of doing this is to introduce a notion of distance between PDs, such as the bottleneck or Wasserstein distances [8, 9], and apply a distance-based method [10]. However, the fact that the space of PDs cannot be turned into a Hilbert space [11, 12], which is the input space for a wide class of machine learning methods, hinders the direct usage of PDs in practice. To address this challenge, one can use PDs indirectly within a learning task by mapping them into some Hilbert space [10]. For example, the mapping of PDs into a Hilbert space can be achieved via a suitable kernel function to be used within a kernel-based machine learning method [13, 14, 15, 16, 17]. Alternatively, PDs can be summarized as elements of \(\mathbb{R}^{d}\)[18, 19, 10, 20, 21, 22, 23]. A common way of extracting vector summaries from PDs is first to map them into a space of functions and then vectorize the corresponding functional summaries using a grid of scale values. In recent years, summarizing PDs by means of kernel functions and vectorization has been an active research domain in TDA.
Integration of TDA-based summaries within machine learning methods has proven very successful in a broad range of applications where data has intrinsic underlying shape structure (see e.g., [24] for a survey of topological methods in machine learning). By contrast, applications of TDA to statistics have not received the same level of attention and remain relatively less explored. A statistical approach to TDA views a given collection of PDs as a random sample drawn from some population or process and addresses problems such as deriving consistency and convergence results for TDA-based summaries, constructing confidence regions and performing hypothesis testing [7; 25; 26].
In the present paper, we aim to expand the scope of the permutation test introduced in [1] to test whether two groups of PDs are sampled from the same population. The associated loss function of the permutation test involves computing within-group pairwise bottleneck or Wasserstein distances between PDs. The computation of these distances requires finding optimal bijections between pairs of PDs and has a running time that grows cubically with the size of the diagrams. Moreover, the total run-time cost of the test has a quadratic dependence on the number of PDs. Although efficient algorithms exist for approximating the Wasserstein distance [27, 28], the hypothesis testing procedure in question can become of limited use when PDs are large in size and number. To deal with the issue of high computational cost, we instead consider pairwise distances between vectorized functional summaries of PDs, which have significantly lower time complexity (see the simulation experiment in Section 4.1 for comparison). Though we focus on and adopt for this work the Betti function1, which is one of the simplest univariate functional summaries extracted from a PD, our approach provides a general framework to incorporate any other vector summary in the hypothesis testing procedure used in [1]. We explore the permutation-based hypothesis testing procedure under the proposed approach in terms of performance and computational gain by conducting a range of experiments involving various types of input data (both synthetic and real) as well as different ways to construct a nested sequence of simplicial complexes on top of the data. Furthermore, we modify the permutation technique of the group labels and empirically demonstrate that the modification consistently leads to increased power of the test (for more details, see Sections 3 and 4.2).
Footnote 1: also known as _Betti curve_
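To make the proposed modification concrete, the sketch below implements a generic two-group permutation test whose loss sums within-group pairwise Euclidean distances between vectorized summaries; the exact loss and shuffling scheme of [1] and our modification of it are reviewed in Section 3, so this should be read only as a schematic stand-in.

```python
import numpy as np
from itertools import combinations

def within_group_loss(vectors: np.ndarray, labels: np.ndarray) -> float:
    """Sum of pairwise Euclidean distances between vector summaries within
    each group (smaller values indicate tighter, better separated groups)."""
    loss = 0.0
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        loss += sum(np.linalg.norm(vectors[i] - vectors[j]) for i, j in combinations(idx, 2))
    return loss

def permutation_test(vectors: np.ndarray, labels: np.ndarray, n_perm: int = 1000, seed: int = 0) -> float:
    """p-value: proportion of label shufflings whose loss is at most the
    observed loss; a small p-value suggests the two groups differ."""
    rng = np.random.default_rng(seed)
    observed = within_group_loss(vectors, labels)
    hits = sum(within_group_loss(vectors, rng.permutation(labels)) <= observed for _ in range(n_perm))
    return hits / n_perm

# vectors: one row per PD (e.g., its vectorized Betti function); labels: 0/1 group membership.
```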
Many of the univariate functional summaries of PDs, including the Betti function, are vectorized by evaluating them at each point of a super-imposed grid. We propose an alternative vectorization scheme, where the Betti function is vectorized by averaging it using integration (see Section 2 for more details).
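The sketch below contrasts the two vectorization schemes for a persistence diagram stored as an array of (birth, death) pairs; the integration-based variant is our schematic reading of the idea, and the exact definition together with the stability results appears in Section 2.

```python
import numpy as np

def betti_function(diagram: np.ndarray, t: float) -> int:
    """Betti function at scale t: number of persistence intervals [b, d)
    containing t; `diagram` is an (n, 2) array of (birth, death) pairs."""
    return int(np.sum((diagram[:, 0] <= t) & (t < diagram[:, 1])))

def vectorize_on_grid(diagram: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Standard vectorization: evaluate the Betti function at grid points."""
    return np.array([betti_function(diagram, t) for t in grid])

def vectorize_by_integration(diagram: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Averaging-based vectorization: the mean of the Betti function over each
    grid cell, computed exactly as the total overlap of the persistence
    intervals with the cell divided by the cell width."""
    lo, hi = grid[:-1], grid[1:]
    births, deaths = diagram[:, :1], diagram[:, 1:2]
    overlap = np.clip(np.minimum(deaths, hi) - np.maximum(births, lo), 0.0, None)
    return overlap.sum(axis=0) / (hi - lo)

dgm = np.array([[0.0, 1.0], [0.2, 0.8], [0.5, 2.0]])
grid = np.linspace(0.0, 2.0, 9)
print(vectorize_on_grid(dgm, grid), vectorize_by_integration(dgm, grid))
```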
One of the key properties of PDs is their stability against a certain class of perturbations in the input data [9, 29, 30, 8]. Stability results have also been derived for many of the vector summaries of PDs, such as persistence landscapes (PL) [18] and persistence images (PI) [19]. We derive stability results for the Betti function and its vector summary with respect to the Wasserstein distance2.
Footnote 2: The proposed vectorization of the Betti function and the mentioned stability results are also presented in our recent work on predicting anomalies in time-varying graphs (for a preprint of the paper, see [31]). For completeness, we provide the details of the new vectorization and the proofs of the stability results in Section 2 with minor modifications.
The rest of the paper is organized as follows. Section 2 covers the basic theory behind TDA, the new vectorization method for the Betti function and the associated stability results. In Section 3, we review the hypothesis testing procedure for PDs introduced in [1]. In Section 4, we present our experimental findings and comparisons with other vector summaries in terms of performance and run-time cost. Section 5 summarizes the main contributions of the paper and highlights directions for future research.
## 2 Theory
### Basics of TDA
Persistent homology is one of the main tools used in TDA to extract shape information from data [4; 5; 6]. The basic idea behind persistent homology is to build a nested sequence of topological spaces, often in the form of _simplicial complexes_ (indexed by a scale parameter), on top of data points and keep a record of the appearance and disappearance of various topological features at different scale resolutions. In this context, these topological features are "holes" of different dimensions - connected components, loops, voids, and their higher-dimensional versions whose emergence and subsequent disappearance are tracked using a concept of homology from algebraic topology. An increasing sequence of simplicial complexes (called a _filtration_) serves as a medium for gaining insight into the underlying shape structure of the data which is lost during sampling [32].
An (abstract) simplicial complex is a collection \(\mathcal{K}\) of subsets of some finite set \(\mathbb{X}\) (called a _vertex set_) such that \(\{x\}\in\mathcal{K}\) for all \(x\in\mathbb{X}\) and if \(\tau\in\mathcal{K}\) then \(\sigma\in\mathcal{K}\) for all \(\sigma\subseteq\tau\). If \(\tau\in\mathcal{K}\) consists of \(k+1\) points, it is called a \(k\)-simplex (or a simplex of dimension \(k\)). The dimension of a simplicial complex is the maximum dimension of its simplices. It is
convenient to identify a 0-simplex with a vertex, a 1-simplex with an edge and a 2-simplex with a triangle and so on (see Figure 1).
Building a filtration is an important step in obtaining topological information from data. A general way to construct a filtration is by using sublevelsets of a monotone function defined over a simplicial complex [32]. Let \(\mathcal{K}\) be a simplicial complex and \(g:\mathcal{K}\rightarrow\mathbb{R}\) a monotone function in the sense that if \(\sigma,\tau\in\mathcal{K}\) with \(\sigma\subseteq\tau\), then \(g(\sigma)\leq g(\tau)\). Let \(\{c_{1},c_{2},\ldots,c_{N}\}\) be the increasing sequence of distinct values of \(g\). The sublevelset filtration of \(\mathcal{K}\) induced by \(g\) is a nested family of simplicial complexes:
\[\emptyset\subseteq S_{1}(g)\subseteq S_{2}(g)\subseteq\ldots\subseteq S_{N}( g)=\mathcal{K},\]
where \(S_{i}(g)=\{\sigma\in\mathcal{K}\mid g(\sigma)\leq c_{i}\}\) and \(N\) is called the length of the filtration.
In applications, the choice of \(g\) to construct filtered simplicial complexes depends on the nature of the data set (such as size and dimension), computational considerations and what kinds of topological/geometric features of the data one would like to highlight [32]. For example, if \(\mathbb{X}\) is a finite set of points lying in some metric space \((M,\rho)\) and \(\mathcal{K}_{\mathbb{X},d}\) is the simplicial complex of dimension \(d\) containing all subsets of \(\mathbb{X}\) of size at most \(d+1\), then the sublevelset filtration of \(\mathcal{K}_{\mathbb{X},d}\) induced by \(g(\sigma)=\max_{x,y\in\sigma}\rho(x,y)\) is called the _Vietoris-Rips_ filtration. The length of this filtration is equal to the number of distinct pairwise distances among the points of \(\mathbb{X}\). Note that the number of simplices in \(\mathcal{K}_{\mathbb{X},d}\) increases exponentially with the size of \(\mathbb{X}\). Therefore, to reduce the computational cost, in practice one commonly constructs a shorter filtration before reaching \(\mathcal{K}_{\mathbb{X},d}\) by stopping at a scale value reasonably less than the maximum distance [32]. The _Cech_ filtration is obtained if \(g(\sigma)=\min\{r\geq 0\mid\exists x\in M\text{ such that }\rho(x,y)\leq r\text{ for all }y\in\sigma\}\). While the Cech filtration has more desirable theoretical properties, in applications the Vietoris-Rips filtration is often preferred due to its ease of construction and fast implementation.
If \(\mathbb{X}\) is a finite abstract set and \(g(\sigma)=\max_{x\in\sigma}f(x)\), where \(f:\mathbb{X}\rightarrow\mathbb{R}\), the corresponding filtration is called the _lower-star_ filtration. Thus, to construct the lower-star filtration, it suffices to have any real-valued function \(f\) defined over the vertex set \(\mathbb{X}\) which does not have to lie in a metric space. When \(\mathbb{X}\subseteq\mathbb{R}^{d}\) and \(g(\sigma)=\min\{r\geq 0\mid\cap_{x\in\sigma}B_{x}(\sqrt{r})\cap V_{x}\neq\emptyset\}\), where \(B_{x}(\sqrt{r})\) is the closed ball centered at \(x\) with radius \(\sqrt{r}\) and \(V_{x}=\{z\in\mathbb{R}^{d}\mid\|z-x\|_{2}\leq\|z-y\|_{2}\text{ for all }y\in\mathbb{X}\}\) (Voronoi cell of \(x\)), one obtains filtered _alpha_ complexes. Observe that unlike the Vietoris-Rips and Cech complexes, the dimension of an alpha complex can never go beyond \(d\). Moreover, the so-called _nerve theorem_ is applicable to filtered alpha complexes. However, these benefits come at the expense of increased computational cost which makes alpha complexes less practical in higher dimensions (i.e., when \(d\) is large) [32]. In Section 4, we illustrate the usage of most of these filtrations.
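For concreteness, the Vietoris-Rips filtration values \(g(\sigma)=\max_{x,y\in\sigma}\rho(x,y)\) can be computed directly from a point cloud. The sketch below is ours (not part of the original text): it assumes points in Euclidean space, enumerates all simplices of \(\mathcal{K}_{\mathbb{X},d}\), and supports the truncation at a maximal scale mentioned above; all function and variable names are illustrative.

```python
from itertools import combinations
import numpy as np

def rips_filtration(points, max_dim=2, max_scale=np.inf):
    """List (simplex, filtration value) pairs of the Vietoris-Rips filtration.

    points    : (n, k) array of points in Euclidean space
    max_dim   : maximal simplex dimension d (simplices have at most d+1 vertices)
    max_scale : optional truncation of the filtration, as discussed in the text
    """
    n = len(points)
    # pairwise Euclidean distances rho(x, y)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = []
    for k in range(max_dim + 1):                      # k-simplices have k + 1 vertices
        for sigma in combinations(range(n), k + 1):
            # g(sigma) = maximal pairwise distance among its vertices (0 for vertices)
            val = max((dist[i, j] for i, j in combinations(sigma, 2)), default=0.0)
            if val <= max_scale:
                simplices.append((sigma, val))
    simplices.sort(key=lambda s: (s[1], len(s[0])))   # sublevelset (filtration) order
    return simplices
```

Sorting the simplices by filtration value (and by dimension within ties) yields the nested sequence \(S_{1}(g)\subseteq S_{2}(g)\subseteq\ldots\); the exponential growth of \(\mathcal{K}_{\mathbb{X},d}\) in the size of \(\mathbb{X}\) is visible in the nested enumeration of vertex subsets.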
A standard summary of various topological features that appear and disappear as we traverse a filtration is called a _persistence diagram_ (PD). It is a multiset of points in \(\mathbb{R}^{2}\) where each point \((b,d)\) represents a topological feature that is "born" at scale value \(b\) and "dies" at scale value \(d\). Note that a PD is a multiset since it may contain multiple features with the same birth and death values. The scale values vary over the range of the function \(g\) which induces the filtration. A \(k\)-dimensional PD consists of points corresponding to topological features of homological dimension \(k\) (0 if connected components, 1 if one-dimensional holes (or loops), 2 if two-dimensional holes (or voids), etc). If the homological dimension of a PD is understood or irrelevant in a given context, we will omit it to reduce notation. The features with low persistence values (i.e. \(d-b\)) tend to correspond to noise, while those having high persistence are more likely to reflect true topological features of the underlying shape. Thus, a PD can be viewed as a topological fingerprint of the data it is computed from [7].
Shape structures underlying two sets of data points can be compared by introducing a distance metric between the computed PDs. The \(L_{p}\)\(q\)-Wasserstein distance [27], \(p,q\geq 1\), between two finite persistence diagrams \(D\) and \(\tilde{D}\) is defined as
\[d_{pq}(D,\tilde{D})=\Big{(}\inf_{\gamma:D\rightarrow\tilde{D}}\sum_{u\in D} \|u-\gamma(u)\|_{p}^{q}\Big{)}^{1/q}, \tag{1}\]
Figure 1: Geometric representation of simplices. Simplices serve as building blocks for simplicial complexes which are used to approximate the underlying shape structure lost during sampling.
where \(\|\cdot\|_{p}\) is the \(L_{p}\) norm on \(\mathbb{R}^{2}\) and \(\gamma\) is a bijection with \(D\) and \(\tilde{D}\) being enlarged to additionally contain points of the diagonal line \(\Delta=\{(b,d)\in\mathbb{R}^{2}\mid b=d\}\) with infinite multiplicity. Thus, \(\gamma\) pairs an off-diagonal point either with another off-diagonal point from the other diagram or with its own orthogonal projection on the diagonal line \(\Delta\). If \(p=\infty\), the metric is simply called the \(q\)-Wasserstein distance. If both \(p\) and \(q\) are infinite, one obtains the _bottleneck_ distance (where the summation is replaced by the supremum).
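To illustrate definition (1), the following sketch (ours, not the approximation algorithms of [27; 28]) computes the \(L_{p}\) \(q\)-Wasserstein distance for finite \(p\) and \(q\) between two small diagrams via the standard reduction to an assignment problem: each diagram is enlarged with the diagonal projections of the other diagram's points, and an optimal bijection is found with SciPy; all names are our own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_distance(D1, D2, p=1, q=1):
    """L_p q-Wasserstein distance between two finite PDs given as (birth, death) rows."""
    D1, D2 = np.atleast_2d(D1), np.atleast_2d(D2)
    n, m = len(D1), len(D2)
    # orthogonal projections onto the diagonal Delta: (b, d) -> ((b+d)/2, (b+d)/2)
    proj = lambda D: np.column_stack([D.sum(1) / 2, D.sum(1) / 2])
    A = np.vstack([D1, proj(D2)])       # enlarged first diagram, size n + m
    B = np.vstack([D2, proj(D1)])       # enlarged second diagram, size n + m
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], ord=p, axis=-1) ** q
    cost[n:, m:] = 0.0                  # matching diagonal points to each other is free
    row, col = linear_sum_assignment(cost)
    return cost[row, col].sum() ** (1.0 / q)
```

For large diagrams this direct computation becomes expensive, which is precisely the motivation for the vector summaries reviewed next.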
### Summary functions of persistence diagrams
Since the Wasserstein distance involves finding the optimal bijection, the overall computational cost can get very high if PDs are large in size or in number. One way of handling this limitation is to extract functional summaries from PDs, discretize them to obtain vectors in \(\mathbb{R}^{n}\) and use the \(L_{p}\) distance between the vectors instead of the Wasserstein distance. Vectorization methods such as _persistence landscapes_[18], _persistence images_[19] and _Betti functions_[7, 10] fall under this approach. Below we briefly review their constructions and vectorization methods.
The \(k\)th order landscape function of a persistence diagram \(D=\{(b_{i},d_{i})\}_{i=1}^{N}\) is defined as
\[\lambda_{k}(t)=k\text{max}_{1\leq i\leq N}\Lambda_{i}(t),\quad k\in\mathbb{N},\]
where \(k\)max returns the \(k\)th largest value and
\[\Lambda_{i}(t)=\left\{\begin{array}{ll}t-b_{i}&t\in[b_{i},\frac{b_{i}+d_{i} }{2}]\\ d_{i}-t&t\in(\frac{b_{i}+d_{i}}{2},d_{i}]\\ 0&\text{otherwise}\end{array}\right.\]
To obtain a finite-dimensional vector, the landscape function is evaluated at each point of an increasing sequence of scale values \(\{t_{1},t_{2},\ldots,t_{n}\}\):
\[(\lambda_{k}(t_{1}),\lambda_{k}(t_{2}),\ldots,\lambda_{k}(t_{n}))\in\mathbb{R }^{n}. \tag{2}\]
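As a small illustration (ours; not from the paper), the \(k\)th landscape function can be evaluated on a grid as in (2) using the fact that \(\Lambda_{i}(t)=\max(\min(t-b_{i},d_{i}-t),0)\).

```python
import numpy as np

def landscape_vector(diagram, k, grid):
    """Evaluate the k-th landscape function of a PD on an increasing grid of scale values."""
    b, d = diagram[:, 0], diagram[:, 1]
    t = np.asarray(grid)[:, None]                     # shape (n, 1) for broadcasting
    # tent functions Lambda_i(t), zero outside [b_i, d_i]
    tents = np.clip(np.minimum(t - b[None, :], d[None, :] - t), 0.0, None)
    tents.sort(axis=1)                                # ascending along the points axis
    n_points = tents.shape[1]
    return tents[:, n_points - k] if k <= n_points else np.zeros(len(grid))
```

Taking `k = 1` recovers the top landscape function, which is the one used in the experiments of Section 4.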
A persistence image is a vector summary of the _persistence surface_ function
\[\rho(x,y)=\sum_{u\in T(D)}f(u)\phi_{u}(x,y),\]
where \(T(D)=\{(b_{i},p_{i})\}_{i=1}^{N}\) with \(p_{i}=d_{i}-b_{i}\) (persistence), \(\phi_{u}\) is a differentiable probability distribution (usually a Gaussian) with mean \(u=(u_{x},u_{y})\) and \(f\) is a suitable weighting function with zero values along the diagonal \(\Delta\). The persistence surface is vectorized by averaging it over each cell of a superimposed two-dimensional grid:
\[\Big{(}\frac{1}{\Delta x_{1}\Delta y_{1}}\int_{x_{1}}^{x_{2}}\int_{y_{1}}^{y _{2}}\rho(x,y)dydx,\ldots,\frac{1}{\Delta x_{n-1}\Delta y_{m-1}}\int_{x_{n-1}} ^{x_{n}}\int_{y_{m-1}}^{y_{m}}\rho(x,y)dydx\Big{)}\in\mathbb{R}^{(n-1)(m-1)},\]
where \(\{x_{1},x_{2},\ldots,x_{n}\}\) and \(\{y_{1},y_{2},\ldots,y_{m}\}\) are two increasing sequences of birth and persistence values; \(\Delta x_{k}=x_{k+1}-x_{k}\) and \(\Delta y_{j}=y_{j+1}-y_{j}\).
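A minimal sketch of this vectorization (ours) uses the fact that the integral of an isotropic Gaussian over a rectangular cell factors into two one-dimensional normal CDF differences; the default weight \(f(\text{birth},\text{persistence})=\text{persistence}\) is our assumption, since the text only requires \(f\) to vanish on the diagonal.

```python
import numpy as np
from scipy.stats import norm

def persistence_image(diagram, x_grid, y_grid, sigma, weight=None):
    """Persistence image of a PD via exact cell averages of the persistence surface."""
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]              # T(D): (birth, persistence) pairs
    w = pers if weight is None else np.array([weight(bi, pi) for bi, pi in zip(birth, pers)])
    x_grid, y_grid = np.asarray(x_grid), np.asarray(y_grid)
    img = np.zeros((len(x_grid) - 1, len(y_grid) - 1))
    for wi, bi, pi in zip(w, birth, pers):
        # Gaussian mass of each cell factors into 1D normal CDF differences
        px = np.diff(norm.cdf(x_grid, loc=bi, scale=sigma))
        py = np.diff(norm.cdf(y_grid, loc=pi, scale=sigma))
        img += wi * np.outer(px, py)
    cell_areas = np.outer(np.diff(x_grid), np.diff(y_grid))
    return (img / cell_areas).ravel()                 # vector in R^{(n-1)(m-1)}
```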
The Betti function is one of the simplest summary functions and defined as a weighted sum of indicator functions determined by the points of a PD:
\[\beta(t)=\sum_{i=1}^{N}w(b_{i},d_{i})I_{[b_{i},d_{i})}(t), \tag{3}\]
where \(I_{[b_{i},d_{i})}(t)=1\) if \(t\in[b_{i},d_{i})\) and 0 otherwise, and \(w\) is a weight function. If \(w\equiv 1\) and the homological dimension of a PD is \(k\), \(\beta(t)\) is equal to the count of \(k\)-dimensional topological features that are born before or at \(t\) and still persist at \(t\).
Similar to the persistence landscape function, a common way to vectorize the Betti function is to evaluate it at each point of an increasing sequence of scale values \(\{t_{1},t_{2},\ldots,t_{n}\}\) and form a vector in \(\mathbb{R}^{n}\). Since the Betti function is easy to integrate, in the present work, we propose to vectorize it by averaging over the intervals formed by pairs of successive scale points:
\[\Big{(}\frac{1}{\Delta t_{1}}\int_{t_{1}}^{t_{2}}\beta(t)dt,\frac{1}{\Delta t_ {2}}\int_{t_{2}}^{t_{3}}\beta(t)dt,\ldots,\frac{1}{\Delta t_{n-1}}\int_{t_{n-1 }}^{t_{n}}\beta(t)dt\Big{)}\in\mathbb{R}^{n-1}. \tag{4}\]
We call this new vector representation a _vector of averaged Bettis_ (VAB). Unlike the common approach, the proposed vectorization method considers the behavior of the Betti function over the entire intervals determined by pairs of neighboring scale values. Note that if a grid of scale values is dense enough, the two vector summaries will be very
similar but high-dimensional. However, the new vectorization can also be used to compute informative low-dimensional vectors based on a sparse grid of scale values3.
Footnote 3: See the experiment of Section 3.1 in [31] which compares the two vectorization methods of the Betti function when the extracted vector summaries are low-dimensional.
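Since the Betti function is a step function, both vectorizations can be computed exactly. The sketch below (ours, not the authors' R implementation used in Section 4) returns the grid evaluation and the vector of averaged Bettis (4) side by side; by default \(w\equiv 1\).

```python
import numpy as np

def betti_vectors(diagram, grid, w=None):
    """Grid evaluation of the Betti function (3) and the vector of averaged Bettis (4)."""
    b, d = diagram[:, 0], diagram[:, 1]
    weights = np.ones(len(b)) if w is None else np.array([w(bi, di) for bi, di in zip(b, d)])
    t = np.asarray(grid)
    # (i) evaluate beta on the grid: weighted count of intervals [b_i, d_i) containing t_j
    on_grid = ((b[None, :] <= t[:, None]) & (t[:, None] < d[None, :])).astype(float) @ weights
    # (ii) average beta over [t_j, t_{j+1}]: overlap length of [b_i, d_i) with the interval
    overlap = np.clip(np.minimum(d[None, :], t[1:, None]) - np.maximum(b[None, :], t[:-1, None]),
                      0.0, None)
    vab = (overlap @ weights) / np.diff(t)
    return on_grid, vab
```

On a dense grid the two outputs nearly coincide, as noted above; on a sparse grid only the second one retains the behavior of \(\beta\) inside the intervals.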
Among the metrics for comparing PDs that do not involve vectorization is the _sliced Wasserstein_ distance [17]. It utilizes the fact that the Wasserstein distance is fast to compute if the points of PDs all lie on the same line. Thus, for the sliced Wasserstein distance the points of PDs are first projected onto a line through the origin and then the Wasserstein distance is computed between the two sets of projected points. Finally, the Wasserstein distances are averaged over all possible lines via integration.
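The following sketch (ours) approximates the sliced Wasserstein distance with a finite number of uniformly spaced directions, following the construction of [17] in which each diagram is first augmented with the diagonal projections of the other diagram's points so that the two projected sets have equal size and the 1D distance reduces to sorting.

```python
import numpy as np

def sliced_wasserstein(D1, D2, n_directions=10):
    """Approximate sliced Wasserstein distance between two PDs."""
    D1, D2 = np.atleast_2d(D1), np.atleast_2d(D2)
    proj = lambda D: np.column_stack([D.sum(1) / 2, D.sum(1) / 2])   # projections onto Delta
    A, B = np.vstack([D1, proj(D2)]), np.vstack([D2, proj(D1)])
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_directions, endpoint=False)
    total = 0.0
    for theta in thetas:
        direction = np.array([np.cos(theta), np.sin(theta)])
        a, b = np.sort(A @ direction), np.sort(B @ direction)
        total += np.abs(a - b).sum()       # 1D 1-Wasserstein between equal-size point sets
    return total / n_directions
```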
### Stability results for the Betti function and vector of averaged Bettis (VAB)
PDs are known to be stable to fluctuations with respect to the Wasserstein distance [9, 8]. Likewise, stability is a desirable property of any vector summary of PDs. For example, vector summaries such as persistence landscapes and persistence images have some type of stability guarantees [18, 19]. In the following, we derive stability results (Theorem 1 and Corollary 1) for the Betti function and its new proposed vectorization with respect to the \(L_{1}\) and \(L_{2}\) 1-Wasserstein distances. We assume that all PDs have finitely many off-diagonal points with finite death values. Under additional assumptions for the weight function \(w\) in (3), the stability result can be extended to PDs with infinite death values (see Remark 3). First, we prove a technical lemma needed for our main result (Theorem 1).
**Lemma 1**.: _Let \(w:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be a differentiable function with \(\|w\|_{\infty}=\sup_{z\in\mathbb{R}^{2}}|w(z)|<\infty\) and \(\|\nabla w\|_{\infty}=\sup_{z\in\mathbb{R}^{2}}\|\nabla w(z)\|_{2}<\infty\). Then for any \(u=(b_{u},d_{u})\), \(v=(b_{v},d_{v})\) with \(b_{u}\leq d_{u}<\infty\) and \(b_{v}\leq d_{v}<\infty\), we have_
\[\int_{\mathbb{R}}|w(u)I_{[b_{u},d_{u})}(t)-w(v)I_{[b_{v},d_{v})}(t)|dt\leq\|w\| _{\infty}\|u-v\|_{1}+L\|\nabla w\|_{\infty}\|u-v\|_{2}, \tag{5}\]
_where \(L\) is a non-negative constant such that \(L\geq\max\{d_{u}-b_{u},d_{v}-b_{v}\}\)._
Proof.: The proof consists of three cases. Let \(f(t)=|w(u)I_{[b_{u},d_{u})}(t)-w(v)I_{[b_{v},d_{v})}(t)|\) and note that by the fundamental theorem of calculus for line integrals, \(|w(u)-w(v)|\leq\|\nabla w\|_{\infty}\|u-v\|_{2}\).
Case 1: \([b_{u},d_{u})\cap[b_{v},d_{v})=\emptyset\). Without the loss of generality, assume \(b_{u}\leq d_{u}\leq b_{v}\leq d_{v}\). Then,
\[\int_{\mathbb{R}}f(t)dt =\int_{b_{u}}^{d_{u}}f(t)dt+\int_{b_{v}}^{d_{v}}f(t)dt\] \[=\int_{b_{u}}^{d_{u}}|w(u)|dt+\int_{b_{v}}^{d_{v}}|w(v)|dt\] \[\leq\|w\|_{\infty}|d_{u}-b_{u}|+\|w\|_{\infty}|d_{v}-b_{v}|\] \[\leq\|w\|_{\infty}|b_{v}-b_{u}|+\|w\|_{\infty}|d_{v}-d_{u}|\] \[=\|w\|_{\infty}(|b_{u}-b_{v}|+|d_{u}-d_{v}|)\] \[=\|w\|_{\infty}\|u-v\|_{1}\] \[\leq\|w\|_{\infty}\|u-v\|_{1}+L\|\nabla w\|_{\infty}\|u-v\|_{2}.\]
Case 2: \([b_{u},d_{u})\cap[b_{v},d_{v})\neq\emptyset\), \([b_{u},d_{u})\not\subseteq[b_{v},d_{v})\) and \([b_{v},d_{v})\not\subseteq[b_{u},d_{u})\). Without the loss of generality, assume \(b_{u}\leq b_{v}\leq d_{u}\leq d_{v}\). Then,
\[\int_{\mathbb{R}}f(t)dt =\int_{b_{u}}^{b_{v}}f(t)dt+\int_{b_{v}}^{d_{u}}f(t)dt+\int_{d_{u}}^{d_{v}}f(t)dt\] \[=\int_{b_{u}}^{b_{v}}|w(u)|dt+\int_{b_{v}}^{d_{u}}|w(u)-w(v)|dt+\int_{d_{u}}^{d_{v}}|w(v)|dt\] \[\leq\|w\|_{\infty}|b_{v}-b_{u}|+\|\nabla w\|_{\infty}\|u-v\|_{2}|d_{u}-b_{v}|+\|w\|_{\infty}|d_{v}-d_{u}|\] \[\leq\|w\|_{\infty}(|b_{u}-b_{v}|+|d_{u}-d_{v}|)+L\|\nabla w\|_{\infty}\|u-v\|_{2}\] \[=\|w\|_{\infty}\|u-v\|_{1}+L\|\nabla w\|_{\infty}\|u-v\|_{2}.\]
Case 3: \([b_{u},d_{u})\subseteq[b_{v},d_{v})\) or \([b_{v},d_{v})\subseteq[b_{u},d_{u})\). Without the loss of generality, assume \(b_{u}\leq b_{v}\leq d_{v}\leq d_{u}\). Then,
\[\int_{\mathbb{R}}f(t)dt =\int_{b_{u}}^{b_{v}}f(t)dt+\int_{b_{v}}^{d_{v}}f(t)dt+\int_{d_{v}}^{d_{u}}f(t)dt\] \[=\int_{b_{u}}^{b_{v}}|w(u)|dt+\int_{b_{v}}^{d_{v}}|w(u)-w(v)|dt+\int_{d_{v}}^{d_{u}}|w(v)|dt\] \[\leq\|w\|_{\infty}|b_{v}-b_{u}|+\|\nabla w\|_{\infty}\|u-v\|_{2}|d_{v}-b_{v}|+\|w\|_{\infty}|d_{u}-d_{v}|\] \[\leq\|w\|_{\infty}(|b_{u}-b_{v}|+|d_{u}-d_{v}|)+L\|\nabla w\|_{\infty}\|u-v\|_{2}\] \[=\|w\|_{\infty}\|u-v\|_{1}+L\|\nabla w\|_{\infty}\|u-v\|_{2}.\]
By combining the three cases we finish the proof of the lemma.
We now state and prove our main stability result. In the context of computing the Wasserstein distance, PDs are additionally supplied with points of the diagonal line \(\Delta=\{(b,d)\in\mathbb{R}^{2}\,|\,\,b=d\}\). Note that the inclusion of such points does not change the corresponding Betti function.
**Theorem 1**.: _Let \(D\) and \(\tilde{D}\) be two persistence diagrams with finitely many off-diagonal points and finite death values. Let \(\beta(t)=\sum_{u\in D}w(u)I_{[b_{u},d_{u})}(t)\) and \(\tilde{\beta}(t)=\sum_{v\in\tilde{D}}w(v)I_{[b_{v},d_{v})}(t)=\sum_{u\in D}w(\gamma(u))I_{[b_{\gamma(u)},d_{\gamma(u)})}(t)\) be the associated Betti functions, where the weight function \(w\) is as in Lemma 1 and \(\gamma\) is a bijection between \(D\) and \(\tilde{D}\). Then,_
\[\|\beta-\tilde{\beta}\|_{L_{1}}\leq\|w\|_{\infty}d_{11}(D,\tilde{D})+L\|\nabla w \|_{\infty}d_{21}(D,\tilde{D}), \tag{6}\]
_where \(L=\max_{u\in D\cup\tilde{D}}(d_{u}-b_{u})\). In particular, if \(w\equiv 1\) then \(\|w\|_{\infty}=1\), \(\|\nabla w\|_{\infty}=0\) and therefore, we have_
\[\|\beta-\tilde{\beta}\|_{L_{1}}\leq d_{11}(D,\tilde{D}). \tag{7}\]
Proof.: \[\|\beta-\tilde{\beta}\|_{L_{1}} =\int_{\mathbb{R}}\Big{|}\sum_{u\in D}w(u)I_{[b_{u},d_{u})}(t)-\sum_{u\in D}w(\gamma(u))I_{[b_{\gamma(u)},d_{\gamma(u)})}(t)\Big{|}dt\] \[\leq\sum_{u\in D}\int_{\mathbb{R}}|w(u)I_{[b_{u},d_{u})}(t)-w(\gamma(u))I_{[b_{\gamma(u)},d_{\gamma(u)})}(t)|dt\] \[\leq\|w\|_{\infty}\sum_{u\in D}\|u-\gamma(u)\|_{1}+L\|\nabla w\|_{\infty}\sum_{u\in D}\|u-\gamma(u)\|_{2}\quad\text{(by Lemma 1)}\] \[\leq\|w\|_{\infty}d_{11}(D,\tilde{D})+L\|\nabla w\|_{\infty}d_{21}(D,\tilde{D}).\]
**Remark 1**.: _If the points of all PDs being considered belong to \(\mathbb{R}^{2}_{\Delta,m,M}=\{(x,y)\,\,|\,\,m\leq x\leq y\leq M\}\) for some constants \(m,M\in(0,\infty)\), then any differentiable function \(w\) defined on the compact set \(\mathbb{R}^{2}_{\Delta,m,M}\) can be taken as a weight function for Theorem 1._
**Remark 2**.: _Since \(\|u-v\|_{2}\leq\|u-v\|_{1}\), \(\|u-v\|_{1}\leq 2\|u-v\|_{\infty}\) and \(\|u-v\|_{2}\leq\sqrt{2}\|u-v\|_{\infty}\), the upper bound in (6) can be further bounded above by \((\|w\|_{\infty}+L\|\nabla w\|_{\infty})d_{11}(D,\tilde{D})\) and \((2\|w\|_{\infty}+\sqrt{2}L\|\nabla w\|_{\infty})d_{\infty 1}(D,\tilde{D})\)._
**Remark 3**.: _In general, a PD may contain points with infinite death values. Note that the associated Betti function is still well-defined. In practice, one may ignore points with infinite death values or replace such values with some global constant. If we choose neither of these options and further assume that \(w\equiv C\) (constant) and \(\|u-\phi(u)\|_{1}=|b_{1}-b_{2}|\) where \(u=(b_{1},\infty)\in D\) and \(\phi(u)=(b_{2},\infty)\in\tilde{D}\) (a common convention adopted by the software tools for TDA), then the upper bound in (5) will be \(C\|u-v\|_{1}\) and hence \(\|\beta-\tilde{\beta}\|_{L_{1}}\leq C\cdot d_{11}(D,\tilde{D})\). If \(D\) and \(\tilde{D}\) have unequal number of points with infinite death values, then one of the points is necessarily matched with a point on the diagonal line \(\Delta\) which implies \(\|\beta-\tilde{\beta}\|_{L_{1}}=d_{11}(D,\tilde{D})=\infty\). If the points with infinite death values are equal in number across \(D\) and \(\tilde{D}\), then they must be matched among one another by an optimal bijection \(\gamma\) and the two distances will both be finite._
As proven in [10], the lack of a stability result for the Betti curve \(\beta(t)=\sum_{(b,d)\in D}I_{b<d}(b,d)I_{[b,d)}(t)\) (see Section A.1.1 of the cited paper for more details) does not contradict the claim of Theorem 1. In [10], the Betti curve is considered a member of a broader class of summary functions called _persistence curves_, where the weight functions may additionally depend on \(t\), and are assumed to be zero on the diagonal line \(\Delta\) and continuous with respect to \(t\). The weight function \(w(b,d)=I_{b<d}(b,d)\) of the Betti curve is not differentiable on \(\mathbb{R}^{2}\) and therefore is not suitable for
Theorem 1. In contrast, we take \(w(b,d)\equiv 1\), which is crucial for proving inequality (7). Note that the two weights differ only on the diagonal \(\Delta\) and, since both are multiplied by \(I_{[b,d)}(t)\), the resulting Betti summaries are the same.
For some weight functions, the main stability result of [10] (Theorem 1 therein) and our Theorem 1 provide similar bounds. For example, if \(w(b,d)=d-b\) and the points of PDs belong to \(\mathbb{R}^{2}_{\Delta,m,M}\), then according to Theorem 1 in [10],
\[\|\beta-\tilde{\beta}\|_{L_{1}}\leq 4(M-m)d_{\infty 1}(D,\tilde{D}).\]
For Theorem 1, \(\|w\|_{\infty}\leq M-m\) (assuming \(w\) is restricted to \(\mathbb{R}^{2}_{\Delta,m,M}\)), \(L\leq M-m\) and \(\|\nabla w\|_{\infty}=\sqrt{2}\) implying by Remark 2
\[\|\beta-\tilde{\beta}\|_{L_{1}} \leq(2+2)(M-m)d_{\infty 1}(D,\tilde{D})\] \[=4(M-m)d_{\infty 1}(D,\tilde{D}).\]
Next is a stability result for the vector of averaged Bettis (VAB) which readily follows from Theorem 1.
**Corollary 1**.: _Under the assumptions of Theorem 1, let \(\mathbf{\beta}\) and \(\tilde{\mathbf{\beta}}\) be the corresponding VABs computed using an increasing sequence of equally spaced scale values \(t_{1},t_{2},\ldots,t_{n}\) via (4). Then,_
\[\|\mathbf{\beta}-\tilde{\mathbf{\beta}}\|_{1}\leq\frac{1}{\Delta t}\Big{[}\|w\|_{ \infty}d_{11}(D,\tilde{D})+L\|\nabla w\|_{\infty}d_{21}(D,\tilde{D})\Big{]},\]
_where \(\Delta t=t_{i+1}-t_{i}=\text{const}\), \(i=1,2,\ldots,n-1\). In particular, if \(w\equiv 1\) then \(\|w\|_{\infty}=1\), \(\|\nabla w\|_{\infty}=0\) and therefore we have_
\[\|\mathbf{\beta}-\tilde{\mathbf{\beta}}\|_{1}\leq\frac{1}{\Delta t}d_{11}(D,\tilde{D}).\]
Proof.: \[\|\mathbf{\beta}-\tilde{\mathbf{\beta}}\|_{1} =\sum_{i=1}^{n-1}\Big{|}\frac{1}{\Delta t_{i}}\int_{t_{i}}^{t_{i+ 1}}\beta(t)dt-\frac{1}{\Delta t_{i}}\int_{t_{i}}^{t_{i+1}}\tilde{\beta}(t)dt \Big{|}\] \[\leq\sum_{i=1}^{n-1}\frac{1}{\Delta t}\int_{t_{i}}^{t_{i+1}}| \beta(t)-\tilde{\beta}(t)|dt\] \[\leq\frac{1}{\Delta t}\int_{\mathbb{R}}|\beta(t)-\tilde{\beta}(t )|dt\] \[\leq\frac{1}{\Delta t}\Big{[}\|w\|_{\infty}d_{11}(D,\tilde{D})+L\| \nabla w\|_{\infty}d_{21}(D,\tilde{D})\Big{]}\quad\text{(by Theorem 1)}\]
## 3 Hypothesis testing procedure for persistence diagrams
In this section we review the hypothesis testing procedure presented in [1]. Let \(\{D_{i}\}_{i=1}^{n_{1}}\) and \(\{\tilde{D}_{i}\}_{i=1}^{n_{2}}\) be two groups (or sets) of independent PDs. The PDs of each group might be generated from some model or computed from data (e.g., point clouds, images, graphs etc.) via a suitable filtration type. The goal is to test whether the processes behind generating the two groups of PDs are the same (the null hypothesis \(H_{0}\)) or different (the alternative hypothesis \(H_{1}\)). For example, if PDs of each group arise from point clouds sampled from a 2D shape, the null hypothesis will state that the underlying two shapes are the same.
Since PDs are not known to be characterized by parametric distributions, the authors of [1] resort to a permutation-based testing procedure which provides an empirical estimate of the null distribution of the test statistic. The paper adopts the following joint loss function involving within-group pairwise distances between PDs as a test statistic:
\[F_{pq}(\{D_{i}\},\{\tilde{D}_{i}\})=\frac{2}{n_{1}(n_{1}-1)}\sum_{i=1}^{n_{1} }\sum_{j=1}^{n_{1}}d_{pp}(D_{i},D_{j})^{q}+\frac{2}{n_{2}(n_{2}-1)}\sum_{i=1}^ {n_{2}}\sum_{j=1}^{n_{2}}d_{pp}(\tilde{D}_{i},\tilde{D}_{j})^{q}, \tag{8}\]
where \(d_{pp}\) is the \(L_{p}\)\(p\)-Wasserstein distance as in (1) and \(q\geq 1\).
The permutation-based \(p\)-value for a given labelling \(L_{0}=\{D_{1},\ldots,D_{n_{1}},\tilde{D}_{1},\ldots,\tilde{D}_{n_{2}}\}\) under the loss function \(F\) is the proportion of all labelings (permutations of the labels) \(L\) for which \(F(L)\leq F(L_{0})\). The number of all possible
labelings is \(\binom{n_{1}+n_{2}}{n_{1}}\) which can easily get very large. In this case, one samples uniformly from the set of all permutations which provides an unbiased estimator of the \(p\)-value. The algorithm for computing the permutation \(p\)-value is given in Algorithm 1[1].
```
Data: \(n_{1}+n_{2}\) PDs with labels \(L_{0}\) in disjoint sets of size \(n_{1}\) and \(n_{2}\), number of permutations \(N\), a joint loss function \(F\)
Result: Estimate of the \(p\)-value
Initialize \(Z\) at 0;
Compute \(F(L_{0})\) for the observed labels;
for \(i=1\) to \(N-1\) do
    Randomly shuffle the group labels into disjoint sets of size \(n_{1}\) and \(n_{2}\) to obtain labelling \(L\);
    Compute \(F(L)\);
    if \(F(L)\leq F(L_{0})\) then
        \(Z=Z+1\);
    end if
end for
\(Z=\frac{Z+1}{N+1}\);
Output \(Z\)  /* p-value */
```
**Algorithm 1**Unbiased estimation of the \(p\)-value [1]
In practice, once the computation of the loss function \(F\) is implemented, to run Algorithm 1 it suffices to provide the \((n_{1}+n_{2})\times(n_{1}+n_{2})\) matrix of all pairwise distances (which is computed only once), the labelling \(L_{0}\) and the number of permutations \(N\).
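A minimal sketch of Algorithm 1 in this spirit (ours, not the authors' R code) takes a precomputed distance matrix whose first \(n_{1}\) rows correspond to group 1, evaluates the loss (8) on shuffled labels and returns the estimated p-value.

```python
import numpy as np

def permutation_p_value(dist_matrix, n1, n2, n_perm=10000, q=1, rng=None):
    """Estimate the permutation p-value of Algorithm 1 from a precomputed distance matrix."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.asarray(dist_matrix) ** q

    def loss(labels):      # joint loss F: sum of within-group pairwise distances, as in (8)
        g1, g2 = labels[:n1], labels[n1:]
        return (2 * d[np.ix_(g1, g1)].sum() / (n1 * (n1 - 1)) +
                2 * d[np.ix_(g2, g2)].sum() / (n2 * (n2 - 1)))

    observed = loss(np.arange(n1 + n2))            # F(L_0) for the observed labelling
    z = 0
    for _ in range(n_perm):
        z += loss(rng.permutation(n1 + n2)) <= observed
    return (z + 1) / (n_perm + 1)                  # Z = (Z + 1)/(N + 1), as in Algorithm 1
```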
In Section 4, for Algorithm 1 we use the Wasserstein distance \(d_{11}\) as in (8) or replace it either with the sliced Wasserstein distance or the \(L_{1}\) distance between the corresponding vector summaries extracted from PDs. For example, if \(\{\boldsymbol{\beta}_{i}\}\) and \(\{\boldsymbol{\tilde{\beta}}_{i}\}\) are the vectors of averaged Bettis (VAB) computed from persistence diagrams \(\{D_{i}\}\) and \(\{\tilde{D}_{i}\}\) respectively using (4), then \(F_{pq}\) takes the form
\[F_{pq}(\{\boldsymbol{\beta}_{i}\},\{\boldsymbol{\tilde{\beta}}_{j}\})=\frac{2} {n_{1}(n_{1}-1)}\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{1}}\|\boldsymbol{\beta}_{i}- \boldsymbol{\beta}_{j}\|_{p}^{q}+\frac{2}{n_{2}(n_{2}-1)}\sum_{i=1}^{n_{2}} \sum_{j=1}^{n_{2}}\|\boldsymbol{\tilde{\beta}}_{i}-\boldsymbol{\tilde{\beta}}_ {j}\|_{p}^{q}, \tag{9}\]
where \(\|\cdot\|_{p}\) is the \(L_{p}\) norm on the space of the vector summaries.
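For the vector-summary variants, the distance matrix fed into the sketch above can be obtained directly from the stacked summaries, for instance (an illustrative helper of ours):

```python
import numpy as np
from scipy.spatial.distance import cdist

def summary_distance_matrix(vectors, p=1):
    """Pairwise L_p distance matrix between vector summaries (e.g. VABs), group 1 first."""
    vectors = np.asarray(vectors)
    return cdist(vectors, vectors, metric="minkowski", p=p)
```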
Moreover, in addition to the standard permutation method of the group labels, we introduce a new shuffling technique with the aim of increasing the power of the test. Under this approach, when permuting the group labels we consider only those permutations which lead to stronger mixing of the labels from different groups. In other words, we maximize the sum of proportions of pairwise distances between PDs of different groups which potentially leads to larger values of the loss function \(F\) and hence a smaller p-value. To make the idea more precise, consider permutations where \(k\) diagrams from group 1 are put into group 2 and vice versa. Then the sum of proportions of distances between PDs from different groups is
\[\frac{k(n_{1}-k)}{n_{1}(n_{1}-1)}+\frac{k(n_{2}-k)}{n_{2}(n_{2}-1)}.\]
The above quantity (as a function of \(k\)) is maximized at
\[k_{\text{max}}=\frac{n_{1}n_{2}(n_{1}+n_{2}-2)}{2(n_{1}(n_{1}-1)+n_{2}(n_{2}-1 ))}. \tag{10}\]
In practice, following the strong mixing procedure, we take \(k=\min(n_{1},n_{2},k_{\text{max}})\) and consider a subset of the permutations where exactly \(k\) PDs are exchanged between the two groups. Note that when the null hypothesis is true, regardless of the permutation method (the standard or strong mixing), the PDs being permuted are drawn from the same underlying population/process.
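A sketch of the strong mixing shuffle (ours) draws a labelling in which exactly \(k=\min(n_{1},n_{2},k_{\text{max}})\) diagrams are exchanged between the groups, with \(k_{\text{max}}\) from (10) rounded to an integer; the result can be passed to the loss function of the permutation-test sketch above in place of a uniformly random permutation.

```python
import numpy as np

def strong_mixing_labels(n1, n2, rng=None):
    """Sample one group labelling under the strong mixing procedure."""
    rng = np.random.default_rng() if rng is None else rng
    k_max = n1 * n2 * (n1 + n2 - 2) / (2 * (n1 * (n1 - 1) + n2 * (n2 - 1)))
    k = int(round(min(n1, n2, k_max)))
    idx1, idx2 = np.arange(n1), np.arange(n1, n1 + n2)
    swap1 = rng.choice(n1, size=k, replace=False)        # k diagrams leaving group 1
    swap2 = rng.choice(n2, size=k, replace=False)        # k diagrams leaving group 2
    new1, new2 = idx1.copy(), idx2.copy()
    new1[swap1], new2[swap2] = idx2[swap2], idx1[swap1]  # exchange the selected diagrams
    return np.concatenate([new1, new2])
```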
When the group sizes are equal (i.e., \(n_{1}=n_{2}\)) and \(p=2\), the loss function \(F\) in (9) has an interesting connection to the _energy_ distance [33] defined by
\[E_{\alpha}=\frac{n_{1}n_{2}}{n_{1}+n_{2}}[2M_{12}-M_{11}-M_{22}],\]
where \(\alpha\in(0,2]\) and
\[M_{12}=\frac{1}{n_{1}n_{2}}\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}\|\mathbf{\beta}_{i}- \tilde{\mathbf{\beta}}_{j}\|_{2}^{\alpha},\quad M_{11}=\frac{1}{n_{1}^{2}}\sum_{i= 1}^{n_{1}}\sum_{j=1}^{n_{1}}\|\mathbf{\beta}_{i}-\mathbf{\beta}_{j}\|_{2}^{\alpha},\]
\[M_{22}=\frac{1}{n_{2}^{2}}\sum_{i=1}^{n_{2}}\sum_{j=1}^{n_{2}}\|\tilde{\mathbf{ \beta}}_{i}-\tilde{\mathbf{\beta}}_{j}\|_{2}^{\alpha}.\]
Observe that \(E_{\alpha}\) contains not only pairwise within-group distances, but also between-group distances and it can be shown that \(E_{\alpha}\geq 0\)[33]. Assuming \(n=n_{1}=n_{2}\), \(p=2\) and \(q=\alpha\in[1,2)\), and letting \(T=n^{2}(2M_{12}+M_{11}+M_{22})\) (the sum of all possible pairwise distances), we get
\[E_{\alpha} =\frac{n^{2}}{2n}\Big{[}2M_{12}-M_{11}-M_{22}\Big{]}\] \[=\frac{n}{2}\Big{[}\frac{1}{n^{2}}T-2(M_{11}+M_{22})\Big{]}\] \[=\frac{n}{2}\Big{[}\frac{1}{n^{2}}T-\frac{4(n-1)}{n}F_{2\alpha} \Big{]}\] \[=\frac{T}{2n}-2(n-1)F_{2\alpha}\]
Hence \(E_{\alpha}\) is a linear transformation of \(F_{2\alpha}\) and since \(T\) remains constant under any permutation, Algorithm 1 will provide the same p-value if \(F_{2\alpha}\) is replaced by \(E_{\alpha}\).
## 4 Experiments
The experiments section consists of three main subsections. The first subsection examines the run-time costs of five different methods for computing the matrix of pairwise distances to be used in Algorithm 1. The first one is the baseline method, where PDs are compared via the \(L_{1}\) 1-Wasserstein distance (W). The next three methods involve the \(L_{1}\) distances between respective vector summaries of the vector of averaged Bettis (VAB), persistence images (PI) and persistence landscapes (PL) computed from the given PDs. The fifth method compares PDs using the sliced Wasserstein distance (SW). The loss function (8) is appropriately modified for the methods other than the baseline (e.g., (9) is used instead of (8) for VAB).
The second subsection explores the power of the hypothesis test (implemented in Algorithm 1 of Section 3) for the above five methods (W, VAB, PI, PL, SW) using various simulated data (with and without added noise) and filtration types. The last subsection investigates the performance of the testing procedure on real data.
In all experiments, the weight function \(w\) in (3) for the vector of averaged Bettis is set to one. For persistence landscapes we consider only the first landscape function (i.e. \(k=1\) in (2)). Persistence images are computed using the Gaussian density with standard deviation \(\sigma=0.5\times(\text{maximum persistence})/(\text{grid size})\). The sliced Wasserstein distance is computed (approximately) by taking 10 uniformly spaced directions. All TDA-related computations are performed using the R language [34] in conjunction with packages TDA[35; 36], TDAstats[37], kernelTDA[38], TDAvec[39] and igraph[40].
For Algorithm 1, the loss function \(F_{pq}\) is computed with \(p=q=1\), and both the standard (random) and strong mixing procedures are applied in shuffling the group labels. The null hypothesis \(H_{0}\) (of equality of shapes/processes between the two groups) is rejected if the \(p\)-value is less than the cutoff \(\alpha=0.05\) (significance level). We consider PDs of different homological dimensions separately and report the results for the dimension which leads to the best outcome.
### Performance comparison with respect to computational cost
This section focuses on the run-time costs of the five methods (W, VAB, PI, PL, SW) for computing the matrix of pairwise distances. For all simulations, a persistence diagram \(D=\{(b_{i},d_{i})\}_{i=1}^{N}\) is generated from the process where \(b_{i}\sim\text{Unif}(0,1)\) and \(d_{i}=b_{i}+z_{i}\) with \(z_{i}\sim\text{Beta}(\alpha,\beta)\) for some prespecified \(\alpha\) and \(\beta\). For the vector summaries VAB, PL and PI the run-time cost additionally includes the time taken to compute these summaries from the given PDs.
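This generating process can be written compactly as follows (an illustrative sketch of ours):

```python
import numpy as np

def simulate_diagram(N, alpha, beta, rng=None):
    """Synthetic PD with b_i ~ Unif(0, 1) and d_i = b_i + z_i, z_i ~ Beta(alpha, beta)."""
    rng = np.random.default_rng() if rng is None else rng
    births = rng.uniform(0.0, 1.0, size=N)
    deaths = births + rng.beta(alpha, beta, size=N)
    return np.column_stack([births, deaths])
```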
In the first simulation study, we generate 50 PDs of size \(N\) ranging (per simulation) over \(\{100,200,\cdots,1000\}\) to explore the run-time cost with respect to changes in the size of PDs. In the second study, we vary the number of PDs \(n\) from 10 to 100 (with increments of 10) and fix the size of PDs to be 500. Table 1 and Table 2 provide the median
run-time costs (measured in seconds and computed based on ten repeated simulations) for these two studies. The results show that VAB has the smallest (overall) run-time cost and SW has much smaller cost than the baseline Wasserstein metric which is the costliest to compute. Moreover, we observe that for the vector summaries (i.e. VAB, PI and PL) the computational cost increases approximately linearly in \(N\) and \(n\).
### Hypothesis testing on simulated data
This section provides four experimental results regarding the power of the hypothesis test based on different types of simulated data. The first two involve computing PDs from 2D and 3D point clouds (with underlying shape structure) via the Vietoris-Rips and alpha filtrations respectively. The third dataset consists of PDs generated from a process similar to that of Section 4.1. In the fourth dataset, PDs are computed from random graphs via the lower-star filtration.
All four experiments consider 5 sets of 20 PDs where the first 10 belong to group 1 and the remaining 10 are part of group 2: \(\{\mathcal{S}_{i}\}_{i=1}^{5}=\{\{D_{ij}\}_{j=1}^{10},\{\tilde{D}_{ij}\}_{j=1}^ {10}\}_{i=1}^{5}\). When \(i=1\), the data generating processes behind PDs of the two groups are the same (i.e. the null hypothesis \(H_{0}\) is true). As \(i\) gets bigger, the alternative hypothesis \(H_{1}\) is true and the process behind producing PDs of group 2 increasingly starts to differ from that of group 1 which remains fixed. Hence, as \(i\) gradually increases, the two groups of PDs, in principle, become easier to distinguish and therefore, we expect the power of the hypothesis test to increase as well (at a higher or lower rate depending on the absence or presence of noise in the data).
In terms of vectorization of summary functions of PDs, we use a (one-dimensional) grid of size 100 for the Betti functions and persistence landscapes, and a \(20\times 20\) grid for the persistence images. For Algorithm 1, under the standard permutation method of the group labels, we randomly select 10000 out of \(\binom{20}{10}=184756\) possible permutations. When the strong mixing procedure (introduced in Section 3) is applied, since the group sizes \(n_{1}=n_{2}=10\), by (10) \(k_{\text{max}}=5\) PDs from each group are mixed with each other which leads to \(\binom{10}{5}\cdot\binom{10}{5}=63504\) permutations of which we again randomly select 10000. For each value of \(i\), the power of the test is computed based on 1000 independent repeated simulations. When \(i=1\), the null hypothesis \(H_{0}\) is true and therefore the power of the test is equal to the type-I error rate. For \(i>1\), the power of the test is equal to 1 minus the type-II error rate since the alternative hypothesis \(H_{1}\) is true in this case.
Finally, we redo all the experiments where the four datasets are generated with added noise. The results will be presented as bar charts where the heights of colored bars correspond to the powers of the hypothesis test for each of the five methods (W, VAB, PI, PL, SW) under the standard permutation method. The additional bars (with transparent background) on top of the colored bars reflect the increase in power under the strong mixing procedure for permuting the group labels.
#### 4.2.1 Distinguishing circle and ellipse
In this experiment, PDs of group 1, \(\{D_{ij}\}_{i=1,j=1}^{5,10}\), are computed (via a Vietoris-Rips filtration) from point clouds of size 50 sampled uniformly from a unit circle. For each \(i\in\{1,\ldots,5\}\), PDs \(\{\tilde{D}_{ij}\}_{j=1}^{10}\) of group 2 are computed based on 50 points sampled uniformly from an ellipse with equation \(x^{2}+y^{2}/(1-r)^{2}=1\) where \(r=0.02(i-1)\). When \(r=0\) (or \(i=1\)), the two underlying shapes are the same and as \(r\) increases the corresponding ellipses gradually start to
\begin{table}
\begin{tabular}{c c c c c c} \hline N & W & VAB & PL & PI & SW \\ \hline
100 & 6.723 & **0.007** & 0.016 & 0.013 & 0.293 \\
200 & 17.92 & **0.011** & 0.038 & 0.026 & 0.606 \\
300 & 34.93 & **0.015** & 0.059 & 0.039 & 1.000 \\
400 & 57.58 & **0.020** & 0.068 & 0.051 & 1.361 \\
500 & 82.77 & **0.025** & 0.085 & 0.064 & 1.744 \\
600 & 111.8 & **0.029** & 0.103 & 0.077 & 2.112 \\
700 & 147.4 & **0.035** & 0.122 & 0.091 & 2.515 \\
800 & 174.4 & **0.046** & 0.127 & 0.106 & 2.916 \\
900 & 218.5 & **0.044** & 0.146 & 0.121 & 3.337 \\
1000 & 245.0 & **0.050** & 0.179 & 0.143 & 3.744 \\ \hline \end{tabular}
\end{table}
Table 1: Median run-time costs (measured in seconds) of computing the distance matrix for the five methods (W,VAB,PL,PI,SW) using 50 PDs of size \(N\) ranging over {100,200,...,1000}.
\begin{table}
\begin{tabular}{c c c c c c} \hline n & W & VAB & PL & PI & SW \\ \hline
10 & 2.971 & **0.005** & 0.013 & 0.013 & 0.061 \\
20 & 11.82 & **0.010** & 0.027 & 0.025 & 0.274 \\
30 & 29.09 & **0.015** & 0.044 & 0.039 & 0.607 \\
40 & 50.41 & **0.019** & 0.062 & 0.051 & 1.084 \\
50 & 87.71 & **0.026** & 0.086 & 0.069 & 1.807 \\
60 & 117.1 & **0.037** & 0.104 & 0.075 & 2.425 \\
70 & 159.4 & **0.034** & 0.117 & 0.086 & 3.329 \\
80 & 207.9 & **0.046** & 0.137 & 0.120 & 4.279 \\
90 & 260.5 & **0.046** & 0.153 & 0.116 & 5.504 \\
100 & 323.3 & **0.056** & 0.174 & 0.123 & 6.796 \\ \hline \end{tabular}
\end{table}
Table 2: Median run-time costs (measured in seconds) of computing the distance matrix for the five methods (W,VAB,PL,PI,SW) using \(n\) PDs of size 500, where \(n\) ranges over {10,20,...,100}.
depart (in terms of shape) from the unit circle. For the case with noise, we add Gaussian noise with standard deviation \(\sigma=0.05\) to both the \(x\) and \(y\) coordinates of the points in a point cloud.
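For concreteness (not the authors' code), the point clouds of this experiment can be sampled as below; drawing the angle uniformly is exact for the circle and is used here as a simple approximation to uniform sampling on the ellipse.

```python
import numpy as np

def sample_ellipse(n, r=0.0, noise_sd=0.0, rng=None):
    """Sample n points from the ellipse x^2 + y^2/(1-r)^2 = 1 (r = 0 gives the unit circle)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pts = np.column_stack([np.cos(theta), (1.0 - r) * np.sin(theta)])
    if noise_sd > 0:
        pts = pts + rng.normal(scale=noise_sd, size=pts.shape)   # noisy setting
    return pts
```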
The simulation results for homological dimension 1 are given in Figure 2(a) (no noise) and Figure 2(b) (with noise). As expected, the test powers generally increase with \(r\) (or \(i\)). The methods W (Wasserstein) and VAB (vector of averaged Bettis) show very similar high performance, whereas the lowest powers are given by the SW (sliced Wasserstein) method. Moreover, we observe that the strong mixing procedure leads to an increase in the power of the test and it is even more pronounced when the data is noisy. More specifically, when \(r=0\) (i.e. \(H_{0}\) is true), under the standard permutation method, the powers (or the type-I error rates) are in the range of 4.3-5.9% (no noise) and 4.9-5.3% (with noise) and hence generally stay below the significance level of 5% (\(\alpha=0.05\)). Under the strong mixing procedure, the type-I error rates further increase by 1.3-2.4%. However, for \(r>0\) (i.e. \(H_{1}\) is true), the changes in the power of the test between the two permutation methods reach up to 3.9% (no noise) and 5.6% (with noise). The highest powers for homological dimension 0 are 11.5% (no noise) and 14.4% (with noise) achieved by the VAB method at \(r=0.08\) (or \(i=5\)).
#### 4.2.2 Distinguishing sphere and ellipsoid
The experiment of this section involves 3D point clouds and closely mimics the preceding one in Section 4.2.1. PDs of group 1 are computed from point clouds of size 100 uniformly sampled from a unit sphere via the alpha filtration. Point clouds for group 2 consist of 100 points sampled from the ellipsoid \(x^{2}+y^{2}+z^{2}/(1-r)^{2}=1\) with \(r=0.01(i-1)\), \(i=1,\ldots,5\) (the index of simulation) which amounts to compressing the unit sphere by a factor of \(1-r\) along the \(z\)-axis. Finally, the simulation is repeated by adding Gaussian noise with standard deviation \(\sigma=0.05\) to all three coordinates of the points in a point cloud during the data generation process.
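A corresponding sketch for this setting (ours; compressing uniform samples from the sphere along the \(z\)-axis is only an approximation to uniform sampling on the ellipsoid):

```python
import numpy as np

def sample_ellipsoid(n, r=0.0, noise_sd=0.0, rng=None):
    """Sample n points from x^2 + y^2 + z^2/(1-r)^2 = 1 (r = 0 gives the unit sphere)."""
    rng = np.random.default_rng() if rng is None else rng
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # uniform points on the unit sphere
    pts[:, 2] *= (1.0 - r)                               # compress along the z-axis
    if noise_sd > 0:
        pts = pts + rng.normal(scale=noise_sd, size=pts.shape)
    return pts
```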
The best results are obtained for homological dimension 2 and presented in Figure 3(a) (no noise) and Figure 3(b) (with noise). We observe that, in the absence of noise in the data, the powers for the methods W (Wasserstein), VAB (vector of averaged Bettis) and PL (persistence landscape) reach 96.2-97.7% as early as \(r=0.01\) (or \(i=2\)), whereas the methods PI (persistence image) and SW (sliced Wasserstein) get to that level only at \(r=0.03\) (or \(i=4\)). In this case, due to a rapid increase of the test power, the effect of the strong mixing method is less pronounced compared to that of Section 4.2.1.
With the presence of noise in the data, a more gradual increase in the test power is observed reaching at most 77.4% (B) and 79.8% (W) by the two permutation methods respectively (see Figure 3(b)). Here, the strong mixing leads to a maximum of 5.6% increase in the test power. For homological dimensions 0 and 1, the highest value of the power does not exceed 12.1% showing very limited ability to distinguish a sphere from an ellipsoid.
#### 4.2.3 Distinguishing persistence diagrams generated from a process
The PDs in this experiment are generated from a process (similar to one used in Section 4.1) rather than being computed from a point cloud through some filtration. For each simulation \(i\in\{1,\ldots,5\}\), a persistence diagram \(D=\{(b_{j},d_{j})\}_{j=1}^{50}\) of group 1 is generated from the process where \(b_{j}\sim\text{Beta}(1,1)\) (or equivalently \(\text{Unif}(0,1)\)) and
Figure 2: Estimated powers of the permutation test (see Algorithm 1) using the five methods (W,VAB,PL,PI,SW) for the experiment of Section 4.2.1. PDs are computed from 2D point clouds via Vietoris-Rips filtration. Results are for homological dimension 1. The extra bars above the colored ones display the increase in power under the strong mixing approach for permuting group labels.
\(d_{j}=b_{j}+z_{j}\) with \(z_{j}\sim\text{Beta}(1,1)\). For a diagram \(\tilde{D}=\{(\tilde{b}_{j},\tilde{d}_{j})\}_{j=1}^{50}\) of group 2, \(b_{j}\sim\text{Beta}(1,1)\) and \(d_{j}=b_{j}+z_{j}\) with \(z_{j}\sim\text{Beta}(1,r)\), where \(r=1+0.2(i-1)\), \(i=1,\dots,5\). As in the previous two experimental settings, when \(r=0\) (or \(i=1\)) the two processes behind the generation of PDs are the same (i.e. the \(H_{0}\) hypothesis is true) and as \(r\) increases the processes begin to diverge from one another. For the noisy case, Gaussian noise with standard deviation \(\sigma=0.1\) is added to both the birth \(b\) and death \(d\) coordinates of each point in a PD. If, as a result, \(b>d\), then the point \((b,d)\) is replaced by \((b,b)\) to ensure that no point lies below the diagonal \(\Delta=\{(b,d)|b=d\}\).
The results are given in Figures 4(a) and 4(b). Unlike in the previous two experiments, the method SW (sliced Wasserstein) provides the highest test powers. When the two processes are still relatively hard to distinguish (e.g. \(r=0.2\) and \(r=0.4\)), the effect of the strong mixing is observed the most (up to 5%) and the methods PL (persistence landscape) and PI (persistence image) have the lowest powers compared to the other three methods.
#### 4.2.4 Distinguishing simulated graphs
The final experiment in this section involves computing PDs from simulated graphs using the lower-star filtration. More specifically, we first sample 100 points from the Dirichlet distribution with parameter \(\alpha\in\mathbb{R}^{3}\) and generate a graph whose nodes are represented by the sampled points and an edge is added between nodes \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\) with probability \(p=x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}\) (dot product between \(x\) and \(y\)). This graph is called a _random dot product graph_ and has been proven useful for simulating social networks [41]. Next, viewing the graph as a one-dimensional simplicial complex, we increase its dimension to two by adding triangles (2-simplices) formed by the
Figure 4: Estimated powers of the permutation test (see Algorithm 1) using the five methods (W,VAB,PL,PI,SW) for the experiment of Section 4.2.3. PDs are generated from a process. The extra bars above the colored ones display the increase in power under the strong mixing approach for permuting group labels.
Figure 3: Estimated powers of the permutation test (see Algorithm 1) using the five methods (W,VAB,PL,PI,SW) for the experiment of Section 4.2.2. PDs are computed from 3D point clouds via alpha complex filtration. Results are for homological dimension 2. The extra bars above the colored ones display the increase in power under the strong mixing approach for permuting group labels.
edges. Finally, we compute the PD of the lower-star filtration induced by \(g(x)=-\sum_{i=1}^{3}x_{i}\log(x_{i})\) (cross-entropy) where \(x=(x_{1},x_{2},x_{3})\) represents a node.
All graphs of group 1 are generated using \(\alpha=(1.5,1.5,1.5)\). For each simulation \(i\in\{1,\ldots,5\}\), the graphs of group 2 are produced from the Dirichlet distribution with \(\alpha=(1.5,1.5,1.5+0.4(i-1))\). We repeat the experiment by adding Gaussian noise with standard deviation \(\sigma=0.1\) to the values of the function \(g\) (node attributes).
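The graph-generation step can be sketched as follows (ours; building the two-dimensional complex and computing the lower-star PDs is done by the authors with the R packages listed in Section 4 and is not reproduced here):

```python
import numpy as np

def random_dot_product_graph(n=100, alpha=(1.5, 1.5, 1.5), noise_sd=0.0, rng=None):
    """Adjacency matrix of a random dot product graph and its lower-star node values g."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.dirichlet(alpha, size=n)                 # node positions on the 2-simplex
    p = x @ x.T                                      # edge probability = dot product
    upper = np.triu(rng.random((n, n)) < p, k=1)     # sample each potential edge once
    adjacency = upper | upper.T
    g = -(x * np.log(x)).sum(axis=1)                 # node attribute g(x) from the text
    if noise_sd > 0:
        g = g + rng.normal(scale=noise_sd, size=n)   # noisy node attributes
    return adjacency, g
```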
The best results, observed for homological dimension 1, are given in Figures 5(a) and 5(b). In this experiment, the methods W (Wasserstein) and VAB (vector of averaged Bettis) show the highest test powers for all five simulations. Again, the effect of the strong mixing technique is most pronounced for \(r=0.4\) and \(r=0.8\) (when the two groups are still relatively difficult to tell apart).
### Hypothesis testing on real data
In the final section, we conduct two experiments on hypothesis testing which involve PDs computed from real data. In the first experiment, we consider point clouds sampled from the surfaces of ten 3D non-rigid toys and compute PDs using the alpha filtration. The second experiment deals with PDs computed from real graph data using the lower-star filtration induced by various functions for node attributes.
#### 4.3.1 Distinguishing 3D non-rigid toys
The dataset for this experiment contains point clouds obtained from ten non-rigid toys [42]. Each toy is scanned in ten different poses resulting in a total of 100 point clouds (of size 4000 on average) sampled uniformly from the toy surface scans (see Figures 6(a) and 6(b)). To compute PDs, we resort to the alpha filtration, rather than the Vietoris-Rips filtration, as it has much lower run-time cost for this dataset. The best results are obtained for homological dimension 2 which tracks the evolution of voids (or holes of dimension two) along the filtration. As a note, in this experiment the PDs are much larger in size (on average 4000, 7500 and 1200 for homological dimensions 0, 1 and 2 respectively) compared to those of Section 4.2 which leads to significantly higher run-time cost of computing Wasserstein distances for Algorithm 1.
First, we compare point clouds sampled from the same toy. Thus, for each toy there are ten point clouds which we randomly split into two groups of equal size. Algorithm 1 is run with \(N=\binom{10}{5}=252\) permutations and the standard permutation procedure for the group labels. Table 3 contains the p-values obtained for each toy and each of the five methods (W, VAB, PI, PL, SW) for computing distance matrices. We see from Table 3 that since the two groups of PDs come from the same toy, the null hypothesis \(H_{0}\) is rejected only 8% of the time (4 out of 50 comparisons) which is close to the significance level of 5%.
Next, we compare point clouds obtained from different toys. Algorithm 1 is applied to all possible \(\binom{10}{2}=45\) comparisons (with \(N=10000\) permutations) using the five methods for computing distance matrices. Out of the \(45\times 5=225\) computed p-values, all except one turn out to be below \(\alpha=0.05\) (we fail to reject the \(H_{0}\) hypothesis only
Figure 5: Estimated powers of the permutation test (see Algorithm 1) using the five methods (W,VAB,PL,PI,SW) for the experiment of Section 4.2.4. PDs are computed from graphs via lower-star filtration. Results are for homological dimension 1. The extra bars above the colored ones display the increase in power under the strong mixing approach for permuting group labels.
once for the PL method). Thus, in all but one case, the hypothesis testing procedure correctly distinguishes the pairs of toys compared.
#### 4.3.2 Distinguishing real graphs
The experiments of this section utilize datasets from TUDataset - a collection of benchmark datasets for graph classification and regression [43]. The TUDataset collection contains more than 120 datasets from various application domains. We use 13 of those datasets where the graphs are classified into two groups (see Table 4 for their names and basic graph summaries). Table 4 shows that the graphs vary in terms of the number of nodes, group sizes and the balance between them.
We compute PDs of homological dimensions 0 and 1 using the lower-star filtration induced by the node attributes of degree, betweenness and closeness. In case the graphs are further supplied with (external) node attributes, we utilize those attributes as well to produce the filtration. We apply Algorithm 1 with \(N=10000\) permutations and both standard and strong mixing procedures for permuting the class labels. Due to a high relative run-time cost of computing the Wasserstein distances, we calculate them only if the total number of graphs in both groups does not exceed 100 (which amounts to three datasets). Table 5 provides the hypothesis testing results for each dataset and method used to compute the distance matrix. The results show that the hypothesis testing procedure distinguishes the two groups of graphs in the majority of cases based on at least one of the node attribute functions used. As in Section 4.2.4, better results are obtained mostly for homological dimension 1. All methods fail to tell apart the graphs in the BZR and KKI datasets at the significance level of 5%. The methods W (Wasserstein) (if available), VAB (vector of averaged Bettis) and SW (sliced Wasserstein) provide almost identical results. The strong mixing procedure helps improve the results but only in a few cases. Overall, all the five methods demonstrate comparable performance.
## 5 Conclusion and directions for future research
The paper by [1] is one of the early and important works on hypothesis testing for TDA. This approach is based on the permutation test where persistence diagrams are regarded as data observations and the associated loss function
\begin{table}
\begin{tabular}{c c c c c c} \hline & W & VAB & PL & PI & SW \\ \hline Toy 1 & 0.241 & 0.601 & 0.502 & 0.233 & 0.320 \\ Toy 2 & 0.458 & 0.601 & 0.138 & 0.170 & 0.352 \\ Toy 3 & 0.344 & 0.545 & 0.798 & 0.265 & 0.498 \\ Toy 4 & 0.202 & 0.625 & 0.352 & 0.375 & 0.253 \\ Toy 5 & 0.138 & 0.352 & **0.036** & 0.075 & 0.130 \\ Toy 6 & 0.119 & 0.083 & **0.043** & 0.281 & 0.067 \\ Toy 7 & 0.530 & 0.265 & 0.099 & 0.067 & 0.628 \\ Toy 8 & 0.249 & 0.557 & 0.379 & 0.249 & 0.253 \\ Toy 9 & **0.036** & 0.715 & 0.051 & 0.099 & **0.043** \\ Toy 10 & 0.577 & 0.059 & 0.482 & 0.565 & 0.296 \\ \hline \end{tabular}
\end{table}
Table 3: Estimated p-values of the permutation test (see Algorithm 1) using the five methods (W,VAB,PL,PLSW) for the experiment of Section 4.3.1, where point clouds are sampled from the same toy.
Figure 6: Toys used for the experiment of Section 4.3.1 [42].
(8) is defined using the sum of within-class pairwise Wasserstein distances between the diagrams. However, in the settings where persistence diagrams are large in size and in number, computing the Wasserstein distances may have high computational cost limiting the application of the permutation test. The present work proposes to replace the Wasserstein distances between persistence diagrams with the \(L_{p}\) distances between vector summaries extracted from the diagrams. In this context, though we specifically focus on exploring the utility of the vectorized Betti function as an alternative summary to working with persistence diagrams directly, our approach is applicable to any other vector summary.
The Betti function, or more generally any univariate summary function extracted from a persistence diagram, is typically vectorized by evaluating it at each point of a super-imposed grid of scale values. We propose a different vectorization scheme for the Betti function by averaging it between two consecutive scale values via integration. The new vector summary is called a _vector of averaged Bettis_ (VAB) and can be used in situations where informative low-dimensional topological vector summaries are desired. Moreover, we prove stability results for the Betti function and its vectorized form VAB with respect to the 1-Wasserstein distance.
In our experiments, we apply the hypothesis testing procedure of [1] to test whether the process or underlying shape behind two groups of persistence diagrams is the same (per homological dimension) by employing five methods - the 1-Wasserstein distance (baseline), sliced Wasserstein distance and the \(L_{1}\) distance between the vector summaries of VAB (our focus), persistence landscape (PL) and persistence image (PI) - to compute the loss function. We use both synthetic and real datasets of various types and we find that the results obtained based on the Wasserstein distance and
\begin{table}
\begin{tabular}{l c c c c} \hline Dataset & Group 1 size & Group 2 size & Avg \# of nodes & Avg \# of edges \\ \hline AIDS & 400 & 1600 & 15.69 & 16.20 \\ BZR & 319 & 86 & 35.75 & 38.36 \\ COX2 & 365 & 102 & 41.22 & 43.45 \\ DHFR & 295 & 461 & 42.43 & 44.54 \\ DD & 691 & 487 & 284.3 & 715.7 \\ IMDB-BINARY & 500 & 500 & 19.77 & 96.53 \\ KKI & 37 & 46 & 26.96 & 48.42 \\ MUTAG & 63 & 125 & 17.93 & 19.79 \\ OHSU & 35 & 44 & 82.01 & 199.7 \\ Peking\_1 & 49 & 36 & 39.31 & 77.35 \\ PTC\_FM & 206 & 143 & 14.11 & 14.48 \\ PROTEINS & 663 & 450 & 39.06 & 72.82 \\ REDDIT-BINARY & 1000 & 1000 & 429.6 & 497.8 \\ \hline \end{tabular}
\end{table}
Table 4: Graph datasets used for the experiment of Section 4.3.2.
\begin{table}
\begin{tabular}{l c c c c c} \hline Dataset (hom dim used) & W & VAB & PL & PI & SW \\ \hline AIDS (0) & - & \(\times\),\(\times\),\(\times\),\(\times\) & \(\times\),\(\times\),\(\times\) & \(\checkmark\),\(\checkmark\),\(\checkmark\) & \(\times\),\(\times\),\(\times\) \\ BZR (0,1) & - & \(\times\),\(\times\),\(\times\),\(\times\) & \(\times\),\(\times\),\(\times\),\(\times\) & \(\times\),\(\times\),\(\times\) & \(\times\),\(\times\),\(\times\) \\ COX2 (1) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) \\ DHFR (1) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) \\ DD (1) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) \\ IMDB-BINARY (1) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & - \\ KKI (1) & \(\times\),\(\times\),\(\times\), - & \(\times\),\(\times\),\(\times\),\(\times\) & - & \(\star\),\(\times\),\(\times\),\(\times\),\(\times\) & - & \(\times\),\(\times\),\(\times\),\(\times\),\(\times\) \\ MUTAG (1) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\checkmark\) & - \\ OHSU (0) & \(\times\), \(\star\),\(\times\), - & \(\times\),\(\times\),\(\times\),\(\times\),\(\times\) & \(\times\),\(\times\),\(\times\),\(\times\) & - & \(\times\),\(\times\),\(\times\),\(\times\) & - \\ Peking\_1 (0) & \(\checkmark\), \(\star\),\(\times\), - & \(\star\),\(\times\),\(\times\),\(\times\) & - & \(\times\),\(\times\),\(\times\),\(\times\) & - & \(\times\),\(\times\),\(\times\),\(\times\) \\ PTC\_FM (1) & - & \(\checkmark\),\(\times\),\(\times\), - & \(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\times\),\(\times\),\(\times\) & - & \(\checkmark\),\(\times\),\(\times\) \\ PROTEINS (1) & - & \(\checkmark\),\(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) & \(\checkmark\),\(\checkmark\) \\ REDDIT-BINARY (1) & - & \(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\checkmark\) & - & \(\checkmark\),\(\checkmark\) & - \\ \hline \end{tabular}
\end{table}
Table 5: Hypothesis testing results for real graph data using the standard permutation procedure. For each method, we report four results indicated by the symbols \(\checkmark\), \(\star\) and \(\times\) (depending on whether the p-value is less than 0.05, between 0.05 and 0.1 or greater than 0.1) where the node attributes are respectively based on the _degree, betweenness, closeness or externally provided_. For example, \(\checkmark\),\(\checkmark\),\(\times\),\(\times\) means that the p-value is less than 0.05 for the degree and betweenness and greater than 0.1 for the closeness and the case when node attributes are externally provided. The symbol “-” means the result is not available.
VAB are very similar and outperform the other three methods in the majority of cases. In almost all cases, persistence diagrams corresponding to the highest available homological dimension lead to the best results.
In practice, to avoid a high computational cost, Algorithm 1 implementing the permutation test in [1] often uses only a fraction of permutations uniformly sampled from all possible permutations to estimate the p-value. In our experiments, in addition to the standard permutation procedure, we introduce and explore a new shuffling procedure which considers only those permutations which mix the labels of the two groups the most. The experimental results show that the proposed permutation procedure leads to increased power of the test. This increase is more pronounced when the parameter quantifying the difference between the two groups of persistence diagrams is relatively small (i.e. when the two groups of diagrams are harder to distinguish). However, the observed improvement of the test power comes at the cost of minor inflation of the type-I error.
The loss function (1) assigns equal weights to both groups of persistence diagrams regardless of their sizes. After obtaining our main results, we investigated the performance of the loss function when the weights are adjusted according to the group sizes. The preliminary experiments with the modified loss function on the TUDataset of real graphs (see Section 4.3.2) result in a higher percentage of correct decisions. For example, the graphs in the AIDS and BZR datasets (both of which are imbalanced) are easily distinguished by the methods VAB, PL and SW for almost all node attribute functions and homological dimensions used. We plan to explore theoretical properties of the new permutation procedure and the modified loss function in our future research.
|
2308.06923 | Hall algebras and quantum symmetric pairs of Kac-Moody type II | We extend the $\imath$Hall algebra realization of $\imath$quantum groups
arising from quantum symmetric pairs, which establishes an injective
homomorphism from the universal $\imath$quantum group of Kac-Moody type to the
$\imath$Hall algebra associated to an arbitrary $\imath$quiver (not necessarily
virtually acyclic). This generalizes Lu-Wang's result. | Ming Lu, Runze Shang | 2023-08-14T03:58:21Z | http://arxiv.org/abs/2308.06923v2 | # Hall algebras and quantum symmetric pairs of Kac-Moody type II
###### Abstract.
We extend the \(\imath\)Hall algebra realization of \(\imath\)quantum groups arising from quantum symmetric pairs, which establishes an injective homomorphism from the universal \(\imath\)quantum group of Kac-Moody type to the \(\imath\)Hall algebra associated to an arbitrary \(\imath\)quiver (not necessarily virtually acyclic). This generalizes Lu-Wang's result.
Key words and phrases: Quantum symmetric pairs, Hall algebras, Quiver with involutions. 2020 Mathematics Subject Classification: Primary 17B37, 16G20, 18E30.
###### Contents
* 1 Introduction
* 2 \(\imath\)Quiver algebras and \(\imath\)Hall algebras
* 3 Quantum groups and \(\imath\)quantum groups
* 4 \(\imath\)Serre relation in \(\imath\)Hall algebras
* 5 \(\imath\)Quantum groups and \(\imath\)Hall algebras
## 1. Introduction
\(\imath\)Quantum groups \(\mathbf{U}^{\imath}\) (also called quantum symmetric pair coideal subalgebras) were introduced by G. Letzter [9, 10], which are right coideal subalgebras of Drinfeld-Jimbo's quantum groups \(\mathbf{U}\); see [8] for an extension to Kac-Moody type. A striking breakthrough of the \(\imath\)quantum groups is the discovery of canonical basis by H. Bao and W. Wang [1]. Inspired by the Hall algebra realization, the first author and Wang [14] defined the universal \(\imath\)quantum groups \(\widetilde{\mathbf{U}}^{\imath}\), such that \(\mathbf{U}^{\imath}\) is a quotient algebra of \(\widetilde{\mathbf{U}}^{\imath}\) by a certain ideal generated by central elements.
A Serre presentation of _quasi-split_\(\imath\)quantum groups \(\mathbf{U}^{\imath}\) and \(\widetilde{\mathbf{U}}^{\imath}\) of Kac-Moody type is more complicated than a Serre presentation (which is the definition) of a quantum group, and it was recently completed in full generality by the first author joint with X. Chen and W. Wang [4]; a complete presentation of \(\mathbf{U}^{\imath}\) in finite type was already given earlier by Letzter [10]. A crucial relation, known as the \(\imath\)Serre relation, in the final presentation for \(\mathbf{U}^{\imath}\) and \(\widetilde{\mathbf{U}}^{\imath}\), involves the \(\imath\)divided powers which arise from the theory of canonical basis for quantum symmetric pairs [1, 2]. The \(\imath\)divided powers come in \(2\) forms, depending on a parity.
In [14, 15], the first author and Wang defined a new class of quiver algebras \(\Lambda^{\imath}\) associated to \(\imath\)quivers \((Q,\tau)\), and then formulated the \(\imath\)Hall algebra of \(\Lambda^{\imath}\) over a finite field \(\mathbf{k}=\mathbb{F}_{q}\) in the framework of semi-derived Ringel-Hall algebras of \(1\)-Gorenstein algebras [12, 11]. This new form of Hall algebras was motivated by the construction of Bridgeland's Hall algebra of complexes [3] and Gorsky's semi-derived Hall algebras [6] (which were in turn built on [18, 7]).
The \(\imath\)Hall algebras of \(\imath\)quiver algebras were conjectured to provide a realization of the universal \(\imath\)quantum groups arising from quasi-split quantum symmetric pairs of Kac-Moody type. This
[MISSING_PAGE_POST]
and \(Q^{\prime}\) of \(Q^{\sharp}\)). The algebra \(\Lambda\) can be realized in terms of the quiver \(Q^{\sharp}\) and a certain ideal \(I^{\sharp}\) so that \(\Lambda\cong\mathbf{k}Q^{\sharp}\big{/}I^{\sharp}\).
By definition, \(\tau^{\sharp}\) on \(Q^{\sharp}\) preserves \(I^{\sharp}\) and hence induces an involution \(\tau^{\sharp}\) on the algebra \(\Lambda\). The \(\imath\)quiver algebra of \((Q,\tau)\) is the fixed point subalgebra of \(\Lambda\) under \(\tau^{\sharp}\),
\[\Lambda^{\imath}=\{x\in\Lambda\mid\tau^{\sharp}(x)=x\}. \tag{2.2}\]
The algebra \(\Lambda^{\imath}\) can be described in terms of a certain quiver \(\overline{Q}\) and its ideal \(\overline{I}\) so that \(\Lambda^{\imath}\cong\mathbf{k}\overline{Q}/\overline{I}\); see [14, Proposition 2.6]. We recall \(\overline{Q}\) and \(\overline{I}\) as follows:
* \(\overline{Q}\) is constructed from \(Q\) by adding a loop \(\varepsilon_{i}\) at the vertex \(i\in Q_{0}\) if \(i=\tau i\), and adding an arrow \(\varepsilon_{i}:i\to\tau i\) for each \(i\in Q_{0}\) if \(i\neq\tau i\);
* \(\overline{I}\) is generated by
* (Nilpotent relations) \(\varepsilon_{i}\varepsilon_{\tau i}\) for any \(i\in\mathbb{I}\);
* (Commutative relations) \(\varepsilon_{i}\alpha-\tau(\alpha)\varepsilon_{j}\) for any arrow \(\alpha:j\to i\) in \(Q_{1}\).
Moreover, it follows by [15, Proposition 2.2] that \(\Lambda^{\imath}\) is a \(1\)-Gorenstein algebra.
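As a purely illustrative sketch of the construction of \(\overline{Q}\) recalled above (written in Python; only the quiver is assembled, not the ideal \(\overline{I}\)), one may record the vertices and arrows of \(\overline{Q}\) as plain data:

```python
def bar_quiver(vertices, arrows, tau):
    # arrows: list of (source, target) pairs of Q; tau: dict giving the involution.
    # For each vertex i we add the extra arrow eps_i : i -> tau(i); when tau(i) = i
    # this is the loop eps_i at i, exactly as in the recipe above.
    new_arrows = list(arrows)
    for i in vertices:
        new_arrows.append((i, tau[i]))
    return vertices, new_arrows

# Example: Q = (1 -> 2) with the trivial involution; bar(Q) acquires loops at 1 and 2.
print(bar_quiver([1, 2], [(1, 2)], {1: 1, 2: 2}))
```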
By [14, Corollary 2.12], \(\mathbf{k}Q\) is naturally a subalgebra and also a quotient algebra of \(\Lambda^{\imath}\). Viewing \(\mathbf{k}Q\) as a subalgebra of \(\Lambda^{\imath}\), we have a restriction functor
\[\operatorname{res}:\operatorname{mod}^{\operatorname{nil}}(\Lambda^{\imath}) \longrightarrow\operatorname{mod}^{\operatorname{nil}}(\mathbf{k}Q).\]
Viewing \(\mathbf{k}Q\) as a quotient algebra of \(\Lambda^{\imath}\), we obtain a pullback functor
\[\iota:\operatorname{mod}^{\operatorname{nil}}(\mathbf{k}Q)\longrightarrow \operatorname{mod}^{\operatorname{nil}}(\Lambda^{\imath}). \tag{2.3}\]
For each \(i\in Q_{0}\), define a \(\mathbf{k}\)-algebra
\[\mathbb{H}_{i}:=\left\{\begin{array}{ll}\mathbf{k}[\varepsilon_{i}]/( \varepsilon_{i}^{2})&\text{ if }i=\tau i,\\ \mathbf{k}(\;i\;\xleftarrow{\varepsilon_{\tau i}}\tau i\;)/(\varepsilon_{i} \varepsilon_{\tau i},\varepsilon_{\tau i}\varepsilon_{i})&\text{ if }\tau i\neq i.\end{array}\right. \tag{2.4}\]
Note that \(\mathbb{H}_{i}=\mathbb{H}_{\tau i}\) for any \(i\in Q_{0}\). Choose one representative for each \(\tau\)-orbit on \(\mathbb{I}\), and let
\[\mathbb{I}_{\tau}=\{\text{the chosen representatives of $\tau$-orbits in $\mathbb{I}$}\}. \tag{2.5}\]
Define the following subalgebra of \(\Lambda^{\imath}\):
\[\mathbb{H}=\bigoplus_{i\in\mathbb{I}_{\tau}}\mathbb{H}_{i}. \tag{2.6}\]
Note that \(\mathbb{H}\) is a radical square zero selfinjective algebra. Denote by
\[\operatorname{res}_{\mathbb{H}}:\operatorname{mod}^{\operatorname{nil}}( \Lambda^{\imath})\longrightarrow\operatorname{mod}^{\operatorname{nil}}( \mathbb{H}) \tag{2.7}\]
the natural restriction functor. On the other hand, as \(\mathbb{H}\) is a quotient algebra of \(\Lambda^{\imath}\) (cf. [14, proof of Proposition 2.15]), every \(\mathbb{H}\)-module can be viewed as a \(\Lambda^{\imath}\)-module.
Recall the algebra \(\mathbb{H}_{i}\) for \(i\in\mathbb{I}_{\tau}\) from (2.4). For \(i\in Q_{0}=\mathbb{I}\), define the indecomposable module over \(\mathbb{H}_{i}\) (if \(i\in\mathbb{I}_{\tau}\)) or over \(\mathbb{H}_{\tau i}\) (if \(i\not\in\mathbb{I}_{\tau}\))
\[\mathbb{E}_{i}=\left\{\begin{array}{ll}\mathbf{k}[\varepsilon_{i}]/( \varepsilon_{i}^{2}),&\text{ if }i=\tau i;\\ \mathbf{k}\xleftarrow{\begin{array}{ll}1&\text{ }\\ \hline\end{array}}\mathbf{k}\;\text{ on the quiver }\;i\xleftarrow{\varepsilon_{i}} \tau i\;,&\text{ if }i\neq\tau i.\end{array}\right. \tag{2.8}\]
Then \(\mathbb{E}_{i}\), for \(i\in Q_{0}\), can be viewed as a \(\Lambda^{\imath}\)-module and will be called a _generalized simple_\(\Lambda^{\imath}\)-module.
Let \(\mathcal{P}^{<\infty}(\Lambda^{\imath})\) be the subcategory of \(\operatorname{mod}^{\operatorname{nil}}(\Lambda^{\imath})\) formed by modules of finite projective dimensions. Let \(\mathcal{P}^{\leq d}(\Lambda^{\imath})\) be the subcategory of \(\operatorname{mod}^{\operatorname{nil}}(\Lambda^{\imath})\) which consists of \(\Lambda^{\imath}\)-modules of projective dimension less than or equal to \(d\), for \(d\in\mathbb{N}\). Then \(\mathcal{P}^{<\infty}(\Lambda^{\imath})=\mathcal{P}^{\leq 1}(\Lambda^{\imath})\), and \(\mathbb{E}_{i}\in\mathcal{P}^{<\infty}(\Lambda^{\imath})\) for any \(i\in\mathbb{I}\); see [15, Lemma 2.3].
Following [14], we can define the Euler forms \(\langle K,M\rangle=\langle K,M\rangle_{\Lambda^{\imath}}\) and \(\langle M,K\rangle=\langle M,K\rangle_{\Lambda^{\imath}}\) for any \(K\in\mathcal{P}^{\leq 1}(\Lambda^{\imath})\), \(M\in\mathrm{mod}^{\mathrm{nil}}(\Lambda^{\imath})\). These forms descend to bilinear Euler forms on the Grothendieck groups:
\[\langle\cdot,\cdot\rangle:K_{0}(\mathcal{P}^{\leq 1}(\Lambda^{ \imath}))\times K_{0}(\mathrm{mod}^{\mathrm{nil}}(\Lambda^{\imath})) \longrightarrow\mathbb{Z},\] \[\langle\cdot,\cdot\rangle:K_{0}(\mathrm{mod}^{\mathrm{nil}}( \Lambda^{\imath}))\times K_{0}(\mathcal{P}^{\leq 1}(\Lambda^{\imath})) \longrightarrow\mathbb{Z}.\]
Denote by \(\langle\cdot,\cdot\rangle_{Q}\) the Euler form of \(\mathbf{k}Q\). Denote by \(S_{i}\) the simple \(\mathbf{k}Q\)-module (respectively, \(\Lambda^{\imath}\)-module) corresponding to vertex \(i\in Q_{0}\) (respectively, \(i\in\overline{Q}_{0}\)). These 2 Euler forms are related via the restriction functor \(\mathrm{res}:\mathrm{mod}^{\mathrm{nil}}(\Lambda^{\imath})\to\mathrm{mod}^{ \mathrm{nil}}(\mathbf{k}Q)\) as follows.
**Lemma 2.1** ([15, Lemma 3.1]).: _We have_
1. \(\langle K,M\rangle=\langle\mathrm{res}_{\mathbb{H}}(K),M\rangle\)_,_ \(\langle M,K\rangle=\langle M,\mathrm{res}_{\mathbb{H}}(K)\rangle\)_,_ \(\forall M\in\mathrm{mod}^{\mathrm{nil}}(\Lambda^{\imath}),K\in\mathcal{P}^{ \leq 1}(\Lambda^{\imath})\)_;_
2. \(\langle\mathbb{E}_{i},M\rangle=\langle S_{i},\mathrm{res}(M)\rangle_{Q}\)_,_ \(\langle M,\mathbb{E}_{i}\rangle=\langle\mathrm{res}(M),S_{\tau i}\rangle_{Q}\)_,_ \(\forall i\in Q_{0}\)_,_ \(M\in\mathrm{mod}^{\mathrm{nil}}(\Lambda^{\imath})\)_;_
3. \(\langle M,N\rangle=\frac{1}{2}\langle\mathrm{res}(M),\mathrm{res}(N)\rangle_{Q}\)_,_ \(\forall M,N\in\mathcal{P}^{\leq 1}(\Lambda^{\imath})\)_._
### Hall algebras
In this subsection we consider \(\mathbf{k}=\mathbb{F}_{q}\), and set
\[\mathbf{v}=\sqrt{q}.\]
Generalizing [12], the first author defined a (twisted) semi-derived Hall algebra for a 1-Gorenstein algebra [11]. The \(\imath\)Hall algebra \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\) for \(\imath\)quiver \((Q,\tau)\) is by definition the twisted semi-derived Hall algebra for the module category of the \(\imath\)quiver algebra \(\Lambda^{\imath}\); see [14, 15]. We recall it here briefly.
Let \(\mathcal{H}(\Lambda^{\imath})\) be the Ringel-Hall algebra of \(\Lambda^{\imath}\), i.e.,
\[\mathcal{H}(\Lambda^{\imath})=\bigoplus_{[M]\in\mathrm{Iso}(\mathrm{mod}^{ \mathrm{nil}}(\Lambda^{\imath}))}\mathbb{Q}(\mathbf{v})[M],\]
with the multiplication defined by (see [3])
\[[M]\diamond[N]=\sum_{[L]\in\mathrm{Iso}(\mathrm{mod}^{\mathrm{nil}}(\Lambda^{ \imath}))}\frac{|\operatorname{Ext}^{1}(M,N)_{L}|}{|\operatorname{Hom}(M,N)|}[L].\]
For any three objects \(X,Y,Z\), let
\[F^{Z}_{XY} =\big{|}\{L\subseteq Z,L\cong Y\text{ and }Z/L\cong X\}\big{|}\] \[=\frac{|\operatorname{Ext}^{1}(X,Y)_{Z}|}{|\operatorname{Hom}(X,Y)|}\cdot\frac{|\operatorname{Aut}(Z)|}{|\operatorname{Aut}(X)||\operatorname{Aut}(Y)|}\qquad(\text{Riedtmann-Peng formula}). \tag{2.9}\]
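As a toy sanity check of the Riedtmann-Peng formula (outside the \(\imath\)Hall algebra setting), one can work in the semisimple category of finite-dimensional \(\mathbb{F}_{q}\)-vector spaces, where \(\operatorname{Ext}^{1}\) vanishes: taking \(X=\mathbf{k}^{a}\), \(Y=\mathbf{k}^{b}\) and \(Z=\mathbf{k}^{a+b}\), the Hall number \(F^{Z}_{XY}\) is the number of \(b\)-dimensional subspaces of \(Z\), and the right-hand side of (2.9) reproduces it. The following Python sketch verifies this for small parameters:

```python
def gl_order(n, q):            # |GL_n(F_q)|
    out = 1
    for i in range(n):
        out *= q**n - q**i
    return out

def num_subspaces(n, k, q):    # number of k-dimensional subspaces of F_q^n
    num, den = 1, 1
    for i in range(k):
        num *= q**n - q**i
        den *= q**k - q**i
    return num // den

def riedtmann_peng(a, b, q):
    # Right-hand side of (2.9) for X = k^a, Y = k^b, Z = k^(a+b): here Ext^1 = 0,
    # so the only extension class with middle term Z is the split one.
    ext_count = 1
    hom_count = q**(a * b)     # |Hom(X, Y)|
    return ext_count * gl_order(a + b, q) // (hom_count * gl_order(a, q) * gl_order(b, q))

for q in (2, 3, 5):
    for a in range(4):
        for b in range(4):
            assert riedtmann_peng(a, b, q) == num_subspaces(a + b, b, q)
```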
Define \(I\) to be the two-sided ideal of \(\mathcal{H}(\Lambda^{\imath})\) generated by
\[\{[K]-[K^{\prime}]\mid\mathrm{res}_{\mathbb{H}}(K)\cong\mathrm{res}_{ \mathbb{H}}(K^{\prime}),K,K^{\prime}\in\mathcal{P}^{<\infty}(\Lambda^{\imath}) \}\bigcup\] \[\{[L]-[K\oplus M]\mid\exists\text{ exact sequence }0\longrightarrow K \longrightarrow L\longrightarrow M\longrightarrow 0,K\in\mathcal{P}^{<\infty}( \Lambda^{\imath})\}. \tag{2.10}\]
Consider the following multiplicatively closed subset \(\mathcal{S}\) of \(\mathcal{H}(\Lambda^{\imath})/I\):
\[\mathcal{S}=\{a[K]\in\mathcal{H}(\Lambda^{\imath})/I\mid a\in\mathbb{Q}( \mathbf{v})^{\times},K\in\mathcal{P}^{<\infty}(\Lambda^{\imath})\}. \tag{2.11}\]
The semi-derived Hall algebra of \(\Lambda^{\imath}\)[11] is defined to be the localization
\[\mathcal{SDH}(\Lambda^{\imath}):=(\mathcal{H}(\Lambda^{\imath})/I)[\mathcal{S} ^{-1}].\]
We define the \(\imath\)Hall algebra (i.e., a twisted semi-derived Hall algebra) \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\) [14, §4.4] to be the \(\mathbb{Q}(\mathbf{v})\)-algebra on the same vector space as \(\mathcal{SDH}(\Lambda^{\imath})\) but with twisted multiplication given by
\[[M]*[N]=\mathbf{v}^{\langle\operatorname{res}(M),\operatorname{res}(N)\rangle_{Q }}[M]\diamond[N]. \tag{2.12}\]
## 3. Quantum groups and \(\imath\)quantum groups
In this section, we review the quantum groups and \(\imath\)quantum groups following [14, 15]; also see [17, 9, 10, 8, 1].
### Quantum groups
Let \(Q\) be a quiver (without loops) with vertex set \(Q_{0}=\mathbb{I}\). Let \(n_{ij}\) be the number of edges connecting vertices \(i\) and \(j\). Let \(C=(c_{ij})_{i,j\in\mathbb{I}}\) be the symmetric generalized Cartan matrix of the underlying graph of \(Q\), defined by \(c_{ij}=2\delta_{ij}-n_{ij}.\) Let \(\mathfrak{g}\) be the corresponding Kac-Moody Lie algebra.
Let \(v\) be an indeterminate. Define the quantum integers, quantum (double) factorials, and quantum binomial coefficients, for \(r\in\mathbb{N}\) and \(m\in\mathbb{Z}\),
\[\begin{split}[m]=[m]_{v}=\frac{v^{m}-v^{-m}}{v-v^{-1}},& \qquad[r]^{!}=[r]^{!}_{v}=\prod_{i=1}^{r}[i]_{v},\\ [2r]^{!!}=[2r]^{!!}_{v}=\prod_{i=1}^{r}[2i]_{v},& \qquad\begin{bmatrix}m\\ r\end{bmatrix}=\frac{[m][m-1]\ldots[m-r+1]}{[r]^{!}}.\end{split} \tag{3.1}\]
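For concreteness, the quantities in (3.1) can be evaluated symbolically; the following short sketch (in Python with sympy, an auxiliary tool not part of the paper's setup) implements the definitions verbatim:

```python
from sympy import symbols, cancel

v = symbols('v')

def qint(m):                   # [m] = (v^m - v^{-m})/(v - v^{-1})
    return (v**m - v**(-m)) / (v - 1/v)

def qfact(r):                  # [r]!
    out = 1
    for i in range(1, r + 1):
        out *= qint(i)
    return out

def qdfact(r):                 # [2r]!! = [2][4]...[2r]
    out = 1
    for i in range(1, r + 1):
        out *= qint(2 * i)
    return out

def qbinom(m, r):              # [m; r] = [m][m-1]...[m-r+1]/[r]!
    num = 1
    for i in range(r):
        num *= qint(m - i)
    return num / qfact(r)

# [4] = v^3 + v + v^{-1} + v^{-3}; sympy may display an equivalent rational form.
print(cancel(qint(4)), cancel(qbinom(4, 2)))
```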
Write \([A,B]=AB-BA\). Then \(\widetilde{\mathbf{U}}=\widetilde{\mathbf{U}}_{v}(\mathfrak{g})\) is defined to be the \(\mathbb{Q}(v)\)-algebra generated by \(E_{i},F_{i},\widetilde{K}_{i},\widetilde{K}^{\prime}_{i}\), \(i\in\mathbb{I}\), where \(\widetilde{K}_{i},\widetilde{K}^{\prime}_{i}\) are invertible, subject to the following relations:
\[[E_{i},F_{j}]=\delta_{ij}\frac{\widetilde{K}_{i}-\widetilde{K}^{ \prime}_{i}}{v-v^{-1}}, [\widetilde{K}_{i},\widetilde{K}_{j}]=[\widetilde{K}_{i}, \widetilde{K}^{\prime}_{j}]=[\widetilde{K}^{\prime}_{i},\widetilde{K}^{\prime }_{j}]=0, \tag{3.3}\] \[\widetilde{K}_{i}E_{j}=v^{c_{ij}}E_{j}\widetilde{K}_{i}, \widetilde{K}_{i}F_{j}=v^{-c_{ij}}F_{j}\widetilde{K}_{i},\] (3.4) \[\widetilde{K}^{\prime}_{i}E_{j}=v^{-c_{ij}}E_{j}\widetilde{K}^{ \prime}_{i}, \widetilde{K}^{\prime}_{i}F_{j}=v^{c_{ij}}F_{j}\widetilde{K}^{ \prime}_{i}, \tag{3.2}\]
and the quantum Serre relations, for \(i\neq j\in\mathbb{I}\),
\[\sum_{r=0}^{1-c_{ij}}(-1)^{r}\left[\begin{array}{c}1-c_{ij}\\ r\end{array}\right]E^{r}_{i}E_{j}E^{1-c_{ij}-r}_{i}=0, \tag{3.6}\] \[\sum_{r=0}^{1-c_{ij}}(-1)^{r}\left[\begin{array}{c}1-c_{ij}\\ r\end{array}\right]F^{r}_{i}F_{j}F^{1-c_{ij}-r}_{i}=0. \tag{3.5}\]
Note that \(\widetilde{K}_{i}\widetilde{K}^{\prime}_{i}\) are central in \(\widetilde{\mathbf{U}}\) for all \(i\). The comultiplication \(\Delta:\widetilde{\mathbf{U}}\to\widetilde{\mathbf{U}}\otimes\widetilde{ \mathbf{U}}\) is given by
\[\Delta(E_{i})=E_{i}\otimes 1+\widetilde{K}_{i}\otimes E_{i}, \Delta(F_{i})=1\otimes F_{i}+F_{i}\otimes\widetilde{K}^{\prime}_{i},\] \[\Delta(\widetilde{K}_{i})=\widetilde{K}_{i}\otimes\widetilde{K}_{i}, \Delta(\widetilde{K}^{\prime}_{i})=\widetilde{K}^{\prime}_{i} \otimes\widetilde{K}^{\prime}_{i}. \tag{3.7}\]
Analogously as for \(\widetilde{\mathbf{U}}\), the quantum group \(\mathbf{U}\) is defined to be the \(\mathbb{Q}(v)\)-algebra generated by \(E_{i},F_{i},K_{i},K_{i}^{-1}\), \(i\in\mathbb{I}\), subject to the relations modified from (3.2)-(3.6) with \(\widetilde{K}_{i}\) and \(\widetilde{K}^{\prime}_{i}\) replaced by \(K_{i}\) and \(K_{i}^{-1}\), respectively. The comultiplication \(\Delta\) is obtained by modifying (3.7) with \(\widetilde{K}_{i}\) and \(\widetilde{K}^{\prime}_{i}\) replaced by \(K_{i}\) and \(K_{i}^{-1}\), respectively (cf. [17]; beware that our \(K_{i}\) has a different meaning from \(K_{i}\in\mathbf{U}\) therein).
### \(\imath\)**Quantum groups.**
For a (generalized) Cartan matrix \(C=(c_{ij})\), let \(\operatorname{Aut}(C)\) be the group of all permutations \(\tau\) of the set \(\mathbb{I}\) such that \(c_{ij}=c_{\tau i,\tau j}\). An element \(\tau\in\operatorname{Aut}(C)\) is called an _involution_ if \(\tau^{2}=\operatorname{Id}\).
Let \(\tau\) be an involution in \(\operatorname{Aut}(C)\). We define \(\widetilde{\mathbf{U}}^{\imath}\) to be the \(\mathbb{Q}(v)\)-subalgebra of \(\widetilde{\mathbf{U}}\) generated by
\[B_{i}=F_{i}+E_{\tau i}\widetilde{K}_{i}^{\prime},\qquad\widetilde{k}_{i}= \widetilde{K}_{i}\widetilde{K}_{\tau i}^{\prime},\quad\forall i\in\mathbb{I}. \tag{3.8}\]
Let \(\widetilde{\mathbf{U}}^{\imath 0}\) be the \(\mathbb{Q}(v)\)-subalgebra of \(\widetilde{\mathbf{U}}^{\imath}\) generated by \(\widetilde{k}_{i}\), for \(i\in\mathbb{I}\). By [14, Lemma 6.1], the elements \(\widetilde{k}_{i}\) (for \(i=\tau i\)) and \(\widetilde{k}_{i}\widetilde{k}_{\tau i}\) (for \(i\neq\tau i\)) are central in \(\widetilde{\mathbf{U}}^{\imath}\).
Let \(\boldsymbol{\varsigma}=(\varsigma_{i})\in(\mathbb{Q}(v)^{\times})^{\mathbb{I}}\) be such that \(\varsigma_{i}=\varsigma_{\tau i}\) for all \(i\). Let \(\mathbf{U}^{\imath}:=\mathbf{U}_{\boldsymbol{\varsigma}}^{\imath}\) be the \(\mathbb{Q}(v)\)-subalgebra of \(\mathbf{U}\) generated by
\[B_{i}=F_{i}+\varsigma_{i}E_{\tau i}K_{i}^{-1},\quad k_{j}=K_{j}K_{\tau j}^{-1},\qquad\forall i\in\mathbb{I},j\in\mathbb{I}\backslash\mathbb{I}_{\tau}.\]
It is known [9, 8] that \(\mathbf{U}^{\imath}\) is a right coideal subalgebra of \(\mathbf{U}\) in the sense that \(\Delta:\mathbf{U}^{\imath}\to\mathbf{U}^{\imath}\otimes\mathbf{U}\); and \((\mathbf{U},\mathbf{U}^{\imath})\) is called a _quantum symmetric pair_ (_QSP_ for short), as they specialize at \(v=1\) to \((U(\mathfrak{g}),U(\mathfrak{g}^{\theta}))\), where \(\theta=\omega\circ\tau\), \(\omega\) is the Chevalley involution, and \(\tau\) is understood here as an automorphism of \(\mathfrak{g}\).
The algebras \(\mathbf{U}_{\boldsymbol{\varsigma}}^{\imath}\), for \(\boldsymbol{\varsigma}\in(\mathbb{Q}(v)^{\times})^{\mathbb{I}}\), are obtained from \(\widetilde{\mathbf{U}}^{\imath}\) by central reductions.
**Proposition 3.1** ([14, Proposition 6.2]).: _(1) The algebra \(\mathbf{U}^{\imath}\) is isomorphic to the quotient of \(\widetilde{\mathbf{U}}^{\imath}\) by the ideal generated by_
\[\widetilde{k}_{i}-\varsigma_{i}\;(\text{for }i=\tau i),\qquad\widetilde{k}_{i} \widetilde{k}_{\tau i}-\varsigma_{i}\varsigma_{\tau i}\;(\text{for }i\neq\tau i). \tag{3.9}\]
_The isomorphism is given by sending \(B_{i}\mapsto B_{i},k_{j}\mapsto\varsigma_{\tau j}^{-1}\widetilde{k}_{j},k_{j}^ {-1}\mapsto\varsigma_{j}^{-1}\widetilde{k}_{\tau j},\forall i\in\mathbb{I},j \in\mathbb{I}\backslash\mathbb{I}_{\tau}\)._
_(2) The algebra \(\widetilde{\mathbf{U}}^{\imath}\) is a right coideal subalgebra of \(\widetilde{\mathbf{U}}\); that is, \((\widetilde{\mathbf{U}},\widetilde{\mathbf{U}}^{\imath})\) forms a QSP._
We shall refer to \(\widetilde{\mathbf{U}}^{\imath}\) and \(\mathbf{U}^{\imath}\) as _(quasi-split) \(\imath\)quantum groups_; they are called _split_ if \(\tau=\operatorname{Id}\).
For \(i\in\mathbb{I}\) with \(\tau i=i\), generalizing the constructions in [1, 2], we define the _\(\imath\)divided powers_ of \(B_{i}\) to be (also see [5])
\[B_{i,1}^{(m)}=\frac{1}{[m]^{!}}\left\{\begin{array}{ll}B_{i} \prod_{s=1}^{k}(B_{i}^{2}-\widetilde{k}_{i}[2s-1]^{2})&\text{if }m=2k+1,\\ \prod_{s=1}^{k}(B_{i}^{2}-\widetilde{k}_{i}[2s-1]^{2})&\text{if }m=2k; \end{array}\right. \tag{3.11}\] \[B_{i,0}^{(m)}=\frac{1}{[m]^{!}}\left\{\begin{array}{ll}B_{i} \prod_{s=1}^{k}(B_{i}^{2}-\widetilde{k}_{i}[2s]^{2})&\text{if }m=2k+1,\\ \prod_{s=1}^{k}(B_{i}^{2}-\widetilde{k}_{i}[2s-2]^{2})&\text{if }m=2k. \end{array}\right. \tag{3.10}\]
On the other hand, for \(i\in\mathbb{I}\) with \(i\neq\tau i\), we define the divided powers as in the quantum group setting: for \(m\in\mathbb{N}\),
\[B_{i}^{(m)}=\frac{B_{i}^{m}}{[m]^{!}}. \tag{3.12}\]
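Since \(\widetilde{k}_{i}\) is central in \(\widetilde{\mathbf{U}}^{\imath}\), the products in (3.10)-(3.11) can be expanded as commutative polynomials in \(B_{i}\) and \(\widetilde{k}_{i}\). The following sketch (Python/sympy, an assumed auxiliary tool) simply evaluates the defining formulas for small \(m\):

```python
from sympy import symbols, expand, cancel

v, B, k = symbols('v B k')     # k stands for the central element \tilde{k}_i

def qint(m):
    return (v**m - v**(-m)) / (v - 1/v)

def qfact(r):
    out = 1
    for i in range(1, r + 1):
        out *= qint(i)
    return out

def idivided(m, parity):
    # i-divided power B_{i,parity}^{(m)} of (3.10)-(3.11), expanded as a
    # polynomial in B and k; parity is 1 for (3.10) and 0 for (3.11).
    half = m // 2
    out = B if m % 2 else 1
    for s in range(1, half + 1):
        j = 2 * s - 1 if parity == 1 else (2 * s if m % 2 else 2 * s - 2)
        out *= B**2 - k * qint(j)**2
    return cancel(expand(out) / qfact(m))

for m in range(5):
    print(m, idivided(m, 1), idivided(m, 0))
```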
Denote
\[(a;x)_{0}=1,\qquad(a;x)_{n}=(1-a)(1-ax)\cdots(1-ax^{n-1}),\quad n\geq 1.\]
We have the following _Serre presentation_ of \(\widetilde{\mathbf{U}}^{\imath}\), with \(\overline{p}_{i}\in\mathbb{Z}_{2}\) fixed for each \(i\in\mathbb{I}\).
**Proposition 3.2** ([15, Theorem 4.2]; also cf. [4, Theorem 2]).: _The \(\mathbb{Q}(v)\)-algebra \(\widetilde{\mathbf{U}}^{\imath}\) has a presentation with generators \(B_{i}\), \(\widetilde{k}_{i}\;(i\in\mathbb{I})\) and the relations (3.13)-(3.17) below: for \(l\in\mathbb{I}\), and \(i\neq j\in\mathbb{I}\),_
\[\widetilde{k}_{i}\widetilde{k}_{l}=\widetilde{k}_{l}\widetilde{k} _{i},\quad\widetilde{k}_{i}B_{l}=v^{c_{\tau i,l}-c_{il}}B_{l}\widetilde{k}_{i}, \tag{3.14}\] \[B_{i}B_{j}-B_{j}B_{i}=0,\quad\text{ if }c_{ij}=0\text{ and }\tau i\neq j, \tag{3.13}\]
\[\sum_{n=0}^{1-c_{ij}}(-1)^{n}B_{i}^{(n)}B_{j}B_{i}^{(1-c_{ij}-n)}=0, \quad\text{ if }j\neq\tau i\neq i, \tag{3.16}\] \[\sum_{n=0}^{1-c_{i,\tau i}}(-1)^{n+c_{i,\tau i}}B_{i}^{(n)}B_{\tau i }B_{i}^{(1-c_{i,\tau i}-n)}=\frac{1}{v-v^{-1}}\times\] \[\qquad\Big{(}v^{c_{i,\tau i}}(v^{-2};v^{-2})_{-c_{i,\tau i}}B_{i}^ {(-c_{i,\tau i})}\widetilde{k}_{i}-(v^{2};v^{2})_{-c_{i,\tau i}}B_{i}^{(-c_{i, \tau i})}\widetilde{k}_{\tau i}\Big{)},\quad\text{ if }\tau i\neq i,\] (3.17) \[\sum_{n=0}^{1-c_{ij}}(-1)^{n}B_{i,\overline{p}_{i}}^{(n)}B_{j}B_{i,\overline{c_{ij}}+\overline{p}_{i}}^{(1-c_{ij}-n)}=0,\quad\text{ if }i=\tau i. \tag{3.15}\]
## 4. \(\imath\)Serre relation in \(\imath\)Hall algebras
In this section, we shall reappear the \(\imath\)Serre relation (3.17) in \(\imath\)Hall algebras.
### \(\imath\)Divided powers in \(\imath\)Hall algebras
For a \(\Lambda^{\imath}\)-module \(M\), we shall write
\[[lM]=[\underbrace{M\oplus\cdots\oplus M}_{l}],\qquad[M]^{l}=\underbrace{[M]* \cdots*[M]}_{l}.\]
Following [15], if \(\tau i=i\), we define the \(\imath\)divided power of \([S_{i}]\) in \(\mathcal{SD\widetilde{H}}(\Lambda^{\imath})\) as follows:
\[[S_{i}]_{\bar{1}}^{(m)}:=\frac{1}{[m]^{!}_{\mathbf{v}}}\left\{ \begin{array}{ll}[S_{i}]*\prod_{j=1}^{k}([S_{i}]^{2}+\mathbf{v}^{-1}( \mathbf{v}^{2}-1)^{2}[2j-1]^{2}_{\mathbf{v}}[\mathbb{E}_{i}])&\text{if }m=2k+1,\\ \prod_{j=1}^{k}([S_{i}]^{2}+\mathbf{v}^{-1}(\mathbf{v}^{2}-1)^{2}[2j-1]^{2}_{ \mathbf{v}}[\mathbb{E}_{i}])&\text{if }m=2k;\end{array}\right. \tag{4.2}\] \[[S_{i}]_{\bar{0}}^{(m)}:=\frac{1}{[m]^{!}_{\mathbf{v}}}\left\{ \begin{array}{ll}[S_{i}]*\prod_{j=1}^{k}([S_{i}]^{2}+\mathbf{v}^{-1}( \mathbf{v}^{2}-1)^{2}[2j]^{2}_{\mathbf{v}}[\mathbb{E}_{i}])&\text{if }m=2k+1,\\ \prod_{j=1}^{k}([S_{i}]^{2}+\mathbf{v}^{-1}(\mathbf{v}^{2}-1)^{2}[2j-2]^{2}_{ \mathbf{v}}[\mathbb{E}_{i}])&\text{if }m=2k.\end{array}\right. \tag{4.1}\]
If \(\tau i\neq i\), define
\[[S_{i}]^{(m)}:=\frac{1}{[m]^{!}_{\mathbf{v}}}[S_{i}]^{m}=\mathbf{v}^{-\frac{m( m-1)}{2}}[mS_{i}]. \tag{4.3}\]
We recall the expansion formula of the \(\imath\)divided powers in terms of an \(\imath\)Hall basis. See (3.1) for notation \([2k]^{!!}_{\mathbf{v}}\).
**Lemma 4.1** ([15, Propositions 6.4-6.5]).: _For any \(m\in\mathbb{N}\), \(\overline{p}\in\mathbb{Z}_{2}\) we have_
\[[S_{i}]_{\overline{p}}^{(m)}=\left\{\begin{array}{ll}\sum\limits_{k=0}^{\lfloor\frac{m}{2}\rfloor}\frac{\mathbf{v}^{k(k-1)-\binom{m-2k}{2}}}{[m-2k]^{!}_{\mathbf{v}}[2k]^{!!}_{\mathbf{v}}}(\mathbf{v}-\mathbf{v}^{-1})^{k}[(m-2k)S_{i}]*[\mathbb{E}_{i}]^{k},&\text{ if }\overline{m}=\overline{p};\\ \sum\limits_{k=0}^{\lfloor\frac{m}{2}\rfloor}\frac{\mathbf{v}^{k(k+1)-\binom{m-2k}{2}}}{[m-2k]^{!}_{\mathbf{v}}[2k]^{!!}_{\mathbf{v}}}(\mathbf{v}-\mathbf{v}^{-1})^{k}[(m-2k)S_{i}]*[\mathbb{E}_{i}]^{k},&\text{ if }\overline{m}\neq\overline{p}.\end{array}\right. \tag{4.4}\]
### \(\imath\)Serre relations in \(\imath\)Hall algebras
Consider the \(\imath\)quiver
\[Q=(\;1\;\overset{a}{\underset{b}{\rightleftarrows}}\;2\;),\quad\tau=\mathrm{Id},\qquad\text{where }a+b=-c_{12}. \tag{4.5}\]
Here and below, \(1\;\overset{a}{\underset{b}{\rightleftarrows}}\;2\) means that there are \(a\) arrows from \(1\) to \(2\) and \(b\) arrows from \(2\) to \(1\). The arrows \(1\to 2\) are denoted by \(\alpha_{i}\), \(1\leq i\leq a\), and the arrows \(2\to 1\) by \(\beta_{j}\), \(1\leq j\leq b\). Then the corresponding \(\imath\)quiver algebra \(\Lambda^{\imath}\) has quiver \(\overline{Q}\) obtained from \(Q\) by adding a loop \(\varepsilon_{i}\) at each vertex \(i=1,2\).
**Theorem 4.2**.: _Let \(\Lambda^{\imath}\) be the \(\imath\)quiver algebra associated to the \(\imath\)quiver (4.5). Then the following identity holds in \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\), for any \(\overline{p}\in\mathbb{Z}/2\):_
\[\sum_{n=0}^{1+a+b}(-1)^{n}[S_{1}]^{(n)}_{\overline{p}}*[S_{2}]*[S_{1}]^{(1+a+b- n)}_{\overline{a+\overline{b}+\overline{p}}}=0. \tag{4.6}\]
The proof of Theorem 4.2 shall occupy the remainder of this section.
### A building block
We recall a useful multiplication formula of \(\imath\)Hall algebras in the following lemma.
**Lemma 4.3** ([15, Proposition 3.10]).: _Let \((Q,\tau)\) be an \(\imath\)quiver with \(\tau=\mathrm{Id}\). For any \(A,B\in\mathrm{mod}^{\mathrm{nil}}(\mathbf{k}Q)\subset\mathrm{mod}^{\mathrm{nil} }(\Lambda^{\imath})\), we have_
\[[A]*[B] =\sum_{[L],[M],[N]\in\mathrm{Iso}(\mathrm{mod}^{\mathrm{nil}}( \mathbf{k}Q))}\mathbf{v}^{-(A,B)_{Q}}q^{(N,L)_{Q}}\frac{|\operatorname{Ext}^{ 1}(N,L)_{M}|}{|\operatorname{Hom}(N,L)|}\] \[\quad\cdot|\{s\in\operatorname{Hom}(A,B)\mid\ker s\cong N, \operatorname{Coker}s\cong L\}|\cdot[M]*[\mathbb{K}_{\widehat{A}-\widehat{N}}] \tag{4.7}\]
_in \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\)._
For any \(\Lambda^{\imath}\)-module \(M=(M_{i},M(\alpha_{j}),M(\beta_{k}),M(\varepsilon_{i}))_{i=1,2;1\leq j\leq a,1\leq k\leq b}\) such that \(\dim_{\mathbf{k}}M_{2}=1\), we define
\[U_{M}:=\bigcap_{1\leq i\leq a}\operatorname{Ker}M(\alpha_{i}),\qquad W_{M}:= \sum_{j=1}^{b}\operatorname{Im}M(\beta_{j}), \tag{4.8}\]
and let
\[u_{M}:=\dim U_{M},\qquad w_{M}:=\dim W_{M}. \tag{4.9}\]
For any \(r\geq 0\), define
\[\mathcal{I}_{r}:= \{[M]\in\mathrm{Iso}(\mathrm{mod}^{\mathrm{nil}}(\mathbf{k}Q)) \mid\widehat{M}=r\widehat{S_{1}}+\widehat{S_{2}},W_{M}\subseteq U_{M}\}, \tag{4.11}\] \[\mathcal{J}_{r}:= \{[M]\in\mathrm{Iso}(\mathrm{mod}^{\mathrm{nil}}(\mathbf{k}Q)) \mid\widehat{M}=r\widehat{S_{1}}+\widehat{S_{2}},S_{2}\in\operatorname{ add}\top(M)\}. \tag{4.10}\]
The following proposition is a generalization of [15, Proposition 7.3].
**Proposition 4.4**.: _The following identity holds in \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\), for \(s,t\geq 0\):_
\[[sS_{1}]*[S_{2}]*[tS_{1}]= \mathbf{v}^{-tb}\sum_{r=0}^{\min\{s,t\}}\sum_{[M]\in\mathcal{I} _{s+t-2r}}\mathbf{v}^{\widetilde{p}(a,b,r,s,t)}(\mathbf{v}-\mathbf{v}^{-1})^{ s-r+t+1}\frac{[s]_{\mathbf{v}}^{\dagger}[t]_{\mathbf{v}}^{\dagger}}{[r]_{ \mathbf{v}}^{\dagger}} \tag{4.12}\] \[\left[\begin{matrix}u_{M}-w_{M}\\ (t-r)-w_{M}\end{matrix}\right]_{\mathbf{v}}\frac{1}{|\operatorname{Aut}(M)|}[ M]*[\mathbb{E}_{1}]^{r},\]
_where_
\[\tilde{p}(a,b,r,s,t)= 2(s-r)(t-r-a)-s(t-a)+2br-2r(t-r)-r^{2}+rs+t^{2}+\binom{t}{2}+\] \[(s-r)^{2}+\binom{s-r}{2}+\big{(}u_{M}-(t-r)\big{)}\big{(}(t-r)-w _{M}\big{)}+1.\]
Proof.: Using Lemma 4.3, we compute
LHS(4.12) \[= {\bf v}^{-tb}\sum_{[U]\in{\mathcal{J}}_{t}}|\operatorname{Ext}^{1}(S _{2},S_{1}^{\oplus t})_{U}|[S_{1}^{\oplus s}]*[U]\] \[= {\bf v}^{-tb}\sum_{[U]\in{\mathcal{J}}_{t}}|\operatorname{Ext}^{1 }(S_{2},S_{1}^{\oplus t})_{U}|\sum_{r=0}^{\min\{s,t\}}\sum_{[M],[L]\in{ \mathcal{J}}_{t-r}}{\bf v}^{sa-st}q^{(s-r)(t-r-a)}\] \[\frac{|\operatorname{Ext}^{1}(S_{1}^{\oplus(s-r)},L)_{M}|}{| \operatorname{Hom}(S_{1}^{\oplus(s-r)},L)|}\cdot|\{f:S_{1}^{\oplus t}\to U \mid\ker f\cong S_{1}^{\oplus(s-r)},\operatorname{Coker}f\cong L\}|\cdot[M]*[ \mathbb{K}_{1}]^{r}.\]
In fact, for a fixed \(0\leq r\leq\min\{s,t\}\), we have \(\widehat{M}=(s+t-2r)\widehat{S_{1}}+\widehat{S_{2}}\). For any \(f:S_{1}^{\oplus s}\to U\) such that \(\ker f\cong S_{1}^{\oplus(s-r)}\) and \(\operatorname{Coker}f\cong L\), there exist \(f_{1}:S_{1}^{\oplus s}\twoheadrightarrow S_{1}^{\oplus r}\) and \(f_{2}:S_{1}^{\oplus r}\hookrightarrow U\) such that \(\operatorname{Ker}f_{1}\cong S_{1}^{\oplus(s-r)}\) and \(\operatorname{Coker}f_{2}\cong L\). Moreover, \(f_{1}\) and \(f_{2}\) are unique up to a free action of \(\operatorname{Aut}(S_{1}^{\oplus r})\). Then
\[|\{f:S_{1}^{\oplus s}\to U\mid\ker f\cong S_{1}^{\oplus(s-r)}, \operatorname{Coker}f\cong L\}|\] \[= \frac{|\{f_{1}:S_{1}^{\oplus s}\twoheadrightarrow S_{1}^{\oplus r }\mid\operatorname{Ker}f_{1}\cong S_{1}^{\oplus(s-r)}\}|\cdot|\{f_{2}:S_{1}^{ \oplus r}\hookrightarrow U\mid\operatorname{Coker}f_{2}\cong L\}|}{| \operatorname{Aut}(S_{1}^{\oplus r})|}\] \[= |\{f_{1}:S_{1}^{\oplus s}\twoheadrightarrow S_{1}^{\oplus r}\}| \cdot F_{L,S_{1}^{\oplus r}}^{U}.\]
So
LHS(4.12) \[= {\bf v}^{-tb}\sum_{r\geq 0,[M]}\sum_{[U]\in{\mathcal{J}}_{t},[L] \in{\mathcal{J}}_{t-r}}{\bf v}^{sa-st}q^{(s-r)(t-r-a)}\cdot|\operatorname{Ext }^{1}(S_{2},S_{1}^{\oplus t})_{U}|\cdot\] \[|\operatorname{Ext}^{1}(S_{1}^{\oplus(s-r)},L)_{M}|\cdot q^{-(s-r )(t-r)}\cdot|\{f_{1}:S_{1}^{\oplus s}\twoheadrightarrow S_{1}^{\oplus r}\}| \cdot F_{L,S_{1}^{\oplus r}}^{U}\cdot[M]*[\mathbb{K}_{1}]^{r}\] \[= {\bf v}^{-tb}\sum_{r\geq 0,[M]}\sum_{[U]\in{\mathcal{J}}_{t},[L] \in{\mathcal{J}}_{t-r}}{\bf v}^{sa-st+2ar}\prod_{j=0}^{r-1}(q^{s}-q^{j})\cdot |\operatorname{Ext}^{1}(S_{2},S_{1}^{\oplus t})_{U}|\cdot\] \[F_{L,S_{1}^{\oplus r}}^{U}\cdot[M]*[\mathbb{K}_{1}]^{r}.\]
It is obvious that \(F_{S_{2},S_{1}^{\oplus t}}^{U}\neq 0\) if and only if \([U]\in{\mathcal{J}}_{t}\), and in this case, \(F_{S_{2},S_{1}^{\oplus t}}^{U}=1\). By the Riedtmann-Peng formula we have
\[|\operatorname{Ext}^{1}(S_{2},S_{1}^{\oplus t})_{U}|=\frac{| \operatorname{Aut}(S_{2})|\cdot|\operatorname{Aut}(S_{1}^{\oplus t})|}{| \operatorname{Aut}(U)|}F_{S_{2},S_{1}^{\oplus t}}^{U}=\frac{|\operatorname{ Aut}(S_{2})|\cdot|\operatorname{Aut}(S_{1}^{\oplus t})|}{|\operatorname{Aut}(U)|}.\]
Furthermore, we have
\[|\operatorname{Ext}^{1}(S_{1}^{\oplus(s-r)},L)_{M}|=F_{S_{1}^{\oplus(s-r)},L}^{M }\cdot|\operatorname{Hom}(S_{1}^{\oplus(s-r)},L)|\frac{|\operatorname{Aut} \left(S_{1}^{\oplus(s-r)}\right)|\cdot|\operatorname{Aut}(L)|}{| \operatorname{Aut}(M)|}\]
by using the Riedtmann-Peng formula again. So
LHS(4.12) \[= {\bf v}^{-tb}\sum_{r\geq 0,[M]}\sum_{[U]\in{\mathcal{J}}_{t},[L] \in{\mathcal{J}}_{t-r}}{\bf v}^{-sa-st+2ar}\prod_{j=0}^{r-1}(q^{s}-q^{j}) \cdot|\operatorname{Ext}^{1}(S_{2},S_{1}^{\oplus t})_{U}|\cdot F_{S_{1}^{\oplus( s-r)},L}^{M}\cdot\]
\[|\mathrm{Hom}(S_{1}^{\oplus(s-r)},L)|\cdot\frac{|\operatorname{Aut} \left(S_{1}^{\oplus(s-r)}\right)|\cdot|\operatorname{Aut}(L)|}{|\operatorname{ Aut}(M)|}\cdot F_{L,S_{1}^{\oplus r}}^{U}\cdot[M]*[\mathbb{K}_{1}]^{r}\] \[= \mathbf{v}^{-tb}\sum_{r\geq 0,[M]}\sum_{[U]\in\mathcal{J}_{t},[L] \in\mathcal{J}_{t-r}}\mathbf{v}^{2(s-r)(t-r-a)-s(t-a)}\prod_{j=0}^{r-1}(q^{s}-q ^{j})\prod_{j=0}^{s-r-1}(q^{s-r}-q^{j})\cdot\] \[|\mathrm{Ext}^{1}(S_{2},S_{1}^{\oplus t})_{U}|\cdot F_{S_{1}^{ \oplus(s-r)},L}^{M}\cdot F_{L,S_{1}^{\oplus r}}^{U}\cdot\frac{|\operatorname{ Aut}(L)|}{|\operatorname{Aut}(M)|}\cdot[M]*[\mathbb{K}_{1}]^{r}\] \[= \mathbf{v}^{-tb}\sum_{r\geq 0,[M]}\sum_{[U]\in\mathcal{J}_{t},[L] \in\mathcal{J}_{t-r}}\mathbf{v}^{2(s-r)(t-r-a)-s(t-a)}\prod_{j=0}^{r-1}(q^{s}- q^{j})\prod_{j=0}^{s-r-1}(q^{s-r}-q^{j})\] \[\frac{|\operatorname{Aut}(S_{2})|\cdot|\operatorname{Aut}(S_{1}^ {\oplus t})|}{|\operatorname{Aut}(U)|}F_{S_{1}^{\oplus(s-r)},L}^{M}\cdot F_{L,S_{1}^{\oplus r}}^{U}\cdot\frac{|\operatorname{Aut}(L)|}{|\operatorname{Aut} (M)|}\cdot[M]*[\mathbb{K}_{1}]^{r}.\]
Since
\[F_{L,S_{1}^{\oplus r}}^{U}=\frac{|\operatorname{Ext}^{1}(L,S_{1}^{\oplus r})_{U}|}{|\operatorname{Hom}(L,S_{1}^{\oplus r})|}\frac{|\operatorname{Aut}(U)|}{|\operatorname{Aut}(S_{1}^{\oplus r})|\cdot|\operatorname{Aut}(L)|},\]
we have
LHS(4.12) \[= \mathbf{v}^{-tb}\sum_{r\geq 0,[M]}\mathbf{v}^{2(s-r)(t-r-a)-s(t-a )}\prod_{j=0}^{r-1}(q^{s}-q^{j})\prod_{j=0}^{s-r-1}(q^{s-r}-q^{j})\frac{| \operatorname{Aut}(S_{2})|\cdot|\operatorname{Aut}(S_{1}^{\oplus r})|}{| \operatorname{Aut}(M)|\cdot|\operatorname{Aut}(S_{1}^{\oplus r})|}\] \[\sum_{[L]\in\mathcal{J}_{t-r}}F_{S_{1}^{\oplus(s-r)},L}^{M}\cdot \sum_{[U]\in\mathcal{J}_{t}}\frac{|\operatorname{Ext}^{1}(L,S_{1}^{\oplus r}) _{U}|}{|\operatorname{Hom}(L,S_{1}^{\oplus r})|}\cdot[M]*[\mathbb{K}_{1}]^{r}.\]
For any \([L]\in\mathcal{J}_{t-r}\), we have \([U]\in\mathcal{J}_{t}\) for any \(U\) such that \(|\operatorname{Ext}^{1}(L,S_{1}^{\oplus r})_{U}|\neq 0\). Then
\[\sum_{[U]\in\mathcal{J}_{t}}\frac{|\operatorname{Ext}^{1}(L,S_{1}^{\oplus r}) _{U}|}{|\operatorname{Hom}(L,S_{1}^{\oplus r})|}=\frac{|\operatorname{Ext}^{1} (L,S_{1}^{\oplus r})|}{|\operatorname{Hom}(L,S_{1}^{\oplus r})|}=q^{-\langle L,S_{1}^{\oplus r}\rangle_{Q}}=q^{-\langle M,S_{1}^{\oplus r}\rangle_{Q}+ \langle S_{1}^{\oplus(s-r)},S_{1}^{\oplus r}\rangle_{Q}}.\]
So
LHS(4.12) \[= \mathbf{v}^{-tb}\sum_{[M],r}\mathbf{v}^{2(s-r)(t-r-a)-s(t-a)} \prod_{j=0}^{r-1}(q^{s}-q^{j})\prod_{j=0}^{s-r-1}(q^{s-r}-q^{j})\frac{| \operatorname{Aut}(S_{2})|\cdot|\operatorname{Aut}(S_{1}^{\oplus t})|}{| \operatorname{Aut}(M)|}\] \[q^{-\langle M,S_{1}^{\oplus r}\rangle_{Q}+\langle S_{1}^{\oplus( s-r)},S_{1}^{\oplus r}\rangle_{Q}}\frac{1}{|\operatorname{Aut}(S_{1}^{\oplus r})|} \sum_{[L]\in\mathcal{J}_{t-r}}F_{S_{1}^{\oplus(s-r)},L}^{M}\cdot[M]*[\mathbb{K} _{1}]^{r}.\]
Similar to [19, Page 38], one can obtain
\[\sum_{[L]\in\mathcal{J}_{t-r}}F_{S_{1}^{\oplus(s-r)},L}^{M}= |\operatorname{Gr}(t-r-w_{M},u_{M}-w_{M})|\] \[= \mathbf{v}^{(u_{M}-(t-r))((t-r)-w_{M})}\begin{bmatrix}u_{M}-w_{M} \\ (t-r)-w_{M}\end{bmatrix}_{v}. \tag{4.13}\]
Here \(\operatorname{Gr}(t-r-w_{M},u_{M}-w_{M})\) is the Grassmannian of \((t-r-w_{M})\)-planes in \((u_{M}-w_{M})\)-space. Furthermore, (4.13) is nonzero only if \(W_{M}\subseteq U_{M}\). So we can assume \([M]\in\mathcal{I}_{s+t-2r}\).
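The point count used in (4.13), namely \(|\operatorname{Gr}(k,n)(\mathbb{F}_{q})|=\mathbf{v}^{k(n-k)}\begin{bmatrix}n\\ k\end{bmatrix}_{\mathbf{v}}\) with \(q=\mathbf{v}^{2}\), can be checked symbolically; the following sketch (Python/sympy, an auxiliary tool) compares it with the usual product formula for the number of \(k\)-planes in \(\mathbb{F}_{q}^{n}\):

```python
from sympy import symbols, cancel

v = symbols('v')
q = v**2

def qint(m):
    return (v**m - v**(-m)) / (v - 1/v)

def qbinom(n, k):                  # balanced quantum binomial of (3.1)
    num, den = 1, 1
    for i in range(k):
        num *= qint(n - i)
        den *= qint(i + 1)
    return num / den

def grassmannian_count(n, k):      # |Gr(k, n)(F_q)| by the standard product formula
    num, den = 1, 1
    for i in range(k):
        num *= q**n - q**i
        den *= q**k - q**i
    return num / den

for n in range(6):
    for k in range(n + 1):
        assert cancel(v**(k*(n-k)) * qbinom(n, k) - grassmannian_count(n, k)) == 0
```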
A direct computation shows
\[|\operatorname{Aut}(S_{2})|=q-1,\qquad|\operatorname{Aut}(S_{1}^{\oplus t})|= \prod_{i=0}^{t-1}(q^{t}-q^{i}),\]
\[\langle S_{1}^{\oplus(s-r)},S_{1}^{\oplus r}\rangle_{Q}=(s-r)r,\qquad\langle M,S _{1}^{\oplus r}\rangle_{Q}=r(s+t-2r)-br.\]
Recall \(q=\mathbf{v}^{2}\). Note that
\[\prod_{j=0}^{r-1}(q^{r}-q^{j}) =\mathbf{v}^{r^{2}+\binom{r}{2}}(\mathbf{v}-\mathbf{v}^{-1})^{r }[r]_{v}^{!},\] \[\prod_{j=0}^{t-1}(q^{t}-q^{j}) =\mathbf{v}^{t^{2}+\binom{t}{2}}(\mathbf{v}-\mathbf{v}^{-1})^{t} [t]_{v}^{!},\] \[\prod_{j=0}^{r-1}(q^{s}-q^{j}) =\mathbf{v}^{rs+\binom{r}{2}}(\mathbf{v}-\mathbf{v}^{-1})^{r}[s]_ {v}[s-1]_{v}\cdots[s-r+1]_{v},\] \[\prod_{j=0}^{s-r-1}(q^{s-r}-q^{j}) =\mathbf{v}^{(s-r)^{2}+\binom{s-r}{2}}(\mathbf{v}-\mathbf{v}^{-1} )^{s-r}[s-r]_{v}^{!}.\]
According to the above computations, the desired formula follows.
### Proof of (4.6)
Note that (4.6) can be written as
\[\sum_{n=0}^{1+a+b}(-1)^{n}[S_{1}]_{\overline{0}}^{(n)}*[S_{2}]*[S_ {1}]_{\overline{a}+\overline{b}}^{(1+a+b-n)} =0, \tag{4.15}\] \[\sum_{n=0}^{1+a+b}(-1)^{n}[S_{1}]_{\overline{1}}^{(n)}*[S_{2}]*[S_ {1}]_{\overline{a}+\overline{b}+\overline{1}}^{(1+a+b-n)} =0. \tag{4.14}\]
In the following, we give a detailed proof of (4.14), and then sketch the proof of (4.15). The proof given here is modified from [15, §7.3.1]. We divide the computation of the LHS of (4.14) into two cases.
_Case (I): \(n\) even._ According to Proposition 4.4 and Lemma 4.1, we have
\[[S_{1}]_{\overline{0}}^{(n)}*[S_{2}]*[S_{1}]_{\overline{a}+ \overline{b}}^{(1+a+b-n)}\] \[= \sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2} \rfloor}\frac{\mathbf{v}^{k(k-1)+m(m+1)-\binom{n-2k}{2}-\binom{1+a+b-n-2m}{2} }\cdot(\mathbf{v}-\mathbf{v}^{-1})^{k+m}}{[n-2k]_{\mathbf{v}}^{!}[1+a+b-n-2m] _{\mathbf{v}}^{!}[2k]_{\mathbf{v}}^{!}[2m]_{\mathbf{v}}^{!}}\] \[\times[(n-2k)S_{1}]*[S_{2}]*[(1+a+b-n-2m)S_{1}]*[\mathbb{E}_{1}]^{k +m}\] \[= \sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2} \rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\sum_{r=0}^{\min\{n-2k,1+a+b-n-2m\}}\sum_{[M] \in\mathcal{I}_{1+a+b-2k-2m-2r}}\] \[\frac{\mathbf{v}^{k(k-1)+m(m+1)-\binom{n-2k}{2}-\binom{1+a+b-n-2m }{2}}}{[n-2k]_{\mathbf{v}}^{!}[1+a+b-n-2m]_{\mathbf{v}}^{!}[2k]_{\mathbf{v}}^ {!}[2m]_{\mathbf{v}}^{!}}(\mathbf{v}-\mathbf{v}^{-1})^{k+m}\mathbf{v}^{\bar{ p}(a,b,r,n-2k,1+a+b-n-2m)}\] \[(\mathbf{v}-\mathbf{v}^{-1})^{2+a+b-2k-2m-r}\frac{[n-2k]_{\mathbf{ v}}^{!}[1+a+b-n-2m]_{\mathbf{v}}^{!}}{[r]_{\mathbf{v}}^{!}}\]
\[\cdot\begin{bmatrix}u_{M}-w_{M}\\ 1+a+b-n-2m-r-w_{M}\end{bmatrix}_{\mathbf{v}}\cdot\frac{[M]}{|\operatorname{Aut}(M )|}*[\mathbb{E}_{1}]^{k+m+r}.\]
This can be simplified to be, for \(n\) even,
\[[S_{1}]_{\overline{0}}^{(n)}*[S_{2}]*[S_{1}]_{\overline{a+b}}^{(1 +a+b-n)}\] \[= \sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2} \rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\sum_{r=0}^{\min\{n-2k,1+a+b-n-2m\}}\sum_{[M ]\in\mathcal{I}_{1+a+b-2k-2m-2r}}\] \[\frac{\mathbf{v}^{\tilde{z}}(\mathbf{v}-\mathbf{v}^{-1})^{2+a+b- k-m-r}}{[2k]_{\mathbf{v}}^{!!}[2m]_{\mathbf{v}}^{!!}[r]_{\mathbf{v}}^{!}} \begin{bmatrix}u_{M}-w_{M}\\ 1+a+b-n-2m-r-w_{M}\end{bmatrix}_{\mathbf{v}}\frac{[M]*[\mathbb{E}_{1}]^{k+m+r}} {|\operatorname{Aut}(M)|}, \tag{4.16}\]
where we denote
\[\tilde{z}=k(k-1) +m(m+1)-\binom{n-2k}{2}-\binom{1+a+b-n-2m}{2}\] \[+\tilde{p}(a,b,r,n-2k,1+a+b-n-2m).\]
_Case (II): n odd._
\[[S_{1}]_{\overline{0}}^{(n)}*[S_{2}]*[S_{1}]_{\overline{a+b}}^{( 1+a+b-n)}\] \[= \sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2} \rfloor}\frac{\mathbf{v}^{k(k+1)+m(m-1)-\binom{n-2k}{2}-\binom{1+a+b-n-2m}{2} }\cdot(\mathbf{v}-\mathbf{v}^{-1})^{k+m}}{[n-2k]_{\mathbf{v}}^{!}[1+a+b-n-2m]_ {\mathbf{v}}^{!}[2k]_{\mathbf{v}}^{!!}[2m]_{\mathbf{v}}^{!!}}\] \[\times[(n-2k)S_{1}]*[S_{2}]*[(1+a+b-n-2m)S_{1}]*[\mathbb{E}_{1}]^ {k+m}\] \[= \sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2} \rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\sum_{r=0}^{\min\{n-2k,1+a+b-n-2m\}}\sum_{ [M]\in\mathcal{I}_{1+a+b-2k-2m-2r}}\] \[\frac{\mathbf{v}^{k(k+1)+m(m-1)-\binom{n-2k}{2}-\binom{1+a+b-n-2m }{2}}}{[n-2k]_{\mathbf{v}}^{!}[1+a+b-n-2m]_{\mathbf{v}}^{!}[2k]_{\mathbf{v}}^ {!!}[2m]_{\mathbf{v}}^{!!}}(\mathbf{v}-\mathbf{v}^{-1})^{k+m}\mathbf{v}^{ \tilde{p}(a,b,r,n-2k,1+a+b-n-2m)}\] \[(\mathbf{v}-\mathbf{v}^{-1})^{2+a+b-2k-2m-r}\frac{[n-2k]_{\mathbf{ v}}^{!}[1+a+b-n-2m]_{\mathbf{v}}^{!}}{[r]_{\mathbf{v}}^{!}}\] \[\cdot\begin{bmatrix}u_{M}-w_{M}\\ 1+a+b-n-2m-r-w_{M}\end{bmatrix}_{\mathbf{v}}\cdot\frac{[M]}{|\operatorname{ Aut}(M)|}*[\mathbb{E}_{1}]^{k+m+r}.\]
This can be simplified to be, for \(n\) odd,
\[[S_{1}]_{\overline{0}}^{(n)}*[S_{2}]*[S_{1}]_{\overline{a+b}}^{(1 +a+b-n)}\] \[= \sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2} \rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\sum_{r=0}^{\min\{n-2k,1+a+b-n-2m\}}\sum_{ [M]\in\mathcal{I}_{1+a+b-2k-2m-2r}}\] \[\frac{\mathbf{v}^{\tilde{z}+2k-2m}(\mathbf{v}-\mathbf{v}^{-1})^{ 2+a+b-k-m-r}}{[2k]_{\mathbf{v}}^{!!}[2m]_{\mathbf{v}}^{!!}[r]_{\mathbf{v}}^{!} }\begin{bmatrix}u_{M}-w_{M}\\ 1+a+b-n-2m-r-w_{M}\end{bmatrix}_{\mathbf{v}}\frac{[M]*[\mathbb{E}_{1}]^{k+m+r }}{|\operatorname{Aut}(M)|}. \tag{4.17}\]
Summing up (4.16) and (4.17) above, we obtain
\[\sum_{n=0}^{a+b+1}(-1)^{n}[S_{1}]_{\overline{0}}^{(n)}*[S_{2}]*[S_{1}]_{ \overline{a+b}}^{(1+a+b-n)} \tag{4.18}\]
\[=\sum_{n=0,2|n}^{a+b+1}\sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2}\rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\sum_{r=0}^{\min\{n-2k,1+a+b-n-2m\}}\sum_{[M]\in\mathcal{I}_{1+a+b-2k-2m-2r}}\] \[\qquad\qquad\frac{\mathbf{v}^{\tilde{z}}(\mathbf{v}-\mathbf{v}^{-1})^{2+a+b-k-m-r}}{[2k]^{!!}_{\mathbf{v}}[2m]^{!!}_{\mathbf{v}}[r]^{!}_{\mathbf{v}}}\begin{bmatrix}u_{M}-w_{M}\\ 1+a+b-n-2m-r-w_{M}\end{bmatrix}_{\mathbf{v}}\frac{[M]*[\mathbb{E}_{1}]^{k+m+r}}{|\operatorname{Aut}(M)|}\] \[-\sum_{n=0,2\nmid n}^{a+b+1}\sum_{k=0}^{\frac{n-1}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2}\rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\sum_{r=0}^{\min\{n-2k,1+a+b-n-2m\}}\sum_{[M]\in\mathcal{I}_{1+a+b-2k-2m-2r}}\] \[\qquad\qquad\frac{\mathbf{v}^{\tilde{z}+2k-2m}(\mathbf{v}-\mathbf{v}^{-1})^{2+a+b-k-m-r}}{[2k]^{!!}_{\mathbf{v}}[2m]^{!!}_{\mathbf{v}}[r]^{!}_{\mathbf{v}}}\begin{bmatrix}u_{M}-w_{M}\\ 1+a+b-n-2m-r-w_{M}\end{bmatrix}_{\mathbf{v}}\frac{[M]*[\mathbb{E}_{1}]^{k+m+r}}{|\operatorname{Aut}(M)|}.\]
Set
\[d=k+m+r.\]
Now we only need to prove that the coefficient of \(\frac{[M]*[\mathbb{E}_{1}]^{d}}{|\operatorname{Aut}(M)|}\) in the RHS of (4.18) is zero, for any given \([M]\in\mathcal{I}_{1+a+b-2d}\) and any \(d\in\mathbb{N}\). For simplicity, we denote by \(u=u_{M}\), \(w=w_{M}\) in the following.
Note the powers of \((\mathbf{v}-\mathbf{v}^{-1})\) in all terms are the same (equals to \(2+a+b-d\)). Denote
\[\widetilde{T}(a,b,d,u,w)=\sum_{n=0,2|n}^{a+b+1}\sum_{k=0}^{\frac{n}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2}\rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\delta\{0\leq r\leq n-2k\}\\ \cdot\frac{\mathbf{v}^{\tilde{z}}}{[2k]^{!!}_{\mathbf{v}}[2m]^{!!}_{\mathbf{v}}[r]^{!}_{\mathbf{v}}}\begin{bmatrix}u-w\\ 1+a+b-n-2m-r-w\end{bmatrix}_{\mathbf{v}}\\ -\sum_{n=0,2\nmid n}^{a+b+1}\sum_{k=0}^{\frac{n-1}{2}}\sum_{m=0}^{\lfloor\frac{a+b+1-n}{2}\rfloor}\mathbf{v}^{-(1+a+b-n-2m)b}\delta\{0\leq r\leq n-2k\}\\ \cdot\frac{\mathbf{v}^{\tilde{z}+2k-2m}}{[2k]^{!!}_{\mathbf{v}}[2m]^{!!}_{\mathbf{v}}[r]^{!}_{\mathbf{v}}}\begin{bmatrix}u-w\\ 1+a+b-n-2m-r-w\end{bmatrix}_{\mathbf{v}}, \tag{4.19}\]
where we set \(\delta\{X\}=1\) if the statement \(X\) holds and \(\delta\{X\}=0\) if \(X\) is false. (Note that \(r\) is interpreted to be \(d-k-m\) in (4.19) and §4.5 below.) Then the coefficient of \(\frac{[M]*[\mathbb{E}_{1}]^{d}}{|\operatorname{Aut}(M)|}\) in the RHS of (4.18) is equal to \((\mathbf{v}-\mathbf{v}^{-1})^{2+a+b-d}\widetilde{T}(a,b,d,u,w)\). So we have the following.
**Lemma 4.5**.: _The identity (4.14) is equivalent to the identity \(\widetilde{T}(a,b,d,u,w)=0\) for any nonnegative integers \(a,b,d,u,w\) subject to the constraints_
\[0\leq d\leq(a+b+1)/2,\quad 0\leq w\leq b,\quad b+1-2d\leq u\leq 1+a+b-2d, \tag{4.20}\]
_where \(u,d\) are not both zero._
In §4.5 below, we shall prove the following result.
**Lemma 4.6**.: _For any nonnegative integers \(a,b,d,u,w\) satisfying the constraint (4.20), we have the following identity_
\[\widetilde{T}(a,b,d,u,w)=0. \tag{4.21}\]
Thanks to Lemma 4.6, the formula (4.14) follows.
The proof of (4.15) is essentially the same as the proof of (4.14), since the differences between \([S]^{(n)}_{\overline{0}}\) and \([S]^{(n)}_{\overline{1}}\) lie merely in the powers of \(\mathbf{v}\). Going through the same computations, we can get \(\widetilde{T_{1}}\) by changing the power of \(\mathbf{v}\) in the first summand from \(\tilde{z}\) to \(\tilde{z}+2k-2m\) and the power of \(\mathbf{v}\) in the second summand from \(\tilde{z}+2k-2m\) to \(\tilde{z}\) in (4.19). The identity \(\widetilde{T_{1}}(a,b,d,u,w)=0\) follows from the identity \(\widetilde{T}(a,b,d,u,w)=0\) by using the same argument as in [15, §8.3], which is omitted here.
### Proof of Lemma 4.6
The proof given here is modified from the one of [15, Proposition 8.1]. It is crucial for our purpose to introduce a new variable
\[\theta=n+m-k-d\]
in place of \(n\) in (4.19). Hence we have
\[\begin{bmatrix}u-w\\ 1+a+b-n-2m-r-w\end{bmatrix}_{\mathbf{v}}=\begin{bmatrix}u-w\\ 1+a+b-2d-\theta-w\end{bmatrix}_{\mathbf{v}}\]
and
\[n=\theta-m+k+d\equiv\theta+r\pmod{2}. \tag{4.22}\]
The condition \(r\leq n-2k\) in \(\widetilde{T}(a,b,d,u,w)\) in (4.19) is transformed into the condition \(\theta\geq 0\).
By a direct computation we can rewrite the power of \(\mathbf{v}\) in (4.19) as
\[\overline{z}= -(1+a+b-n-2m)b+\tilde{z}=\begin{pmatrix}r+1\\ 2\end{pmatrix}-2(k-1)m+\tilde{L}, \tag{4.23}\]
where
\[\tilde{L}=d(d-1)-a\theta+(u+w+\theta-b)(1+a+b-2d-\theta)-uw+\theta^{2}+1.\]
(We do not need the precise formula for \(\tilde{L}\) except noting that \(\tilde{L}\) is independent of \(k,m,r\), and only depends on \(a,b,d,\theta,u,w\).)
By [15, Lemma 8.3], for \(d\geq 1\), the following identity holds
\[\sum_{\begin{subarray}{c}k,m,r\in\mathbb{N}\\ k+m+r=d\end{subarray}}(-1)^{r}\frac{v^{\binom{r+1}{2}-2(k-1)m}}{[r]^{!}[2k]^{!!}[2m]^{!!}}=0. \tag{4.24}\]
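This identity can be spot-checked with exact rational arithmetic (a small Python sketch that evaluates the left-hand side of (4.24) at a rational value of \(v\); it is a check at a point, not a proof):

```python
from fractions import Fraction
from math import comb

def lhs_424(d, v):
    # Left-hand side of (4.24), evaluated exactly at a rational value of v.
    def qint(m):
        return (v**m - v**-m) / (v - v**-1)
    def qfact(r):
        out = Fraction(1)
        for i in range(1, r + 1):
            out *= qint(i)
        return out
    def qdfact(r):                      # [2r]!! = [2][4]...[2r]
        out = Fraction(1)
        for i in range(1, r + 1):
            out *= qint(2 * i)
        return out
    total = Fraction(0)
    for k in range(d + 1):
        for m in range(d + 1 - k):
            r = d - k - m
            total += (-1)**r * v**(comb(r + 1, 2) - 2*(k - 1)*m) / (qfact(r) * qdfact(k) * qdfact(m))
    return total

for d in range(1, 8):
    assert lhs_424(d, Fraction(3, 2)) == 0
```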
Hence, for fixed \(a,b,d,\theta,u,w\), using (4.22)-(4.24), we calculate that the contribution to the coefficient of \(\begin{bmatrix}u-w\\ 1+a+b-2d-\theta-w\end{bmatrix}_{\mathbf{v}}\) in \(\widetilde{T}(a,b,d,u,w)\) in (4.19), for \(d>0\), is equal to
\[(-1)^{\theta}\mathbf{v}^{\tilde{L}}\left(\sum_{\begin{subarray}{c}k,m,r\in\mathbb{N},2|r\\ k+m+r=d\end{subarray}}(-1)^{r}\frac{\mathbf{v}^{\binom{r+1}{2}-2(k-1)m}}{[r]_{\mathbf{v}}^{!}[2k]_{\mathbf{v}}^{!!}[2m]_{\mathbf{v}}^{!!}}+\sum_{\begin{subarray}{c}k,m,r\in\mathbb{N},2\nmid r\\ k+m+r=d\end{subarray}}(-1)^{r}\frac{\mathbf{v}^{\binom{r+1}{2}-2k(m-1)}}{[r]_{\mathbf{v}}^{!}[2k]_{\mathbf{v}}^{!!}[2m]_{\mathbf{v}}^{!!}}\right)\] \[\stackrel{{(*)}}{{=}}(-1)^{\theta}\mathbf{v}^{\tilde{L}}\sum_{\begin{subarray}{c}k,m,r\in\mathbb{N}\\ k+m+r=d\end{subarray}}(-1)^{r}\frac{\mathbf{v}^{\binom{r+1}{2}-2(k-1)m}}{[r]_{\mathbf{v}}^{!}[2k]_{\mathbf{v}}^{!!}[2m]_{\mathbf{v}}^{!!}}=0.\]
Note that the identity \((*)\) above is obtained by switching notation \(k\leftrightarrow m\) in the second summand on the LHS of \((*)\). Therefore, we have obtained that \(\widetilde{T}(a,b,d,u,w)=0\), for \(d>0\).
It remains to determine the contribution of the terms with \(d=0\) to \(\widetilde{T}(a,b,0,u,w)\) in (4.19), for fixed \(a,b,u,w\). Recall from (4.20) that \(u>0\) when \(d=0\). In this case, we have
\(k=m=r=0\), and a direct computation shows that the power \(\bar{z}\) (see (4.23)) can be simplified to be \(\bar{z}=(u+w-b)(1+a+b)-uw+1+(1+2b-u-w)\theta\). Then for \(0<u\leq 1+a+b\), we have
\[\widetilde{T}(a,b,0,u,w)=\mathbf{v}^{(u+w-b)(1+a+b)-uw+1}\sum_{\theta\geq 0}(-1 )^{\theta}\mathbf{v}^{(1+2b-u-w)\theta}\begin{bmatrix}u-w\\ 1+a+b-\theta-w\end{bmatrix}_{\mathbf{v}}.\]
By changing the variable \(1+a+b-\theta-w\) to \(x\), we get
\[\widetilde{T}(a,b,0,u,w)\] \[= (-1)^{1+a+b-w}\mathbf{v}^{(1+b)(1+a+b)-w(1+2b-u-w)-uw+1}\sum_{x \geq 0}(-1)^{x}\mathbf{v}^{(u+w-1-2b)x}\begin{bmatrix}u-w\\ x\end{bmatrix}_{\mathbf{v}}.\]
Using (4.20) and \((u+w-2b-1)\equiv(u-w-1)\pmod{2}\), one can obtain \(\widetilde{T}(a,b,0,u,w)=0\) by [15, Lemma 5.4 (1)]. The proof is completed.
## 5. \(\imath\)Quantum groups and \(\imath\)Hall algebras
Recall \(\mathbb{I}_{\tau}\) from (2.5). Let \(\varsigma=(\varsigma_{i})\in(\mathbb{Q}(\mathbf{v})^{\times})^{\mathbb{I}}\) be such that \(\varsigma_{i}=\varsigma_{\tau i}\) for each \(i\in\mathbb{I}\) which satisfies \(c_{i,\tau i}=0\). The _reduced Hall algebra associated to \((Q,\tau)\)_ (or _reduced \(\imath\)Hall algebra_), denoted by \(\mathcal{SDH}_{\mathrm{red}}(\Lambda^{i})\), is defined to be the quotient \(\mathbb{Q}(\mathbf{v})\)-algebra of \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\) by the ideal generated by the central elements
\[[\mathbb{E}_{i}]+q\varsigma_{i}\ (\forall i\in\mathbb{I}\ \text{with}\ i=\tau i),\ \text{and}\ \ [\mathbb{E}_{i}]*[\mathbb{E}_{\tau i}]-\mathbf{v}^{c_{i,\tau i}}\varsigma_{i} \varsigma_{\tau i}\ (\forall i\in\mathbb{I}\ \text{with}\ i\neq\tau i). \tag{5.1}\]
**Theorem 5.1**.: _Let \((Q,\tau)\) be an arbitrary \(\imath\)quiver. Then there exists a \(\mathbb{Q}(\mathbf{v})\)-algebra embedding_
\[\widetilde{\Psi}:\widetilde{\mathbf{U}}^{\imath}_{|v=\mathbf{v}}\longrightarrow \widetilde{\mathcal{H}}(\mathbf{k}Q,\tau), \tag{5.2}\]
_which sends_
\[B_{i}\mapsto\frac{-1}{q-1}[S_{i}],\ \text{if}\ i\in\mathbb{I}_{\tau}, B_{i}\mapsto\frac{\mathbf{v}}{q-1}[S_{i}],\ \text{if}\ i\notin\mathbb{I}_{\tau}; \tag{5.4}\] \[\widetilde{k}_{j}\mapsto-q^{-1}[\mathbb{E}_{j}],\ \text{if}\ \tau j=j, \widetilde{k}_{j}\mapsto\mathbf{v}^{\frac{-c_{j,\tau j}}{2}}[\mathbb{E}_{j} ],\ \ \ \text{if}\ \tau j\neq j. \tag{5.3}\]
_Moreover, it induces an embedding \(\Psi:\mathbf{U}^{\imath}_{|v=\mathbf{v}}\!\rightarrow\!\mathcal{SDH}_{\mathrm{ red}}(\Lambda^{\imath})\), which sends \(B_{i}\) as in (5.3) and \(k_{j}\mapsto\varsigma_{\tau j}^{-1}\mathbf{v}^{\frac{-c_{j,\tau j}}{2}}[ \mathbb{E}_{j}],\ \text{for}\ j\in\mathbb{I}\backslash\mathbb{I}_{\tau}\)._
Proof.: Recall \([S_{i}]_{\overline{p}}^{(m)}\), for \(i=\tau i\), defined in (4.1)-(4.2) and \([S_{i}]^{(m)}\), for \(i\neq\tau i\), defined in (4.3). For any \(m\in\mathbb{N}\), the map \(\widetilde{\Psi}\) in Theorem 5.1 sends the \(\imath\)divided powers \(B_{i,\overline{p}}^{(m)}\) in (3.10)-(3.11) for \(i=\tau i\) (cf. [15, Lemma 6.3]) and the divided powers \(B_{i}^{(m)}\) in (3.12) for \(i\neq\tau i\) to
\[\widetilde{\Psi}(B_{i,\overline{p}}^{(m)})=\frac{[S_{i}]_{ \overline{p}}^{(m)}}{(1-\mathbf{v}^{2})^{m}}\ \ \text{for}\ \overline{p}\in\mathbb{Z}_{2}, \text{if}\ i=\tau i, \tag{5.6}\] \[\widetilde{\Psi}(B_{i}^{(m)})=\begin{cases}\frac{[S_{i}]_{ \overline{p}}^{(m)}}{(1-\mathbf{v}^{2})^{m}}\ \ \ \text{for}\ i\in\mathbb{I}_{\tau}&\text{if}\ i\neq\tau i.\\ \frac{\mathbf{v}^{m}[S_{i}]_{\overline{p}}^{(m)}}{(\mathbf{v}^{2}-1)^{m}}\ \ \ \text{for}\ i\not\in\mathbb{I}_{\tau}, \end{cases}\ \ \ \ \ \ \ \text{if}\ i\neq\tau i. \tag{5.5}\]
To show that \(\widetilde{\Psi}\) is a homomorphism, we verify that \(\widetilde{\Psi}\) preserves the defining relations (3.13)-(3.17) for \(\widetilde{\mathbf{U}}^{\imath}\). According to [15, Lemma 3.9], the verification of the relations is local and hence is reduced to the rank \(1\) and rank \(2\)\(\imath\)quivers. More precisely, the relation (3.13) follows from [15, Proposition 9.1]. The relation (3.14) is obvious. The relation (3.15) follows from [15, Proposition 9.3] by noting that the same result of [15, Lemma 9.2] also holds for arbitrary quivers; see [19,
Theorem 3.16]. The relation (3.16) follows from [15, Proposition 5.7]. Finally, the relation (3.17) follows from Theorem 4.2 and the same proof of [15, Proposition 9.4].
The injectivity of \(\widetilde{\Psi}\) follows by using the same proof of [15, Theorem 9.6]. The proof is completed.
Let \(\widetilde{\mathcal{CH}}(\mathbf{k}Q,\tau)\) be the \(\mathbb{Q}(\mathbf{v})\)-subalgebra (called the _composition algebra_) of \(\widetilde{\mathcal{H}}(\mathbf{k}Q,\tau)\) generated by \([S_{i}]\) and \([\mathbb{E}_{i}]^{\pm 1}\), for \(i\in\mathbb{I}\). Using the same proof as [15, Corollary 9.9], we have the following corollary.
**Corollary 5.2**.: _Let \((Q,\tau)\) be an arbitrary \(\imath\)quiver. Then there exists an algebra isomorphism \(\widetilde{\Psi}:\widetilde{\mathbf{U}}_{|v=\mathbf{v}}^{\imath}\stackrel{{\cong}}{{\longrightarrow}}\widetilde{\mathcal{CH}}(\mathbf{k}Q,\tau)\) given by (5.3)-(5.4)._
Recall an oriented cycle of \(Q\) is called _minimal_ if it does not contain any proper oriented cycle. A minimal cycle of length \(m\) is called an _\(m\)-cycle_. An \(\imath\)quiver \((Q,\tau)\) is called _virtually acyclic_ if its only possible cycles are \(2\)-cycles formed by arrows between \(i\) and \(\tau i\) for \(\tau i\neq i\in Q_{0}\).
_Remark 5.3_.: Theorem 5.1 and Corollary 5.2 generalize [15, Theorem 7.7, Corollary 9.9] by dropping the assumption of \(\imath\)quivers \((Q,\tau)\) being _virtually acyclic_.
For the braid group symmetries of \(\widetilde{\mathbf{U}}^{\imath}\) and their \(\imath\)Hall algebra realization obtained in [16], by using Theorem 5.1 and Corollary 5.2, one can also drop the assumption of \(\imath\)quivers \((Q,\tau)\) being _virtually acyclic_.
_Remark 5.4_.: Let \(C_{n}\) (\(n\geq 2\)) be the oriented cyclic quiver with \(n\) vertices. With the help of Theorem 5.1, we can use the \(\imath\)Hall algebra of \(C_{n}\) with trivial involution to realize the \(\imath\)quantum groups of \(\widehat{\mathfrak{sl}}_{n}\) and \(\widehat{\mathfrak{gl}}_{n}\), especially for the case \(n=2\) where \((C_{2},\mathrm{Id})\) is not a virtually acyclic \(\imath\)quiver; see [13, Section 10].
|
2305.09434 | Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI
Testing | Mobile apps are indispensable for people's daily life, and automated GUI
(Graphical User Interface) testing is widely used for app quality assurance.
There is a growing interest in using learning-based techniques for automated
GUI testing which aims at generating human-like actions and interactions.
However, the limitations such as low testing coverage, weak generalization, and
heavy reliance on training data, make an urgent need for a more effective
approach to generate human-like actions to thoroughly test mobile apps.
Inspired by the success of the Large Language Model (LLM), e.g., GPT-3 and
ChatGPT, in natural language understanding and question answering, we formulate
the mobile GUI testing problem as a Q&A task. We propose GPTDroid, asking LLM
to chat with the mobile apps by passing the GUI page information to LLM to
elicit testing scripts, and executing them to keep passing the app feedback to
LLM, iterating the whole process. Within it, we extract the static context of
the GUI page and the dynamic context of the iterative testing process, design
prompts for inputting this information to LLM, and develop a neural matching
network to decode the LLM's output into actionable steps to execute the app. We
evaluate GPTDroid on 86 apps from Google Play, and its activity coverage is
71%, with 32% higher than the best baseline, and can detect 36% more bugs with
faster speed than the best baseline. GPTDroid also detects 48 new bugs on the
Google Play with 25 of them being confirmed/fixed. We further summarize the
capabilities of GPTDroid behind the superior performance, including semantic
text input, compound action, long meaningful test trace, and test case
prioritization. | Zhe Liu, Chunyang Chen, Junjie Wang, Mengzhuo Chen, Boyu Wu, Xing Che, Dandan Wang, Qing Wang | 2023-05-16T13:46:52Z | http://arxiv.org/abs/2305.09434v1 | # Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing
###### Abstract.
Mobile apps are indispensable for people's daily life, and automated GUI (Graphical User Interface) testing is widely used for app quality assurance. There is a growing interest in using learning-based techniques for automated GUI testing which aims at generating human-like actions and interactions. However, the limitations such as low testing coverage, weak generalization, and heavy reliance on training data, make an urgent need for a more effective approach to generate human-like actions to thoroughly test mobile apps. Inspired by the success of the Large Language Model (LLM), e.g., GPT-3 and ChatGPT, in natural language understanding and question answering, we formulate the mobile GUI testing problem as a Q&A task. We propose GPTDroid, asking LLM to chat with the mobile apps by passing the GUI page information to LLM to elicit testing scripts, and executing them to keep passing the app feedback to LLM, iterating the whole process. Within it, we extract the static context of the GUI page and the dynamic context of the iterative testing process, design prompts for inputting this information to LLM, and develop a neural matching network to decode the LLM's output into actionable steps to execute the app. We evaluate GPTDroid on 86 apps from Google Play, and its activity coverage is 71%, with 32% higher than the best baseline, and can detect 36% more bugs with faster speed than the best baseline. GPTDroid also detects 48 new bugs on the Google Play with 25 of them being confirmed/fixed. We further summarize the capabilities of GPTDroid behind the superior performance, including semantic text input, compound action, long meaningful test trace, and test case prioritization.
Automated GUI testing, Large language model
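The iterative chat-with-the-app process described in the abstract can be summarized by the following minimal sketch (plain Python; all callables are caller-supplied placeholders, not GPTDroid's actual interfaces):

```python
def gui_testing_loop(extract_gui, build_prompt, query_llm, decode_action, execute, max_steps=100):
    # A sketch of the loop: read the current GUI page (static context), ask the
    # LLM for the next testing step, decode its answer into an executable action,
    # run it, and feed the app's response back as dynamic context.
    history = []
    for _ in range(max_steps):
        page = extract_gui()
        answer = query_llm(build_prompt(page, history))
        action = decode_action(answer, page)
        feedback = execute(action)
        history.append((action, feedback))
        if feedback.get("crash"):
            break                      # a crash trace worth reporting
    return history
```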
2307.07260 | A Dynamic Points Removal Benchmark in Point Cloud Maps | In the field of robotics, the point cloud has become an essential map
representation. From the perspective of downstream tasks like localization and
global path planning, points corresponding to dynamic objects will adversely
affect their performance. Existing methods for removing dynamic points in point
clouds often lack clarity in comparative evaluations and comprehensive
analysis. Therefore, we propose an easy-to-extend unified benchmarking
framework for evaluating techniques for removing dynamic points in maps. It
includes refactored state-of-art methods and novel metrics to analyze the
limitations of these approaches. This enables researchers to dive deep into the
underlying reasons behind these limitations. The benchmark makes use of several
datasets with different sensor types. All the code and datasets related to our
study are publicly available for further development and utilization. | Qingwen Zhang, Daniel Duberg, Ruoyu Geng, Mingkai Jia, Lujia Wang, Patric Jensfelt | 2023-07-14T10:21:26Z | http://arxiv.org/abs/2307.07260v1 | # A Dynamic Points Removal Benchmark in Point Cloud Maps
###### Abstract
In the field of robotics, the point cloud has become an essential map representation. From the perspective of downstream tasks like localization and global path planning, points corresponding to dynamic objects will adversely affect their performance. Existing methods for removing dynamic points in point clouds often lack clarity in comparative evaluations and comprehensive analysis. Therefore, we propose an easy-to-extend unified benchmarking framework for evaluating techniques for removing dynamic points in maps. It includes refactored state-of-art methods and novel metrics to analyze the limitations of these approaches. This enables researchers to dive deep into the underlying reasons behind these limitations. The benchmark makes use of several datasets with different sensor types. All the code and datasets related to our study are publicly available for further development and utilization.
## I Introduction
Point clouds are widely used in robotics, given their effectiveness in facilitating key components such as localization and path planning. Current SLAM (Simultaneous Localization and Mapping) packages [1, 2, 3] fuse data from multiple sensors to obtain the corresponding poses. These poses can be used to integrate point cloud frames into a global map, as shown in Fig. 1.
Removing dynamic points from maps is crucial for accurate representations of the environment. Failing to detect dynamic points while integrating point cloud data can result in the inclusion of ghost points, as illustrated in Fig. 1 yellow part. In the localization task, ghost points may reduce robustness as they introduce ambiguous features or mislead the matching process between the current observation and the global map. For global path planning, the presence of ghost points can lead to suboptimal path selection. If the planning algorithm interprets points corresponding to dynamics as part of the static environment's structure, it will mistake these points as obstacles and classify the region as untraversable, resulting in unnecessarily long path allocation or even failure in path planning.
Various methods have been proposed to tackle the removal of dynamic points, with metrics tailored to showcase the benefits of each approach. For example, Lim _et al._[4] utilize the voxel-wise preservation rate to evaluate their results. However, the existing evaluation metrics neglect the classification accuracy at the sub-voxel scale. Our benchmark adopts a set of new metrics for point-wise evaluation with a unified standard.
Existing methods including [4, 5] are mainly evaluated on SemanticKITTI [6], which solely includes small town scenarios by a single type of LiDAR. Our benchmark performs evaluation on various datasets to analyze robustness towards different scenarios and sensor setups. We also prepared a dataset in a semi-indoor scenario where dynamic objects are moving close to the static structure, and the ego agent is equipped with a sparse LiDAR. For qualitative results, we additionally choose the latest Argoverse 2.0 dataset [7] that contains various streetscapes in big cities and has more dynamic objects compared with SemanticKITTI. These diverse datasets enable a comprehensive assessment of the existing techniques to compare their adaptability to a range of scenarios and sensor configurations.
Based on our benchmarking result, we summarise the strengths and weaknesses of each technique revealed from our proposed metrics, facilitating further development and innovation in the field. For instance, the occupancy mapping approach, Octomap [8], is also frequently used as a dynamic point removal baseline. Guided by the benchmarking result, we demonstrate how we improve its dynamic point removal performance by incorporating ground fitting into the pipeline.
We contribute the benchmark implementation and extended datasets to the research community at [https://github.com/KTH-RPL/DynamicMap_Benchmark](https://github.com/KTH-RPL/DynamicMap_Benchmark).
The main additional contributions include the following:
* Refactoring existing methods to establish a unified benchmark for removing dynamic points in the map.
* Introducing new metrics and evaluating the performance of all methods, detailing the challenges associated with this task.
Fig. 1: Illustration of ghost points resulting from dynamic objects in KITTI sequence 7. The yellow points to the right represent points labeled as belonging to dynamic objects in the dataset. These ghost points negatively affect downstream tasks and overall point cloud quality.
* Introducing an extension of Octomap better adapted to the map clean task.
## II Related work
In the field of dynamic point removal from point clouds, methods can be broadly categorized into two main approaches: learning-based and traditional algorithms. Learning-based methods have become increasingly popular for detecting dynamic points or objects. However, they require training data and a network to learn latent space representations, and often lack explainability. Therefore, this paper focuses on traditional approaches to removing dynamic points. In the sections below, we review both learning-based and traditional methods in detail.
### _Learning-based_
Learning-based methods typically involve deep neural networks and supervised training with labeled datasets. Mersch _et al._[9] employ sparse 4D convolutions to segment receding moving objects in 3D LiDAR data, efficiently processing spatiotemporal information using sparse convolutions. Sun _et al._[10] develop a novel framework for fusing spatial and temporal data from LiDAR sensors, leveraging range and residual images as input to the network. Toyungyernsub _et al._[11] predict urban environment occupancy by considering both spatial and temporal information, incorporating environmental dynamics to improve moving object segmentation performance. Huang _et al._[12] propose a novel method for unsupervised point cloud segmentation by jointly learning the latent space representation and a clustering algorithm using a variational autoencoder. Lastly, Khurana _et al._[13] use differentiable raycasting to render future occupancy predictions into future LiDAR sweep predictions for learning, allowing geometric occupancy maps to clear the motion of the environment from the motion of the ego-vehicle.
However, they share common drawbacks, such as the need for extensive labeled datasets, unbalanced data during training [14], and potential limitations when applied to different sensor types they were not trained on.
### _Traditional Algorithm_
In light of these challenges, our focus shifts towards traditional methods, which typically exhibit greater robustness and flexibility in handling diverse sensor types and data distributions. Various approaches have been proposed, often categorized into ray-casting, visibility-based, and visibility-free.
Occupancy grids, often in the form of Octomap [8], are popular techniques that employ ray casting to update the occupancy value of the grid map space by counting the hits and misses of scans. Additionally, other data structures have been proposed, e.g., by representing the truncated signed distance field (TSDF) [15] instead of occupancy. They rely on the concept of occupancy values or truncated signed distances to detect dynamic points in point clouds. These methods update the values for each voxel, frame by frame, based on the measurements obtained from the sensor. If the values within a voxel deviate significantly from a specified threshold, the points inside that voxel are considered dynamic.
Despite their effectiveness, these methods can be computationally expensive when performing ray-casting steps, leading to the development of visibility-based methods to reduce computational costs. Visibility-based methods assume that if a query point is observed behind a previously acquired point in the map, then the previously acquired point is dynamic. Kim _et al._[5] constructs a static point cloud map using multi-resolution range images based on visibility.
Both ray casting and visibility-based methods suffer from the problems illustrated in Fig. 2. (a) shows that rays are far from the ground, and the angle between the rays and the ground line becomes very small. In such scenarios, ray-based methods update the free value when the rays pass through the area, which may cause some ground points to be incorrectly seen as dynamic. (b) means after accumulating multiple scan frames, noise below the ground in some frames can cause previous regions to be updated as free, mislabeling ground points. (c) illustrates how these methods fail to remove dynamic points when no object is behind them. In this example, only some hits on the big truck will later be cleared by hits on the wall, while others will not. The purple hits will erroneously remain, as no new hits pass through them.
Lim _et al._ observed these limitations in [16] and proposed a novel approach based on the height difference between the raw map and the query. They compare the ratio between the difference in the minimum and maximum z-values in regions between a query scan and the map. If the ratio is larger than a predefined threshold, the region is considered to contain dynamic objects. This approach improved the handling of dynamic objects from unlabeled classes.
We have discussed several traditional methods for removing dynamic points from point clouds. They often involve numerous parameters that need to be tuned. For instance,
Fig. 2: Limitation on ray casting-based and visibility-based methods.
Lim's method [16] requires knowledge of the sensor height, making it highly sensitive to height values. This approach also necessitates tuning the maximum and minimum height ranges, as it cannot handle scenarios such as pedestrians walking under trees in Fig. 3.
This paper remains focused on traditional methods because they do not require the creation of a large labeled dataset or training on various datasets to ensure generalization, as compared to learning-based approaches. Nevertheless, it is still possible to include learning-based methods with the effort dedicated to creating various labeled datasets, training networks, and performing inference under unified setups.
## III Methods
In this section, we provide a summary of the methods [8, 5, 16] included in our benchmarks, discussing their algorithm design and frameworks. We prepare the processed datasets and scripts to extract data from several open datasets, and refactor the methods without ROS (Robot Operating System) for easier benchmarking and faster running speeds.
Guided by our benchmark analysis and addressing the angle problem and sparse points problems in Fig. 2, we adapt Octomap [8] to estimate the ground, followed by the same ray casting process for hit-and-miss detection in non-ground points.
### _Removert_
Kim _et al._[5] propose an offline method that requires a prior raw map, against which each query scan is compared, as shown in Fig. 4. Firstly, they convert the query and the prior raw map point clouds to depth range images using OpenCV [17]. Subsequently, they compute the difference between these two image matrices \(I_{k}^{Q},I_{k}^{M}\) as follows:
\[I_{k}^{\text{Diff}}=I_{k}^{Q}-I_{k}^{M}\,. \tag{1}\]
Finally, the dynamic map points are defined in [5],
\[\mathcal{P}_{k}^{DM}=\{\mathbf{p}_{k,ij}^{M}\in\mathcal{P}_{k}^{M}\mid I_{k,ij}^{\text{Diff}}>\tau_{D}\}, \tag{2}\]
where \(\mathcal{P}_{k}^{M}\) is the raw global point cloud map, \(\mathcal{P}_{k}^{DM}\) is the set of dynamic points, and \(\mathbf{p}_{k,ij}^{M}\) is the set of points in the pixel \((i,j)\), \(\tau_{D}\) is a threshold. Fig. 3(a) illustrates their framework.
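For illustration, the range-image comparison of Eqs. (1)-(2) can be sketched in a few lines of NumPy; the spherical projection model, resolutions, field of view, and threshold below are illustrative assumptions rather than Removert's actual settings, and mapping the flagged pixels back to the map points they contain is left out.

```python
import numpy as np

def range_image(points, h_res=360, v_res=64, v_fov=(-25.0, 3.0)):
    """Project an (N, 3) point cloud into a (v_res, h_res) range image
    (resolution and field of view are illustrative choices)."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])            # azimuth in [-pi, pi]
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))   # elevation
    col = ((yaw + np.pi) / (2 * np.pi) * h_res).astype(int) % h_res
    v_min, v_max = np.radians(v_fov[0]), np.radians(v_fov[1])
    row = np.clip(((pitch - v_min) / (v_max - v_min) * (v_res - 1)).astype(int),
                  0, v_res - 1)
    img = np.full((v_res, h_res), np.inf)
    np.minimum.at(img, (row, col), r)    # keep the closest return per pixel
    return img

def dynamic_pixel_mask(query_pts, map_pts, tau_d=0.5):
    """Eqs. (1)-(2): where the query range exceeds the stored map range by more
    than tau_d, rays now pass through the old map point, so the corresponding
    map points are dynamic candidates."""
    diff = range_image(query_pts) - range_image(map_pts)
    return np.isfinite(diff) & (diff > tau_d)
```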
### _Erasor_
Lim _et al._[16] propose an approach based on the observation that most dynamic objects in urban environments are in contact with the ground. They introduce the novel concept of pseudo-occupancy to represent the occupancy of unit space and discriminate spaces with varying occupancy levels. Subsequently, they determine potential dynamic candidate bins \(\mathcal{P}_{Dynamic}\) based on the height difference between the raw map and query frame, as briefly described in [16]:
\[\Delta h_{(i,j),t}=\sup\left\{Z_{(i,j),t}\right\}-\inf\left\{Z_{(i,j),t}\right\} \tag{3}\]
where \(Z_{(i,j),t}=\{z_{k}\in\mathbf{p}_{k}|\mathbf{p}_{k}\in\mathcal{S}_{(i,j),t}\}\), and \(z\) represents the point's z-value with respect to the sensor origin. \(\sup\) and \(\inf\) denote the highest and lowest point heights in \(Z\), respectively.
The condition for determining potential dynamic candidate bins is:
\[\begin{split}\text{if }\frac{\Delta h_{(i,j),t}^{Query}}{ \Delta h_{(i,j),t}^{Map}}<0.2,\\ \text{then }\mathcal{S}(i,j),t\in\mathcal{P}_{Dynamic}\end{split} \tag{4}\]
Finally, they employ Region-wise Ground Plane Fitting (R-GPF) to distinguish static points from dynamic points within the candidate bins that potentially contain dynamic points. Fig. 3(b) illustrates their framework.
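As a minimal illustration of Eqs. (3)-(4), the bin-wise test can be sketched as follows, assuming the z-values of the query scan and of the raw map have already been grouped into matching \((i,j)\) bins; the binning itself and the R-GPF step that rescues static ground points inside candidate bins are omitted.

```python
import numpy as np

def height_spread(z_values):
    """Delta h of Eq. (3): spread between the highest and lowest point in a bin."""
    return np.max(z_values) - np.min(z_values)

def is_dynamic_candidate(query_bin_z, map_bin_z, ratio=0.2):
    """Eq. (4): if the query's height spread is much smaller than the map's
    (e.g. a parked car has left, leaving only ground hits), the map bin is a
    candidate for containing dynamic points."""
    return height_spread(query_bin_z) < ratio * max(height_spread(map_bin_z), 1e-9)
```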
### _Octomap and Improvement_
Hornung _et al._[8] offer a popular mapping framework to generate volumetric 3D environment models in the robotics field. It is based on octrees and uses probabilistic occupancy
Fig. 4: Framework of different methods. The term “Query Scan” indicates that the data has already been transformed to the world frame with the sensor center pose included.
Fig. 3: Limitations of height-threshold-based methods. (a) When people stand under a tree, using a height threshold \(h_{max}\) typically ignores the highest points that the sensor observed. (b) However, when choosing a threshold \(h_{max}\), larger objects such as a truck may still have remaining points.
estimation. Although it is not initially designed for dynamic point removal, it has frequently been used as a baseline.
First, Octomap rasterizes all points to 3D voxels, where each voxel is a leaf node \(n\). Then, the probability of \(n\) being occupied is updated given the sensor measurements \(z_{1:t}\) according to:
\[\begin{split}& P\left(n\mid z_{1:t}\right)\\ &=\left[1+\frac{1-P\left(n\mid z_{t}\right)}{P\left(n\mid z_{t} \right)}\frac{1-P\left(n\mid z_{1:t-1}\right)}{P\left(n\mid z_{1:t-1}\right)} \frac{P(n)}{1-P(n)}\right]^{-1}\end{split} \tag{5}\]
After updating the whole map with all scan frames, each node in the map will have a final occupancy value. If it exceeds a threshold, we consider the node as a static point. During this process, the occupancy probability of nodes containing dynamic points will decrease as rays pass through these nodes in some frames, reducing their occupancy values.
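Eq. (5) is a standard recursive Bayesian occupancy update; the toy sketch below (with illustrative hit/miss probabilities, not Octomap's actual parameters) shows how a voxel that was hit by a dynamic object loses occupancy once rays repeatedly pass through it.

```python
def update_occupancy(p_prior, p_meas, p_init=0.5):
    """One step of Eq. (5): fuse P(n | z_{1:t-1}) with the measurement P(n | z_t)."""
    odds = ((1 - p_meas) / p_meas) * ((1 - p_prior) / p_prior) * (p_init / (1 - p_init))
    return 1.0 / (1.0 + odds)

P_HIT, P_MISS = 0.7, 0.4          # illustrative sensor model
p = 0.5                           # uniform prior
for meas in [P_HIT, P_HIT] + [P_MISS] * 5:   # two hits, then the object moves away
    p = update_occupancy(p, meas)
print(f"final occupancy: {p:.3f}")   # below 0.5: the voxel is exported as free
```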
However, as mentioned earlier, it is not designed for dynamic point removal tasks, and in Section V, we can observe the challenges mentioned in Section II. Guided by our benchmarking analysis, we will enhance the original Octomap by incorporating noise filtering and ground estimation techniques. The performance differences between our improved Octomap and the original version are examined through ablation studies in Section V, demonstrating the benefits of our modifications.
To minimize the impact of noise and abnormal points, and to reduce the computational burden of our ray casting, we employ the Statistical Outlier Removal (SOR) technique for filtering. Then we perform ground estimation using Sample Consensus (SAC) segmentation [18] on the output point clouds. Lastly, we optimize the process by setting the grid cells occupied by the estimated ground points as free, ensuring no ray casting occurs in these regions. This approach prevents the mislabeling of ground points as dynamic and their subsequent removal from the static map, preserving the integrity of the final representation. We then only integrate and update the octree based on the non-ground points throughout all frames.
When exporting the final map, we use a threshold to query the occupancy grid points and integrate the ground points. This approach ensures that the occupancy values of grid cells containing dynamic points are updated when the dynamic objects move away, and rays pass through the area once more, providing an accurate and efficient representation of the environment.
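A minimal sketch of this pre-processing for a single scan is given below, using Open3D's statistical outlier removal and RANSAC plane segmentation as stand-ins for the SOR and SAC steps (the parameter values are illustrative and this is not the benchmark's implementation); the returned ground points are kept as static and excluded from ray casting, while only the non-ground points update the octree.

```python
import numpy as np
import open3d as o3d

def split_ground(xyz, nb_neighbors=20, std_ratio=2.0,
                 dist_thresh=0.2, ransac_n=3, num_iters=1000):
    """Noise filtering (SOR) followed by a RANSAC ground-plane fit.
    Returns (ground_points, non_ground_points)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=nb_neighbors,
                                            std_ratio=std_ratio)
    _, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                   ransac_n=ransac_n,
                                   num_iterations=num_iters)
    pts = np.asarray(pcd.points)
    ground_mask = np.zeros(len(pts), dtype=bool)
    ground_mask[inliers] = True
    return pts[ground_mask], pts[~ground_mask]
```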
## IV Benchmark Setup
### _Metric_
Although modern datasets provide point-wise labels, most methods downsample the ground truth to the voxel level for evaluation. To provide a more accurate evaluation, we propose a new benchmark based on point-wise assessments.
The map clean task aims for two goals: remove true dynamic points and keep true static points. This process involves maintaining high recall in the classification of both dynamic and static points, often referred to as Dynamic Accuracy (DA%) and Static Accuracy (SA%) respectively.
Additionally, we utilize the Associated Accuracy (AA %) calculated using the geometric mean as a comprehensive metric that combines both accuracies, offering an overall assessment of the algorithm's performance. AA is more sensitive to smaller values compared to the harmonic mean used in the F1 score.
\[AA=\sqrt{SA\times DA}. \tag{6}\]
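A point-wise sketch of the three scores, assuming boolean arrays marking which ground-truth map points are dynamic and which points a method labels as dynamic (array names are illustrative):

```python
import numpy as np

def map_cleaning_scores(gt_dynamic, pred_dynamic):
    """SA: recall on true static points; DA: recall on true dynamic points;
    AA: their geometric mean, Eq. (6).  Returned as percentages."""
    gt_dynamic = np.asarray(gt_dynamic, dtype=bool)
    pred_dynamic = np.asarray(pred_dynamic, dtype=bool)
    sa = np.mean(~pred_dynamic[~gt_dynamic])
    da = np.mean(pred_dynamic[gt_dynamic])
    return 100 * sa, 100 * da, 100 * np.sqrt(sa * da)
```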
There are also distance distribution plots that show the distance from mislabeled points to their nearest correct dynamic points. It serves as a metric to illustrate where errors typically occur, helping researchers identify and address the shortcomings of methods for further improvements.
### _Implementation details_
We conduct experiments on three primary datasets: KITTI, Argoverse 2.0, and a semi-indoor dataset. The first two have their own ground truth pose files, while the semi-indoor dataset poses are obtained using the SLAM package [19]. All datasets are integrated into a unified PCD format containing the point cloud data and the pose. KITTI has ground truth labels for dynamic objects from SemanticKITTI [6]. Part of Argoverse 2.0 and the semi-indoor dataset collected by us have manually labeled dynamic-point ground truth. In the point-wise evaluation, if an algorithm rasterizes the grid, we query all the points in the ground truth to find the corresponding grid cell and label each point as static or dynamic according to the algorithm's output.
## V Benchmark Results
In this section, our benchmark contains both quantitative and qualitative evaluations. We use it to conduct a detailed analysis of the performance of the methods in Section III on various datasets, identify specific failure scenarios for each method, and explain the reasons for these failures in relation to their theoretical foundations. Additionally, we provide a table outlining the time cost and the number of parameters required for tuning to achieve a better static map.
All experiments are conducted on a desktop computer equipped with a 12th Gen Intel(r) Core(tm) i9-12900KF processor featuring 24 cores. The benchmark link includes all the parameters used for the experiments presented in this paper. Methods marked with an asterisk (*) in tables and figures indicate offline methods, which require a raw global point cloud map as a prior for comparison. More detail can be found in Section III and Fig. 4.
### _Quantitative_
In Table I, we present a quantitative comparison of dynamic object removal methods in various scenarios datasets. Removert mostly retains the complete static map but labels only a few correct dynamic points. In contrast, ERASOR performs better in removing dynamic points and balancing
static and dynamic points. The original Octomap suffers from the ground-plane angle problem and from noise points, shown in Section V-B, which cause its SA score to be lower than the others. Octomap w G denotes the method with ground estimation, resulting in a higher score across most of the sequences. By also adding the noise filter (Octomap w GF), we achieve a \(20\%-30\%\) speed-up, as shown in Table II, since the noise points do not undergo the ray casting process.
Considering Table II and Table I, there is a trade-off between speed and performance, as the method with the lowest score on AA achieves the fastest processing time for a single frame. It is important to note that Removert requires multiple resolutions to produce better results, which may increase the processing time depending on the number of resolutions. The speed of ray-based methods like Octomap has the potential for further optimization and improvement.
To support further analysis, Fig. 5 illustrates where errors typically occur. We observe that most of the false negative points (true dynamic points labeled as static) are close to the true positive points (correctly labeled dynamic points), with all methods' false negative points lying \(10\)cm to \(30\)cm away from the true positive points. In such cases, other techniques, such as clustering around the true positive points, can be employed to address this issue. The largest scale difference occurs in Removert, corresponding to the challenges we mentioned earlier in visibility-based methods that involve occlusion behind the true positive points. Techniques that reason about object relationships and establish connections between them may help address this issue more effectively.
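The error-distance statistic behind Fig. 5 can be sketched with a k-d tree query, assuming the same boolean label arrays as above together with an (N, 3) array of point coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def fn_to_tp_distances(points, gt_dynamic, pred_dynamic):
    """Distance from each false-negative point (true dynamic labelled static)
    to its nearest true-positive point (correctly labelled dynamic)."""
    tp = points[gt_dynamic & pred_dynamic]
    fn = points[gt_dynamic & ~pred_dynamic]
    if len(tp) == 0 or len(fn) == 0:
        return np.array([])
    dists, _ = cKDTree(tp).query(fn)
    return dists          # histogramming these reproduces a Fig. 5-style plot
```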
### _Qualitative_
To complement the quantitative results discussed earlier, we present the cleaned maps for the Argoverse 2.0 and semi-indoor datasets, where the ground truth map is marked with yellow points to represent dynamic objects.
Fig. 6 presents the cleaned maps produced by the different methods on one sequence of the Argoverse 2.0 LiDAR dataset. As this dataset contains more recent and challenging scenarios from various US cities, it features many poles and trees that effectively illustrate the disadvantages of each method. As seen in the raw map, there are dynamic cars, cyclists, and pedestrians near the building. Removert retains the most complete static points but fails to remove the points near the object center. ERASOR keeps the cleanest map among all methods but removes tree trunks and the ground near the pedestrian, due to its sensitivity to height and to pavement heights that differ slightly from driving roads. A comparison with the improved Octomap version in Fig. 6 (d) and Fig. 6 (e) demonstrates significant improvements in error reduction for ground points. By incorporating ground estimation, most ground points are preserved, ensuring a more accurate representation of the static environment. As these points are considered definitively static, ray casting is not performed to remove ground points, further enhancing the accuracy of the map. There is room for further improvement by using better ground estimation and clustering techniques to label the missing dynamic points in the map, as discussed in Fig. 5.
Fig. 7 displays the cleaned map for a custom dataset with one VLP-16 LiDAR. As clearly illustrated, the issues discussed in Section II-B are apparent. Removert has difficulty removing points that are behind dynamic obstacles moving around. Without fine-tuning the parameters and using the same settings as for the KITTI dataset, ERASOR fails to remove points higher than the threshold, and the original
\begin{table}
\begin{tabular}{l c c} \hline \hline Methods & Runtime/frame [s] & \# Parameters \\ \hline Removert\({}^{*}\)[5] & **0.044 \(\pm\) 0.002** & 6 \\ ERASOR\({}^{*}\)[16] & 0.718 \(\pm\) 0.039 & 18 \\ Octomap [8] & 2.985 \(\pm\) 0.961 & **5** \\ Octomap w G & 3.054 \(\pm\) 0.966 & 8 \\ Octomap w GF & 2.147 \(\pm\) 0.468 & 10 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Runtime comparison of different methods
Fig. 5: The distribution of distances from false negative points to their nearest true positive points for different algorithms in KITTI sequence 05. The blue dashed line signifies the region containing 10% of the point numbers.
\begin{table}
\begin{tabular}{l|c c c|c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{KITTI sequence 00} & \multicolumn{3}{c|}{KITTI sequence 05} & \multicolumn{3}{c|}{AV2.0 big city} & \multicolumn{3}{c}{Semi-indoor} \\ \hline Methods & SA \(\uparrow\) & DA \(\uparrow\) & AA \(\uparrow\) & SA \(\uparrow\) & DA \(\uparrow\) & AA \(\uparrow\) & SA \(\uparrow\) & DA \(\uparrow\) & AA \(\uparrow\) & SA\(\uparrow\) & DA \(\uparrow\) & AA \(\uparrow\) \\ \hline Removert\({}^{*}\)[5] & 99.44 & 41.53 & 64.26 & 99.42 & 22.28 & 47.06 & 98.97 & 31.16 & 55.53 & 99.96 & 12.15 & 34.85 \\ ERASOR\({}^{*}\)[16] & 66.70 & 98.54 & 81.07 & 69.40 & 99.06 & 82.92 & 77.51 & 99.18 & **87.68** & 94.90 & 66.26 & 79.30 \\ Octomap [8] & 68.05 & 99.69 & 82.37 & 66.28 & 99.24 & 81.10 & 65.91 & 96.70 & 79.84 & 88.97 & 82.18 & **85.51** \\ Octomap w G & 85.92 & 98.88 & 92.17 & 86.15 & 98.46 & 92.10 & 76.38 & 86.26 & 81.17 & 94.95 & 73.95 & 83.80 \\ Octomap w GF & 93.06 & 98.67 & **95.83** & 93.54 & 92.48 & **93.01** & 82.66 & 82.44 & 82.55 & 96.79 & 73.50 & 84.34 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Quantitative comparison of dynamic object removal methods
Figure 6: Qualitative results for the Argoverse 2.0 big city with two VLP-32C LiDARs. The first row displays camera recordings from the cars, the middle row presents the entire map after methods cleaned for this sequence, and the bottom row focuses on a more detailed portion of the map corresponding to the image where cars and cyclists are present in the scene.
Figure 7: Qualitative results for a custom dataset with one sparse VLP-16 LiDAR. Yellow indicates points predicted as dynamic by the algorithm, while the first column shows ground truth labels provided by human annotation.
Octomap exhibits the sparse LiDAR ground problem, leading to many ground points being removed regularly by LiDAR rings. The improved Octomap still requires some fine-tuning of the occupancy probability values, as people are standing in the same place for an extended period, making it challenging to remove them using the default parameters.
## VI Conclusion
In this paper, we conducted a comprehensive review and benchmark of methods for removing dynamic points from point clouds. We refactored three existing methods and contributed them to a unified benchmarking framework. The proposed metric on error distribution offers a novel perspective for analyzing where errors occur and gaining insights for researchers.
In the benchmarking evaluation, we provide detailed analyses of each method's strengths and weaknesses for future researchers. Guided by our evaluation, we also propose a modified Octomap version tailored to this task that first filters noise and estimates ground points. Our analysis suggests there is potential to generalize the methods to similar scenarios while minimizing reliance on parameter tuning and prior knowledge, as well as to accelerate the algorithms for efficient execution.
The future direction of this benchmark extends beyond merely removing dynamic points to encompass generating labels in perception datasets, as demonstrated in studies such as [20], or performing real-time detection in point clouds, as illustrated by [21].
In conclusion, we hope this benchmark, open-source code, and dataset will serve as valuable resources for researchers and practitioners in this field, fostering further advancements and innovations in point cloud processing.
## Acknowledgement
Thanks to RPL's members: Yi Yang, and HKUST Ramlab's members: Bowen Yang, Jin Wu, and Yingbing Chen, who gave constructive comments on this work. Thanks to Shenzhen Unity-Drive Inc. for providing essential experimental devices and services for this work. We also thank the anonymous reviewers for their constructive comments.
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
|
2303.15393 | Probing chaos in the spherical p-spin glass model | We study the dynamics of a quantum $p$-spin glass model starting from initial
states defined in microcanonical shells, in a classical regime. We compute
different chaos estimators, such as the Lyapunov exponent and the
Kolmogorov-Sinai entropy, and find a marked maximum as a function of the energy
of the initial state. By studying the relaxation dynamics and the properties of
the energy landscape we show that the maximal chaos emerges in correspondence
with the fastest spin relaxation and the maximum complexity, thus suggesting a
qualitative picture where chaos emerges as the trajectories are scattered over
the exponentially many saddles of the underlying landscape. We also observe
hints of ergodicity breaking at low energies, indicated by the correlation
function and a maximum of the fidelity susceptibility. | Lorenzo Correale, Anatoli Polkovnikov, Marco Schirò, Alessandro Silva | 2023-03-27T17:11:31Z | http://arxiv.org/abs/2303.15393v3 | **Probing semi-classical chaos in the spherical p-spin glass model**
###### Abstract
**We study semiclassically the dynamics of a quantum \(p\)-spin glass model starting from initial states defined in microcanonical shells. We compute different chaos estimators, such as the Lyapunov exponent and the Kolmogorov-Sinai entropy, and find a marked maximum as a function of the energy of the initial state. By studying the relaxation dynamics and the properties of the energy landscape we show that the maximal chaos emerges in correspondence with the fastest spin relaxation and the maximum complexity, thus suggesting a qualitative picture where chaos emerges as the trajectories are scattered over the exponentially many saddles of the underlying landscape. We also observe hints of ergodicity breaking at low energies, indicated by the correlation function and a maximum of the fidelity susceptibility.**
###### Contents
* 1 Introduction
* 2 Semi-classical dynamics of the p-spin glass spherical model
* 3 Results: Chaos Estimators
* 4 Chaos, Ergodicity and Energy Landscape
* 5 Conclusions
* A The Truncated Wigner approximation for the p-spin spherical model
* B Comparison with exponential growth of the OTOC
* B.1 Classical chaos in dynamical systems
* B.2 The exponential growth of the OTOC
* B.3 The exponential growth of the distance
* C Correlation function on a longer time-scale
* D Details on the fidelity susceptibility
* E Calculation of the complexity
## 1 Introduction
Characterizing the dynamics of a system with multiple equilibrium configurations is a challenging problem encompassing several branches of the natural sciences, from biology [1] to economics [2] and theoretical ecology [3]. The nontrivial interplay between stable and unstable configurations makes the dynamical evolution of these systems extremely sensitive to the initial conditions and thus unpredictable: since the pioneering works of the 19th [4, 5] and 20th centuries [6, 7], this phenomenon has been known as _chaos_.
Among the physical systems displaying multiple equilibria, a prominent role is played by spin-glasses, a class of models originally introduced to describe magnetic alloys [8, 9]: at low temperature, the dynamics is trapped in one of the exponentially many possible metastable states [10, 11, 12, 13] for a long time and explores the phase space through rare, activated jumps between different states [14, 15, 16], leading to ergodicity-breaking phenomena [17, 18, 19, 20]. The inclusion of quantum fluctuations usually opens up new routes for thermalization in the equilibrium dynamics due to tunnelling between metastable states [21, 22, 23, 24], even though counter-intuitive effects of quantum fluctuations inducing glassiness have been found in quantum quench protocols [25] or, more recently, in the presence of more complex energy landscapes [26].
Within such a rich variety of phenomena, the recent theoretical [27, 28, 29, 30, 31] and experimental [32] observation of many-body chaos in quantum spin-glasses has drawn particular attention. Specifically, in Refs. [29, 30] chaos was detected in the dynamics of a quantum spherical \(p\)-spin glass model [23], from the exponential growth of an out-of-time-order correlator (OTOC) [33, 34, 35, 36, 37, 38, 39]: the corresponding _Lyapunov exponent_\(\lambda_{L}\) exhibits a single broad peak at a temperature scale where the dynamics is still ergodic, but at the onset of slow relaxation.
The behaviour of the Lyapunov exponent \(\lambda_{L}\) extracted from the OTOCs was connected in Ref. [29] to a dynamical crossover between two-step and one-step time relaxation in the spin-spin correlation functions. This behaviour suggests a deeper connection between this dynamical feature and the structure of the underlying stationary configurations of the model. In particular, if such a relation exists, features associated with the peak should also appear in other chaos indicators. Among these indicators, a semi-classical example is the _Kolmogorov-Sinai_ (KS) _entropy_ [40, 41, 42], that is, the sum of all the positive Lyapunov exponents describing the growth of the sub-leading terms appearing either in the OTOC for quantum systems [43, 44] or in the distance between nearby trajectories for classical chaotic systems [45]. The KS entropy would indeed provide deeper insight into the strength of chaos and into the semi-classical entanglement production in a wide range of quantum systems [46, 47, 48, 44, 43]. A second, more genuinely quantum indicator, focusing on the transition from integrability to chaos in quantum systems, is the _fidelity susceptibility_ [49, 50]. Such a quantity, despite being closely connected to the magnetic response obtained in quantum spin-glass experiments [21], has never been investigated in the context of spin-glasses, and any possible connection with \(\lambda_{L}\) in a generic system remains unknown.
The purpose of this work is to develop a qualitative understanding of chaotic behavior in the p-spin model by approaching it from a semi-classical perspective, using phase space methods to approximate the unitary dynamics and computing \(\lambda_{L}\) at fixed energy density. We find that \(\lambda_{L}\) exhibits a broad peak around \(E=0\), at the onset of slow dynamics, and vanishes both at low and high energies, compatibly with previous studies. Working at fixed \(E\) we observe that the maximum in \(\lambda_{L}\) occurs when the energy landscape has the maximum possible number of stationary points, as characterized by the complexity (see Ref. [51]). We find that the profile of the Kolmogorov-Sinai entropy is qualitatively identical to that of \(\lambda_{L}\). To further investigate the chaotic features of our model, we also introduce the semi-classical limit of the fidelity susceptibility \(\chi\) in the ergodic phase finding that, as a function of \(E\), it has a single peak close to the boundary between the ergodic and non-ergodic phases. Our results can be easily interpreted in terms of the typical behaviour of the underlying trajectories at different energy scales, which in turn determine the profile of the correlation function: while the low and high-energy limits respectively correspond to trajectories performing small oscillations around a local minimum or a uniform circular motion on the \(N\)-sphere, for intermediate energies the nontrivial interplay between the saddles of the landscape enhance dynamical chaos. Our method is straightforwardly generalized to other spin-glass models, for example describing spins \(1/2\) in a transverse field [28], where a similar behaviour for the trajectories is expected both for the low and high-energy limit.
## 2 Semi-classical dynamics of the p-spin glass spherical model
Throughout this work, we will focus on the unitary dynamics of the \(p\)-spin glass Spherical Model (PSM), whose Hamiltonian
\[\hat{H}_{J}=\frac{1}{2M}\sum_{i=1}^{N}\hat{\Pi}_{i}^{2}+V_{J}(\boldsymbol{ \hat{\sigma}}), \tag{1}\]
with
\[V_{J}(\boldsymbol{\hat{\sigma}})=-\sum_{1\leq i_{1}<\cdots<i_{p}\leq N}J_{i_{ 1},\ldots,i_{p}}\hat{\sigma}_{i_{1}}\cdots\hat{\sigma}_{i_{p}}\, \tag{2}\]
describes a system of \(N\) spins interacting through random, all-to-all couplings \(J_{i_{1},\ldots,i_{p}}\), independently sampled from a Gaussian distribution with zero mean and variance \(\overline{J^{2}}=2p!/N^{p-1}\). Here the spins \(\hat{\sigma}_{i}\) are treated as continuous variables, obeying the spherical constraint \(\sum_{i}\left\langle\hat{\sigma}_{i}^{2}\right\rangle=N\)[52], and quantum fluctuations are implemented by the canonical quantization relations
\([\hat{\sigma}_{i},\hat{\Pi}_{j}]=i\hbar\delta_{ij}\). The model exhibits a thermodynamic phase transition between a paramagnet and a spin-glass state at a temperature \(T_{c}(\hbar)\). The transition is either of the first or second order depending on the strength of quantum fluctuations [23, 24]. At temperature \(T_{d}(\hbar)\gtrsim T_{c}(\hbar)\), a dynamical, ergodicity-breaking transition is also observed [22], whose properties are sensitive to quenches in \(\hbar\) in a non-trivial way [25]. In refs. [29, 30], many-body chaos in the quantum PSM was studied using an OTOC in the large-\(N\) limit and for a unitary evolution of the system from a thermal initial state. The OTOC grows exponentially at any temperature \(T\), with a _quantum Lyapunov exponent_\(\lambda_{L}(T)\), which displays qualitatively the same profile for a wide range of fixed values of \(\hbar\) and in particular in the classical limit \(\hbar\to 0\), having a single maximum at \(T_{m}(\hbar)>T_{d}(\hbar)\) and vanishing in the low and high-temperature limits.
The aim of this work is to gain physical insight on the behaviour of \(\lambda_{L}\) investigating the dynamics of the PSM in the classical limit \(\hbar\to 0\) and at fixed energy-density \(E\), a framework usually used for classical dynamical systems [42]. This goal can be achieved in the framework of Truncated Wigner Approximation (TWA) [53, 54], briefly reviewed in the following (see Refs. [55, 56] for further details). We begin by observing that, for a generic system with \(N\)-dimensional position and momentum-like degrees of freedom, like \(\boldsymbol{\sigma}\) and \(\boldsymbol{\Pi}\), the Heisenberg dynamics from an initial state \(\hat{\rho}\) of any operator \(\hat{O}=\mathcal{O}(\boldsymbol{\hat{\sigma}},\hat{\boldsymbol{\Pi}})\) can be represented as
\[\langle\hat{O}(t)\rangle=\mathrm{Tr}[\hat{\rho}\ \hat{O}]=\int\frac{d \boldsymbol{\sigma}d\boldsymbol{\Pi}}{(2\pi\hbar)^{N}}W_{\rho}(\boldsymbol{ \sigma},\boldsymbol{\Pi})O_{W}(\boldsymbol{\sigma},\boldsymbol{\Pi},t)\, \tag{3}\]
where
\[O_{W}(\boldsymbol{\sigma},\boldsymbol{\Pi},t)=\int d\boldsymbol{\xi}\Big{\langle} \boldsymbol{\sigma}-\frac{\boldsymbol{\xi}}{2}\Big{|}\mathcal{O}(\boldsymbol {\hat{\sigma}}(t),\boldsymbol{\hat{\Pi}}(t))\Big{|}\boldsymbol{\sigma}+\frac{ \boldsymbol{\xi}}{2}\Big{\rangle}\exp\Big{[}i\frac{\boldsymbol{\Pi}\cdot \boldsymbol{\xi}}{\hbar}\Big{]}, \tag{4}\]
is called _Weyl symbol_ of the operator \(\hat{O}\), while the _Wigner function_\(W_{\rho}(\boldsymbol{\sigma},\boldsymbol{\Pi})\) is the Weyl symbol of the density matrix \(\hat{\rho}\). The TWA is then a classical approximation for the dynamics of \(O_{W}(\boldsymbol{\sigma},\boldsymbol{\Pi},t)\), summarized as
\[O_{W}(\boldsymbol{\sigma},\boldsymbol{\Pi},t)\simeq\mathcal{O}(\boldsymbol{ \sigma}(t),\boldsymbol{\Pi}(t)) \tag{5}\]
where \((\boldsymbol{\sigma}(t),\boldsymbol{\Pi}(t))\) is the classical trajectory evolving from the initial condition \((\boldsymbol{\sigma},\boldsymbol{\Pi})\) and determined by the following Hamilton equations:
\[\left\{\begin{aligned} \partial_{t}\sigma_{i}&=\Pi_{i}/M \\ \partial_{t}\Pi_{i}&=-\sum_{j=1}^{N}\big{(}\delta_{ij} -\frac{\sigma_{i}\sigma_{j}}{N}\big{)}\frac{\partial V_{J}}{\partial\sigma_{j} }-\frac{\sum_{i}\Pi_{i}^{2}}{MN}\sigma_{i}\,\end{aligned}\right. \tag{6}\]
The eqs. (6) are essentially derived by adding to the Hamiltonian a Lagrange multiplier term \(z(t)(\boldsymbol{\sigma}^{2}-N)\), where \(z(t)\) is determined in a self-consistent way so that the dynamics satisfies the spherical constraint at any time \(t\) (see Appendix A for more details). The TWA is believed to be valid at least up to an Ehrenfest time \(t_{Ehr}\sim\log\hbar^{-1}\)[33] diverging for small-\(\hbar\).
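To make eq. (6) concrete for \(p=3\), a minimal NumPy sketch of the constrained equations of motion is given below; this is not the code used for the simulations in this work, and the dense \(O(N^{3})\) storage of the symmetrised coupling tensor is an illustrative choice.

```python
import numpy as np
from math import factorial
from itertools import combinations

def sample_couplings(N, p=3, seed=0):
    """Gaussian couplings with variance 2 p!/N^(p-1), as stated in the text,
    stored as a fully symmetrised dense tensor (illustrative for small N)."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(2 * factorial(p) / N ** (p - 1))
    J = np.zeros((N, N, N))
    for i, j, k in combinations(range(N), 3):
        v = rng.normal(0.0, std)
        for perm in ((i, j, k), (i, k, j), (j, i, k), (j, k, i), (k, i, j), (k, j, i)):
            J[perm] = v
    return J

def psm_rhs(sigma, pi, J, M=1.0):
    """Right-hand side of eq. (6): force projected tangentially to the sphere,
    plus the centripetal term enforcing sum_i sigma_i^2 = N."""
    N = sigma.size
    gradV = -0.5 * np.einsum('ijk,j,k->i', J, sigma, sigma)   # dV/dsigma for p = 3
    dsigma = pi / M
    force = -(gradV - sigma * np.dot(sigma, gradV) / N)       # remove radial component
    dpi = force - (np.dot(pi, pi) / (M * N)) * sigma
    return dsigma, dpi
```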
To investigate the dynamics at fixed energy density \(E\), the ideal setup would be to fix \(\hat{\rho}\) as a micro-canonical state and compute the corresponding Wigner function, which is in general a formidable task. Instead, we fix an initial condition
\[\hat{\rho}=\frac{1}{\mathcal{N}_{s}}\sum_{l=1}^{\mathcal{N}_{s}}|\boldsymbol{ \alpha}^{(l)}\rangle\left\langle\boldsymbol{\alpha}^{(l)}\right|\, \tag{7}\]
as an ensemble of coherent states \(|\mathbf{\alpha}^{(l)}\rangle=\otimes_{j=1}^{N}|\alpha_{j}^{(l)}\rangle\)[57], defined as eigenstates of the operators \(\hat{a}_{j}=\hat{\sigma}_{j}/(\sqrt{2}l)+il\hat{\Pi}_{j}/(\sqrt{2}\hbar)\), for \(j=1,\ldots,N\) and a free parameter \(l\), with eigenvalues \(\mathbf{\alpha}^{(l)}=\{\alpha_{1}^{(l)},\ldots,\alpha_{N}^{(l)}\}\). For each state \(|\mathbf{\alpha}\rangle\), the Wigner function is the Gaussian wave-packet
\[W_{\mathbf{\alpha}}(\mathbf{\sigma},\mathbf{\Pi})=2^{N}\prod_{j}\exp\Big{\{}-\frac{(\sigma _{j}-\sigma_{\mathbf{\alpha}j})^{2}}{l^{2}}-\frac{l^{2}(\Pi_{j}-\Pi_{\mathbf{\alpha}j} )^{2}}{\hbar^{2}}\Big{\}}\, \tag{8}\]
In the following, we fix \(l=\sqrt{\hbar}\), to have the same uncertainty in the variables \(\mathbf{\sigma}\) and \(\mathbf{\Pi}\). The centers \((\mathbf{\sigma}_{\mathbf{\alpha}},\mathbf{\Pi}_{\mathbf{\alpha}})\) of the various wave-packets are chosen in a way that \(\langle\alpha|\hat{H}_{J}|\alpha\rangle\,/N=E\). For the small values of \(\hbar\) we are interested in, such choice is practically equivalent to fix the classical energy density
\[\frac{1}{N}\mathcal{H}_{cl}(\mathbf{\sigma},\mathbf{\Pi})=\frac{\mathbf{\Pi}^{2}}{2MN}+ \frac{V_{J}(\mathbf{\sigma})}{N}\simeq E\, \tag{9}\]
so that we can determine the centers of the wave-packets in Eq. (8) using the following "classical annealing" algorithm (a minimal code sketch is given after the list):
1. first extract a random configuration \((\mathbf{\sigma}_{0},\mathbf{\Pi}_{0})\) in phase space, with \(\mathbf{\sigma}_{0}\) uniformly sampled on the \(N\)-sphere and \(\mathbf{\Pi}_{0}\) sampled from the normal distribution with zero mean and unit variance.
2. to bring the system to a configuration at the desired energy density \(E\), we integrate the dynamics starting from \((\mathbf{\sigma}_{0},\mathbf{\Pi}_{0})\) and defined by the equations \[\begin{cases}\partial_{t}\sigma_{i}=\Pi_{i}/M\\ \partial_{t}\Pi_{i}=-\sum_{j=1}^{N}\big{(}\delta_{ij}-\frac{\sigma_{i}\sigma_{j}}{N}\big{)}\big{(}\frac{\partial V_{J}}{\partial\sigma_{j}}+\gamma\Pi_{j}\big{)}-\frac{\sum_{i}\Pi_{i}^{2}}{MN}\sigma_{i}\,\end{cases}\] (10) where a dissipative term of strength \(\gamma\) has been added. Notice that \(\gamma>0\) if \(\mathcal{H}_{cl}(\mathbf{\sigma}_{0},\mathbf{\Pi}_{0})>NE\) and \(\gamma<0\) otherwise.
3. finally stop the integration as soon as the system reaches a configuration \((\mathbf{\sigma}_{1},\mathbf{\Pi}_{1})\) such that \(\mathcal{H}_{cl}(\mathbf{\sigma}_{1},\mathbf{\Pi}_{1})=NE\). Afterwards we may set \(\gamma=0\) and integrate the Hamilton dynamics (eq. (6)) from \((\mathbf{\sigma}_{1},\mathbf{\Pi}_{1})\) for a time \(t_{eq}\), to let the system reach a typical configuration on the corresponding classical microcanonical manifold, which we finally take as the center \((\mathbf{\sigma}_{\mathbf{\alpha}},\mathbf{\Pi}_{\mathbf{\alpha}})\) of the wave-packet in eq. (4). Throughout the rest of this work, we fix \(\gamma=0.5\) and \(t_{eq}=5\).
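A minimal sketch of the annealing procedure above (an illustrative symplectic-Euler integration of eq. (10) for \(p=3\); the final relaxation at \(\gamma=0\) for a time \(t_{eq}\) and the averaging over wave-packets are omitted):

```python
import numpy as np

def classical_anneal(sigma, pi, J, E_target, gamma=0.5, dt=1e-3, M=1.0):
    """Integrate the damped dynamics of eq. (10) until the classical energy
    density of eq. (9) crosses E_target, then return the configuration."""
    N = sigma.size

    def gradV(s):
        return -0.5 * np.einsum('ijk,j,k->i', J, s, s)        # p = 3

    def e_density(s, q):
        V = -np.einsum('ijk,i,j,k', J, s, s, s) / 6.0
        return (0.5 * np.dot(q, q) / M + V) / N

    # damp (gamma > 0) if we start above the target energy, pump otherwise
    g = gamma if e_density(sigma, pi) > E_target else -gamma
    while (e_density(sigma, pi) - E_target) * np.sign(g) > 0:
        drive = gradV(sigma) + g * pi
        force = -(drive - sigma * np.dot(sigma, drive) / N)    # tangential projection
        pi = pi + dt * (force - (np.dot(pi, pi) / (M * N)) * sigma)
        sigma = sigma + dt * pi / M
    return sigma, pi

# step 1 (illustrative): random start on the N-sphere with Gaussian momenta
# rng = np.random.default_rng(1); N = 100
# sigma0 = rng.normal(size=N); sigma0 *= np.sqrt(N) / np.linalg.norm(sigma0)
# pi0 = rng.normal(size=N)
# sigma1, pi1 = classical_anneal(sigma0, pi0, J, E_target=0.0)  # J as in the sketch above
```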
In practice, the Wigner function for the initial state in eq. (7) is obtained taking the average over \(\mathcal{N}_{s}\) different wave-packets, sampled from the classical annealing algorithm for the same fixed configuration of the disorder \(\{J_{i_{1},\ldots,i_{p}}\}\), as
\[W_{\rho}(\mathbf{\sigma},\mathbf{\Pi})=\frac{1}{\mathcal{N}_{s}}\sum_{l=1}^{\mathcal{N }_{s}}W_{\mathbf{\alpha}^{(l)}}(\mathbf{\sigma},\mathbf{\Pi})\, \tag{11}\]
We repeat the algorithm for each of the \(\mathcal{N}_{s}\) states: the resulting set of points \(\{(\mathbf{\sigma}_{c}^{(l)},\mathbf{\Pi}_{c}^{(l)})\}_{l=1\ldots\mathcal{N}_{s}}\) is then a non-uniform sampling of the classical microcanonical manifold at energy density \(E\).
The presence of small fluctuations enables us to sample orbits from the neighboring of each configuration \((\boldsymbol{\sigma}_{\boldsymbol{\alpha}},\boldsymbol{\Pi}_{\boldsymbol{\alpha}})\), a feature which we use in Section 3 to investigate chaos in the PSM. We compute all the observables we are interested in by averaging over an ensemble of trajectories, evolving according to eq. (6) from an initial condition \((\boldsymbol{\sigma},\boldsymbol{\Pi})\) sampled from the distribution in eq. (11). In the rest of this work, we will denote by \(\langle\;\;\boldsymbol{\cdot}\;\;\rangle\) the average over the sampled trajectories and by \(\overline{(\;\;\boldsymbol{\cdot}\;\;)}\) the one over the disorder configurations \(\{J_{i_{1}\ldots i_{p}}\}\). We fix \(\hbar=10^{-8}\), \(N=100\) and we focus on the paradigmatic case of \(p=3\).
We conclude this section with two technical remarks. First, within the classical annealing algorithm described above we are able to sample arbitrarily high, but not arbitrarily low, energies. In particular, in the PSM the minima of the potential \(V_{J}(\boldsymbol{\sigma})\) lie below a typical energy scale \(E_{th}=-\sqrt{2(p-1)/p}\)[10] (see also Sec. 4 and Appendix E): by classical annealing we cannot explore phase space configurations with \(E<E_{th}\), as the dissipative dynamics of eq. (10) gets stuck in a neighborhood of the first local minimum encountered. The second remark is that the configurations \(\boldsymbol{\sigma}\) extracted from the distribution in eq. (11) are not restricted to the \(N\)-sphere. While this could in general lead to incorrect results for arbitrarily large fluctuations, in the \(\hbar\to 0\) limit it is sufficient that the center \(\boldsymbol{\sigma}_{c}\) of each wave-packet \(W_{\boldsymbol{\alpha}^{(l)}}(\boldsymbol{\sigma},\boldsymbol{\Pi})\) lies on the \(N\)-sphere, a condition always satisfied within classical annealing.
Figure 1: **(Left)** Time-evolution of the log-average over the disorder of the distance \(d_{J}(t)\) in eq. (12), reported for few paradigmatic energy densities. The average over the initial condition is performed extracting 25 random configurations from each of the \(\mathcal{N}_{s}=20\) wave-packets, defined in eq. (8) and obtained through the classical annealing algorithm. The dynamics is integrated up to a time \(t_{max}=80\). The log-average over the disorder is taken over \(\mathcal{N}_{d}=30\) configurations. **(Right)** Lyapunov exponent \(\lambda_{L}\), defined in eq. (13), obtained through a linear fit of \(\overline{\log d_{J}(t)}\) over a time window \([t_{I},t_{F}]\), defined in such a way that, for each \(E\), \(t_{I}\) is beyond the oscillations at early times displayed by \(\overline{\log d_{J}(t)}\) and \(t_{F}\) is smaller than the saturation time-scale.
## 3 Results: Chaos Estimators
We present here our results for the chaos estimators of the PSM, starting from the Lyapunov exponent \(\lambda_{L}\). In principle, \(\lambda_{L}\) can be obtained as a function of \(E\) by computing the Weyl symbol, defined in eq. (4), of the OTOC [28]. However, in the \(\hbar\to 0\) limit we are interested in, \(\lambda_{L}\) becomes the exponential rate of divergence of pairs of nearby orbits of the emerging classical dynamics1 and can be computed from
Footnote 1: It is important to notice that the classical limit of \(\lambda_{L}\) we are investigating corresponds to _twice_ the classical Lyapunov exponent \(\lambda_{cl}\), [38], as the semi-classical limit of the OTOC is actually the square of a typical distance between the underlying trajectories (see also Appendix B.1).
\[d_{J}(t)=\frac{1}{2N\hbar}\frac{1}{\mathcal{N}_{s}}\sum_{\alpha=1}^{\mathcal{ N}_{s}}\sum_{i=1}^{2N}\left\langle\alpha\right|\left[\hat{z}_{i}(t)-\left\langle \alpha\right|\hat{z}_{i}(t)\left|\alpha\right\rangle\right]^{2}\left|\alpha \right\rangle\, \tag{12}\]
The quantity \(d_{J}(t)\) is easier to compute than the OTOC in the TWA framework, and in Appendix B, we show explicitly that both \(d_{J}(t)\) and the OTOC grow exponentially with the same rate in the classical limit. To get rid of the _small time_ scale fluctuations appearing in the dynamics of \(\log d_{J}(t)\), we compute its average \(\overline{\log d_{J}(t)}\) over the disorder, and retrieve a smooth linear growth
\[\overline{\log d_{J}(t)}\sim\lambda_{L}t \tag{13}\]
on intermediate time scales, as shown in Fig. 1 (left). The corresponding Lyapunov exponent \(\lambda_{L}\), shown in Fig. 1 (right) against the energy density \(E\), has a clear peak close to \(E=0\), while it asymptotically vanishes at low and high energies. In Refs. [29, 30], it was shown that \(\lambda_{L}\) has the same qualitative profile as a function of \(T\), displaying for small \(\hbar\) a single maximum around \(T_{m}(\hbar)\simeq 1\): this maximum is consistent with the one we find at \(E=0\), since in the paramagnetic phase the classical energy density \(E\) and the temperature \(T\) are related by the equation \(E=T/2-1/(2T)\) (as shown in Ref. [58]). In summary, the exponent \(\lambda_{L}\) we calculate is essentially classical, and the introduction of small fluctuations in the initial conditions is merely a tool we use to sample nearby trajectories starting from the same wave-packet. It is worth noting that while we focus on quantum fluctuations, small thermal ones are expected to behave similarly.
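In practice, the classical limit of eq. (12) can be estimated from a bundle of trajectories integrated with eq. (6) from initial conditions drawn from a single wave-packet of eq. (8). A minimal sketch is given below; the averages over wave-packets and over the disorder entering eq. (13), as well as the choice of the fitting window, are illustrative assumptions.

```python
import numpy as np

def lyapunov_from_bundle(z_traj, t, hbar=1e-8, t_fit=(2.0, 8.0)):
    """z_traj: (n_samples, n_times, 2N) phase-space trajectories (sigma, Pi)
    sampled from one wave-packet.  The classical limit of eq. (12) is the
    spread of the bundle; lambda_L is the slope of its logarithm over the
    linear-growth window t_fit (cf. Fig. 1, left)."""
    twoN = z_traj.shape[-1]
    spread = np.var(z_traj, axis=0).sum(axis=1)   # sum over the 2N coordinates
    d = spread / (twoN * hbar)                    # eq. (12) for a single wave-packet
    mask = (t >= t_fit[0]) & (t <= t_fit[1])
    slope, _ = np.polyfit(t[mask], np.log(d[mask]), 1)
    return slope
```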
In the classical limit, the strength of chaos at different energy densities can be also connected to entropy generation by looking at the Kolmogorov-Sinai (KS) entropy [40, 41]. This is defined from the observation that, in general, \(N\) exponential terms in the form of \(\exp(\lambda_{i}t)\) contribute to the growth of the distance in eq. (12), with non-negative _Lyapunov exponents_ hierarchically ordered as \(\lambda_{L}=\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{N}\geq 0\)[45, 59, 60]. The KS entropy is then just the sum \(\Lambda_{\text{KS}}=\sum_{i=1}^{N}\lambda_{i}/N\): classically, \(\Lambda_{\text{KS}}\) quantifies the rate of spreading of coarse-grained volumes in phase space [61], while its quantum counterpart is believed to describe the entanglement growth at early times for a wide range of systems [46, 47, 48] and to be a stronger indicator of scrambling dynamics than the single \(\lambda_{L}\)[44]. As shown in Refs. [46, 48], the eigenvalues of the fluctuation matrix
\[\mathbf{G}_{jl}(t)=\frac{1}{2\hbar}\langle\left[\left(\hat{z}_{j}(t)-\left\langle \hat{z}_{j}(t)\right\rangle\right)\left(\hat{z}_{l}(t)-\left\langle\hat{z}_{l} (t)\right\rangle\right)+\left(\hat{z}_{l}(t)-\left\langle\hat{z}_{l}(t)\right \rangle\right)\left(\hat{z}_{j}(t)-\left\langle\hat{z}_{j}(t)\right\rangle \right)\right]\rangle. \tag{14}\]
diverge as \(\exp(\lambda_{i}t)\) in the semi-classical limit. Then, the KS entropy is straightforwardly extracted from the growth rate of the quantity
\[S_{\text{KS}}(t)=\frac{1}{N}\overline{\log\det[\mathbf{G}_{ij}(t)]_{1\leq ij \leq N}}\sim\Lambda_{\text{KS}}t \tag{15}\]
on intermediate time-scales. We plot \(S_{\rm KS}(t)\) in Fig. 2 (left) and, in Fig. 2 (right), its corresponding slope \(\Lambda_{\rm KS}\), showing that the maximal chaos located at \(E=0\) can also be detected by the KS entropy.
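A rough classical surrogate of eqs. (14)-(15) can be built from the same trajectory bundle, replacing the quantum expectation values in eq. (14) by ensemble averages; this requires more trajectories than degrees of freedom, and the fit window is again an illustrative choice.

```python
import numpy as np

def ks_entropy_rate(z_traj, t, hbar=1e-8, t_fit=(2.0, 8.0)):
    """Lambda_KS as the growth rate of log det G / N (eq. 15), with G the
    covariance of the N spin coordinates across the bundle divided by hbar."""
    n_samples, n_times, twoN = z_traj.shape
    N = twoN // 2
    assert n_samples > N, "need more trajectories than spins for a non-singular G"
    s_ks = np.empty(n_times)
    for k in range(n_times):
        G = np.cov(z_traj[:, k, :N], rowvar=False) / hbar   # sigma block of eq. (14)
        _, logdet = np.linalg.slogdet(G)
        s_ks[k] = logdet / N
    mask = (t >= t_fit[0]) & (t <= t_fit[1])
    slope, _ = np.polyfit(t[mask], s_ks[mask], 1)
    return slope
```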
## 4 Chaos, Ergodicity and Energy Landscape
In this section we elaborate further on the results of the previous section and try to provide a qualitative interpretation for the observed maximal chaos around energy \(E=0\) in the PSM. In particular, a natural question is whether this result can be understood from the relaxation dynamics of the PSM at fixed energy density, as probed for example by the spin correlation function, or whether it is related to properties of the energy landscape. As we are going to show, the maximal chaos around \(E=0\) in the PSM occurs when the spin relaxation is the fastest and when the complexity of the underlying energy landscape is maximal.
The relaxation dynamics for both classical [17, 18] and quantum [22] spin glasses is most often studied in presence of a finite temperature bath. Relaxation is usually defined in terms
Figure 2: **(Color online) (Left)** Time-evolution of the quantity \(S_{\rm KS}\) defined in eqs. (14) and (15), reported for few paradigmatic energy densities. The average over the initial condition is performed extracting 30 random configurations from each of the \(\mathcal{N}_{s}=30\) wave-packets, defined in eq. (8) and obtained through the classical annealing algorithm. The dynamics is integrated up to a time \(t_{max}=80\). The log-average over the disorder is taken over \(\mathcal{N}_{d}=96\) configurations. **(Right)** Kolmogorov-Sinai entropy \(\Lambda_{\rm KS}\), defined in eq. (15), obtained through a linear fit of \(S_{\rm KS}\) over a time window \([t_{I},10]\), where \(t_{I}\) is beyond the scale of oscillations at early time displayed by \(S_{\rm KS}\), for each \(E\).
of the correlation function
\[C(t,t^{\prime})=\frac{1}{2N}\sum_{i=1}^{N}\overline{\langle\hat{\sigma}_{i}(t) \hat{\sigma}_{i}(t^{\prime})+\hat{\sigma}_{i}(t^{\prime})\hat{\sigma}_{i}(t) \rangle}. \tag{16}\]
At high temperatures, the function \(C(t_{w},t_{w}+\tau)\) becomes approximately time-translation invariant for moderately high values of \(t_{w}\), and decays to zero for large \(\tau\), indicating that the underlying dynamics of the system is ergodic. However, at sufficiently low temperatures, the system may exhibit the so-called '_weak ergodicity breaking_ scenario' [17, 22, 62] (see Ref. [18] for a review), meaning that
\[\lim_{t_{w}\to\infty}C(t_{w},t_{w}+\tau)=q_{1}+C_{st}(\tau)\, \tag{17}\]
with a finite dynamical overlap \(q_{1}>0\) and where again \(C_{st}(\tau)\) vanishes for \(\tau\to\infty\), determining a non-ergodic dynamics. In spin glasses, ergodicity breaking is usually accompanied by a breaking of time-translational invariance in \(C(t_{w},t_{w}+\tau)\) a phenomenon usually referred to as _aging_[18, 63]: \(C(t_{w},t_{w}+\tau)\) has a plateau around \(q_{1}\), with a length that increases as \(t_{w}\) grows (and diverging for \(t_{w}\to\infty\)), before eventually decaying to zero for longer time-scales. Here we are interested instead in the Hamiltonian relaxation dynamics, starting from fixed energy initial conditions. We note that the Hamiltonian dynamics of both classical and quantum PSMs starting from a finite temperature state has been studied recently [25, 58]. To this extent we compute \(C(t_{w},t_{w}+\tau)\) in the TWA formalism at finite energy density \(E\), making use of eq. (3) and of the identity
\[\frac{1}{2}\{\hat{\sigma}_{i}(t)\hat{\sigma}_{i}(t^{\prime})+\hat{\sigma}_{i}( t^{\prime})\hat{\sigma}_{i}(t)\}_{W}=\sigma_{i}(t)\sigma_{i}(t^{\prime}). \tag{18}\]
The results in Fig. 3 (a) show that the correlation function undergoes a temporal crossover from high energies, where it displays wide oscillations, to low energies, where a plateau around a finite \(q_{1}\) appears at low energies. Quite interestingly, we see that at the maximally chaotic point \(E=0\), we observe the fastest relaxation of the correlation function, decaying to zero with few oscillations, again compatibly with Refs. [29, 30]. Upon decreasing further the energy below \(E=0\) the dynamics slow down and a plateau \(q_{1}\) starts to appear. Although within the simulation times we used it is not possible to access the real \(t\to\infty\) limit and understand precisely whether the length of the plateau is finite or infinite, we can give an estimate of \(q_{1}\) as the asymptotic value of \(C(t_{w},t_{w}+\tau)\) at large \(\tau\) and fixed \(t_{w}\). In Figure 3-(b) we show \(q_{1}\) becomes nonzero below a certain energy threshold (estimated to be around \(E\simeq-0.38\)) and that it increases as \(E\) further decreases below the threshold. At the same time, the profiles of the correlation function shown in Fig. 3-(c) lose time-translation invariance again below \(E\simeq-0.38\): these results are confirmed by the plots in Appendix C obtained for simulations on a longer time-scale. From the results discussed above, we suggest that an ergodicity breaking transition takes place at low energies, at an energy \(E_{d}\lesssim-0.38\). To the best of our knowledge, this is the first attempt to estimate the ergodicity breaking transition point at fixed energy density \(E\) in the PSM.
We now argue that both maximal chaoticity and fastest spin relaxation emerge alongside maximal complexity in the topology of the potential \(V_{J}(\mathbf{\sigma})\) from eq. (2) at the \(E=0\) level surface. To understand this connection, we first observe that the profile of the correlation
function in Fig. 3-(a) can be associated to the typical behaviour of the underlying trajectories, as sketched in Fig. 3-(d): while the regular oscillations at high energies are due to an underlying uniform circular motion on the \(N\)-sphere, in the limit of low energies the trajectories oscillate in a well around a local minimum of \(V_{J}(\mathbf{\sigma})\), whose amplitude can be roughly estimated as the typical distance between two configurations of the same trajectory, observed at large times
Figure 3: **(a)** Time-dependence of the correlation function, at fixed \(t_{w}=40\). **(b)** Asymptotic value \(q_{1}\) of the correlation functions \(C(t_{w},t_{w}+\tau)\) reported in panel (a), obtained through the time-average of the latter over \(\tau\in[39,40]\). **(c)** Comparison between the profiles of \(C(t_{w},t_{w}+\tau)\) obtained by fixing different values of \(t_{w}\). Each panel corresponds to a different energy density \(E\). **(d)** Time-evolution of three typical orbits, whose initial conditions are obtained by extracting one configuration from the distribution in eq. (11), at different energy densities \(E\) and for the same configuration of the disorder. The orbits evolve in a 200-dimensional phase space and are projected onto the two axes defined by the initial spin configuration \(\mathbf{\sigma}(0)\) and the initial momentum \(\mathbf{\Pi}(0)\).
separation \(\tau\):
\[\sum_{i=1}^{N}[\sigma_{i}(t_{w}+\tau)-\sigma_{i}(t_{w})]^{2}=2N-2\sum_{i=1}^{N}\sigma_{i}(t_{w}+\tau)\sigma_{i}(t_{w})\simeq 2N(1-q_{1}). \tag{19}\]
Chaos and relaxation emerge in between these two trivial limits, where the trajectories are scattered among the neighborhoods of the stationary configurations of the dynamics in eq. (6) and explore the whole configuration space. Within any manifold at fixed energy density \(E\), the stationary configurations can be defined as solutions of the equations (see also Ref. [64])
\[\begin{cases}-\dfrac{\partial V_{J}}{\partial\sigma_{i}}+pE\sigma_{i}=0\\ \sum_{i}\sigma_{i}^{2}=N\\ \Pi_{i}=0\.\end{cases} \tag{20}\]
The average number of solutions of eq. (20), which are also in a one-to-one correspondence with the stationary points of the potential \(V_{J}(\mathbf{\sigma})\) on the \(N\)-sphere, can be expressed as:
\[\overline{\mathcal{N}(E)}=\int D\sigma\overline{\prod_{i}\delta\Big{(}-\dfrac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k}\sigma_{l}-pE\sigma_{i}\Big{)}\Big{|}\det\Big{(}-\dfrac{p(p-1)}{p!}\sum_{k}J_{ijk}\sigma_{k}-pE\delta_{ij}\Big{)}\Big{|}}\, \tag{21}\]
For the classical potential \(V(\mathbf{\sigma})\) in eq. (2) and in the large-\(N\) limit, the number of stationary points scales exponentially as \(\overline{\mathcal{N}(E)}\simeq\exp\{N\Sigma(E)\}\)[10, 51], where \(\Sigma(E)\) is usually referred to as _complexity_. At the same time, the stability of such stationary points is characterized by the index \(k(E)\), where \(Nk(E)\) is the average number of unstable directions around every stationary point. In Appendix E, we derive the analytical expressions for both \(\Sigma(E)\) and \(k(E)\) as functions of \(E\), finding that:
\[\Sigma(E)=\dfrac{z(E)^{2}}{p(p-1)}+\log(z(E)-pE)-E^{2}-\dfrac{1}{2}\log\dfrac{ p}{2}+\dfrac{1}{2}\, \tag{22}\]
where
\[z(E)=\begin{cases}&p\big{(}E+\sqrt{E^{2}-2(p-1)/p}\big{)}/2\,\ \text{if}\ E<|E_{th}|\equiv\sqrt{2(p-1)/p}\\ &p\big{(}E-\sqrt{E^{2}-2(p-1)/p}\big{)}/2\,\ \text{if}\ E>|E_{th}|\end{cases}\, \tag{23}\]
and
\[k(E)=\begin{cases}&0\,\ \text{if}\ E<E_{th}\\ &\frac{p}{2\pi(p-1)}E\sqrt{E_{th}^{2}-E^{2}}+\frac{1}{\pi}\arctan(\frac{\sqrt{ E_{th}^{2}-E^{2}}}{E})\,\ \text{if}\ |E|<|E_{th}|\\ &1\,\ \text{if}\ E>|E_{th}|\end{cases}. \tag{24}\]
Intuitively, the value \(E_{th}=-\sqrt{2(p-1)/p}\) appearing in the previous formulas represents the _threshold_ energy density below which the stationary points are typically local minima of \(V(\mathbf{\sigma})\), so that \(k(E)=0\) (similarly, \(-E_{th}\) is the energy density above which all stationary points of \(V(\mathbf{\sigma})\) are typically local maxima). Both \(\Sigma(E)\) and \(k(E)\) are plotted in Fig. 4. The first observation that we make is that the complexity \(\Sigma(E)\) has a maximum on the \(E=0\) surface, where stationary points are predominantly saddles of \(V_{J}(\mathbf{\sigma})\), surrounded on average
by half stable and half unstable directions, as \(k(E=0)=1/2\). Second, we notice that the complexity \(\Sigma(E)\) vanishes at two points \(E=\pm E_{0}\), with \(E_{0}>|E_{th}|\). Beyond \(E_{0}\), we have \(\Sigma(E)<0\), which implies a vanishing number of stationary configurations, so we interrupt the plot at \(E_{0}\). Intuitively, \(-E_{0}\) (\(E_{0}\)) can be interpreted as the typical value of the absolute minimum (maximum) of \(V_{J}(\mathbf{\sigma})/N\)[64], which is always finite for \(\mathbf{\sigma}\) lying on the \(N\)-sphere. We observe that our results for the complexity at fixed energy are compatible with the ones in the literature obtained using different methods [51, 65]. The maximum of \(\Sigma(E)\) at \(E=0\) unveils a correlation between the number of saddles in the energy landscape and the maximal chaos, detected by \(\lambda_{L}\). In fact, it is well established that the interplay between unstable stationary points is responsible for chaotic behaviour in classical Hamiltonian systems [66, 67, 4]: for the PSM we observe that their number is connected to the _strength_ of chaos, measured by \(\lambda_{L}\).
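For reference, the complexity curve of Fig. 4 can be evaluated directly from eqs. (22)-(23). The short sketch below is a minimal numerical evaluation (not the code used for the figures) adopting the branch convention spelled out in Appendix E, \(z=z_{+}\) for \(E<|E_{th}|\) and \(z=z_{-}\) for \(E>|E_{th}|\); one can check that it gives \(\Sigma(0)=\frac{1}{2}\log(p-1)\) and that \(\Sigma(E)\) changes sign at \(|E|\approx 1.17\) for \(p=3\).

```python
import numpy as np

def complexity(E, p=3):
    """Complexity Sigma(E) of eqs. (22)-(23), using the branch convention of
    Appendix E: z = z_+ for E < |E_th| and z = z_- for E > |E_th|."""
    E = np.asarray(E, dtype=complex)
    Eth = np.sqrt(2.0 * (p - 1) / p)
    root = np.sqrt(E**2 - Eth**2)                 # imaginary for |E| < |E_th|
    z = np.where(E.real < Eth, 0.5 * p * (E + root), 0.5 * p * (E - root))
    G = z**2 / (p * (p - 1)) + np.log(z - p * E)
    return G.real - E.real**2 - 0.5 * np.log(p / 2.0) + 0.5

# sanity checks: Sigma(0) = (1/2) log(p-1), and Sigma(E) changes sign around
# |E| ~ 1.17 for p = 3 (typical energy density of the deepest minima/maxima)
print(complexity(0.0), 0.5 * np.log(2.0))
E = np.linspace(-1.3, 1.3, 261)
print(E[complexity(E) > 0.0][[0, -1]])
```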
We conclude our analysis by discussing the connection between chaos and ergodicity in the classical PSM. In the TWA framework we employed, chaos is probed by the Lyapunov exponents while ergodicity is determined by the long-time behavior of the correlation function. However, recent works [68, 50] have shown a possibly universal connection between ergodicity in quantum systems, defined there as the emergence of a random matrix behaviour [69], and the _fidelity susceptibility_ [49], which characterizes the sensitivity of the eigenstates to an external perturbation. Specifically, they showed that an ergodic Hamiltonian exhibits a scaling behavior \(\chi\sim\omega_{L}^{-1}\) against the mean level spacing \(\omega_{L}\), consistent with random matrix theory predictions, while for an integrable [70, 71, 72, 73, 74] or disorder-localized [75, 76, 77, 74] system \(\chi\) takes a value of order one. Then, a stronger divergence of the fidelity as \(\chi\sim\omega_{L}^{-\alpha}\), with \(\alpha>1\), indicates the approach to a non-ergodic (either integrable or disorder-localized) point. Such scaling is expected to be retrieved in a region around the non-ergodic point which becomes exponentially small in the system size when taking the thermodynamic limit. Despite such a complete picture, the fidelity susceptibility has never been tested for semi-classical systems. Here we use it as a tool to identify ergodicity in our \(p\)-spin model, where ergodicity and its breaking are controlled by the energy density \(E\). In particular, here we perturb the Hamiltonian in Eq. (1) with local magnetic fields \(B_{i}\), summarized in the extra term
\[\hat{H}_{1}=-\sum_{i}B_{i}\hat{\sigma}_{i}. \tag{25}\]
Figure 4: Plots of the complexity function \(\Sigma(E)\) (left) and of the average stability index \(k(E)\) (right), defined respectively in eqs.(78) and (79), for \(p=3\).
Then, by perturbation theory, the sensitivity \(\chi_{n}^{(i)}\) of the \(n\)-th eigenstate to the magnetic field \(B_{i}\) (setting \(B_{j}=0\) on every other site \(j\)) is defined by
\[\langle n(0)|n(B_{i})\rangle=1-\frac{1}{2}\chi_{n}^{(i)}B_{i}^{2}+O(B_{i}^{3})\, \tag{26}\]
and we define the fidelity susceptibility as its average \(\chi\) over the initial condition in eq. (7) and the disorder configurations:
\[\chi=\frac{1}{N}\sum_{i=1}^{N}\sum_{n=0}^{\infty}\overline{\langle n|\hat{ \rho}|n\rangle\,\chi_{n}^{(i)}}. \tag{27}\]
In principle, the fidelity defined in eq. (27) can be computed in our semi-classical framework because, as already observed in Ref. [50], \(\chi\) can be expressed in terms of the spectral function \(\tilde{C}_{av}(\omega)\), that is, the Fourier transform of the time-averaged correlation function
\[C_{av}(\tau)=\lim_{\mathcal{T}\to\infty}\frac{1}{\mathcal{T}}\int_{0}^{ \mathcal{T}}dt_{w}C(t_{w},t_{w}+\tau). \tag{28}\]
The two are related by the equation (proven in detail in Appendix D):
\[\chi=\int_{|\omega|>\omega_{L}}\frac{d\omega}{2\pi}\frac{\tilde{C}_{av}( \omega)}{\omega^{2}}\, \tag{29}\]
where \(\omega_{L}\) is again the average quantum level spacing.
In practice, two obstacles prevent us from using Eq. (29) directly. The first one is that we do not have direct access to the spacing \(\omega_{L}\): we therefore study the regularized fidelity (see also Ref. [68])
\[\chi_{\mu}=\int\frac{d\omega}{2\pi}\frac{\omega^{2}}{(\omega^{2}+\mu^{2})^{2} }\tilde{C}_{av}(\omega)\, \tag{30}\]
where we introduced the cutoff \(\mu\) to suppress the contribution to the integral from frequencies \(|\omega|\lesssim\mu\); it plays a role equivalent to the one played by \(\omega_{L}\) in genuinely quantum systems. We determine the asymptotic profile of \(\chi_{\mu}\) by analyzing its behaviour as \(\mu\to 0\). The second, more practical obstacle is that in our simulation we do not have access to the infinite-time average, so that the integral in eq. (28) is performed over a finite time-window \([0,\mathcal{T}]\). While in the ergodic phase this is a good approximation of the long-time average, due to time-translation invariance, in the non-ergodic one \(\chi\) always depends on the choice of the time-window and converges to the definition in eq. (27) only in the limit \(\mathcal{T}\to\infty\). However, as shown in Appendix D, the qualitative profiles we retrieve for \(\chi_{\mu}\) are the same for a wide range of \(\mathcal{T}\) between \(0\) and the maximum integration time \(t_{max}\), so that here we can focus on the specific case of \(\mathcal{T}=t_{max}/2\). We plot \(\chi_{\mu}\), as a function of \(E\) and for several values of \(\mu\), in Fig. 5-(a): its profile has a peak close to the estimated ergodicity breaking energy density \(E\simeq-0.38\), and the position of the maximum drifts slightly to the left as \(\mu\) decreases. We also observe that, in general, a natural low-frequency cutoff \(\Delta\omega\sim 2\pi/t_{max}\) emerges when discretizing the integral in eq. (30) in our finite-time simulations, so that the asymptotic behaviour of \(\chi_{\mu}\) can be studied only up to \(\mu\gtrsim\Delta\omega\). Thus, to refine our analysis, we compute \(\chi_{\mu}\) for a dynamics integrated up to a larger \(t_{max}\), so that \(\Delta\omega<3\mu\) for all the values of \(\mu\) we investigate: the new result, shown in Fig. 5-(b), is qualitatively the same as the one shown in Fig. 5-(a), further
validating our analysis. To complete the comparison between our semi-classical analysis and the one performed in Ref. [50], we also analyze the scaling behavior of \(\chi_{\mu}\) against \(\mu\) and find that the fidelity susceptibility scales as \(\chi_{\mu}\sim 1/\mu^{\alpha}\) in the range of values of \(\mu\) explored, as shown in Fig. 5-(c). We compute the corresponding exponent \(\alpha\) as a function of the energy density \(E\) and plot it in Fig. 5-(d): in the ergodic phase, we find that \(\alpha\) is slightly greater than 1 (see inset in Fig. 5-(d)), resulting in a scaling approximately consistent with random matrix theory 2, while \(\alpha\) exhibits a maximum of \(\alpha\simeq 1.8\) at \(E\simeq-0.4\), close to our estimate for the ergodicity breaking transition. Further insight is gained when investigating the profiles of the rescaled fidelities \(\mu\chi_{\mu}\) against \(E\), as done in the inset of Fig. 5-(d): at high energy densities, the profiles collapse in a region which expands toward lower energies as we decrease \(\mu\). This collapse will in general break down at the ergodicity breaking transition point, where the maximum of \(\chi_{\mu}\) is also expected to occur. It is also worth mentioning that, from a deeper inspection of the collapse of the various profiles, we can in principle extract the Thouless time [78], defined as the typical relaxation time-scale \(\tau_{th}(E)\) of the correlation function at fixed energy density \(E\), using the following prescription: first we define the cut-off \(\mu_{E}\) as the largest number such that the profiles of \(\chi_{\mu}\) collapse for all energy densities greater than \(E\) and for all \(\mu>\mu_{E}\); then the Thouless time can be easily obtained as \(\tau_{th}(E)\sim\mu_{E}^{-1}\)[68]. We expect that \(\tau_{th}(E)\) diverges as the dynamics approaches the ergodicity breaking point from above.
Footnote 2: As our analysis is limited by the finite simulation times, we do not exclude that an even better agreement with random matrix predictions may be reached considering a larger \(t_{max}\) and consequently a larger set of values for \(t_{w}\)
As the cutoff \(\mu\) plays the same role as the lowest level spacing in our analysis, these findings are consistent with those in Refs. [50, 68], which show that the fidelity exhibits the strongest divergence with \(\mu\) when the corresponding spin relaxation dynamics slows down. We therefore conclude that the fidelity susceptibility could be a good indicator of ergodicity even in classical systems, which warrants further investigation.
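Operationally, eq. (30) reduces to a one-dimensional quadrature once \(C_{av}(\tau)\) is available on a grid. The snippet below is a minimal sketch (not the analysis code used for Fig. 5): it symmetrizes \(C_{av}\) in \(\tau\) (cf. footnote 3), obtains the spectral function as a cosine transform, and integrates it against the kernel \(\omega^{2}/(\omega^{2}+\mu^{2})^{2}\); the frequency grid and cutoff values are illustrative assumptions.

```python
import numpy as np

def chi_mu(C_av, dt, mu, omega_max=50.0, n_omega=2001):
    """Regularized fidelity susceptibility of eq. (30).

    C_av : time-averaged correlation C_av(tau) on tau = 0, dt, 2*dt, ...
           (finite-window average of C(t_w, t_w + tau), cf. eq. (28)).
    The spectral function is the Fourier transform of the symmetrized C_av,
    here computed as a cosine transform on an assumed frequency grid.
    """
    tau = np.arange(C_av.size) * dt
    omega = np.linspace(-omega_max, omega_max, n_omega)
    # C~_av(omega) = int dtau e^{-i omega tau} C_av(|tau|)
    #              = 2 int_0^infty dtau cos(omega tau) C_av(tau)
    spectral = 2.0 * np.trapz(np.cos(np.outer(omega, tau)) * C_av, tau, axis=1)
    kernel = omega**2 / (omega**2 + mu**2) ** 2
    return np.trapz(kernel * spectral, omega) / (2.0 * np.pi)

# usage sketch:
# chis = [chi_mu(C_av, dt=0.05, mu=m) for m in (0.07, 0.1, 0.2, 0.4)]
```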
## 5 Conclusions
In this work we have investigated the unitary dynamics of the quantum \(p\)-spin glass spherical model in the limit of small \(\hbar\), where the dynamics is effectively classical, and for the paradigmatic case of \(p=3\). Fixing the initial condition as an ensemble of narrow wave-packets centered on a fixed classical energy shell, at energy density \(E\), we have investigated the chaotic dynamics of the model through the exponential divergence of the classical trajectories evolving from the same wave-packets. We have found that the corresponding exponent \(\lambda_{L}\) is maximised when \(E\) is close to 0. We have found a similar behavior in a different chaos estimator, the Kolmogorov-Sinai entropy, which also shows a pronounced maximum around \(E=0\).
To gain further insights into this result we have investigated the relaxation dynamics of the semiclassical PSM at fixed energy density. We have shown that the spin correlation function displays a crossover in energy, from wide oscillations at high-energy to aging and weak-ergodicity breaking at low energy, consistently with the known results obtained at finite temperature. Interestingly we have found that around \(E=0\), where the chaos is maximized, the spin relaxation dynamics is the fastest. We give a physical interpretation of all our
results in terms of the typical behaviour of the underlying trajectories, which either perform a uniform circular motion at asymptotically high energies or oscillate, at low energies, around a local minimum of the energy landscape. In between these two limits, we suggest that chaos emerges as the trajectories are scattered over the exponentially many saddles of the underlying energy landscape.
Figure 5: **(Color online)****(a)** Fidelity susceptibility \(\chi_{\mu}\) from eq. (30), shown as a function of \(E\) and fixing several values of the cut-off \(\mu\). The data are obtained from a dynamics up to time \(t_{max}=80\), with the same parameters described in Fig. 3. The average time window for the correlation function in eq. (28) is set to \([0,\mathcal{T}]\), with \(\mathcal{T}=t_{max}/2=40\). **(b)** Fidelity susceptibility \(\chi_{\mu}\) from eq. (30), plotted as in panel (a), this time for a dynamics integrated up to time \(t_{max}=320\). Here the average over the initial condition is performed over 5 random configurations extracted from each of the \(\mathcal{N}_{s}=10\) wave-packets constructed by the classical annealing algorithm. We average over \(\mathcal{N}_{d}=42\) disorder configurations. **(c)** Same data as panel (b). The fidelity susceptibility \(\chi_{\mu}\) is shown as a function of the cutoff \(\mu\), on a log-log scale, for some fixed values of the energy density \(E\). **(d)** Exponent \(\alpha(E)\) obtained by a linear fit of \(\log\chi_{\mu}\) against \(-\log\mu\), at several fixed values of the energy density \(E\). For each \(E\), the data used for the fit are the ones from panel (b), for all the values of \(\mu\) reported there.
Indeed, a calculation of the number of stationary configurations shows that the complexity is also maximal at the same energy. Finally, we also gave a semi-classical definition of the fidelity susceptibility. We found that the fidelity has a single maximum, as a function of \(E\), between the ergodic and the non-ergodic phase, a result reminiscent of the ones found in Ref. [50].
The calculations made here could in principle be directly extended to the transverse-field counterpart of the \(p\)-spin glass model, which has been recently observed experimentally [32] and where the energy minima exhibit a more complicated structure [79].
## Acknowledgements
We thank L. Cugliandolo and V. Ros for insightful comments. L.C. wants to thank A. Lerose, S. Pappalardi and D. Venturelli for interesting discussions about some of the techniques employed in this work. M.S. acknowledges support from the ANR grant "NonEQuMat" (ANR-19-CE47-0001). A.P. acknowledges support from the NSF Grant No. DMR-2103658, and AFOSR Grant No. FA9550-21-1-0342.
## Appendix A The Truncated Wigner approximation for the p-spin spherical model
In this Appendix we show that, for the quantum spin glass defined in eq. (1) of the main text, the dynamics in the Truncated Wigner Approximation (TWA) is ruled by eqs. (6), and that such approximation becomes exact in the classical limit. We begin by rewriting the Weyl symbol
\[O_{W}(\boldsymbol{\sigma},\boldsymbol{\Pi},t)=\int d\boldsymbol{\xi}\Big{\langle} \boldsymbol{\sigma}-\frac{\boldsymbol{\xi}}{2}\Big{|}\mathcal{O}(\boldsymbol {\hat{\sigma}}(t),\boldsymbol{\hat{\Pi}}(t))\Big{|}\boldsymbol{\sigma}+\frac {\boldsymbol{\xi}}{2}\Big{\rangle}\cdot\exp\Big{[}i\frac{\boldsymbol{\Pi} \cdot\boldsymbol{\xi}}{\hbar}\Big{]}\ \, \tag{31}\]
for a generic operator
\[\mathcal{O}(\boldsymbol{\hat{\sigma}}(t),\boldsymbol{\hat{\Pi}}(t))=e^{i \hat{H}t}\mathcal{O}(\boldsymbol{\hat{\sigma}},\boldsymbol{\hat{\Pi}})e^{-i \hat{H}t}. \tag{32}\]
evolving in the Heisenberg picture. As shown in Ref. [56], eq. (31) can be represented in a path integral form, suitable to study both the semi-classical and the thermodynamic limit. Without reproducing the details of the calculation, here we just quote the final result:
\[O_{W}(\boldsymbol{\sigma},\boldsymbol{\Pi},t)= \int D\boldsymbol{\sigma}\int D\boldsymbol{\Pi}\int D\boldsymbol{ \xi}\int D\boldsymbol{\eta}\,O_{W}(\boldsymbol{\sigma}(t),\boldsymbol{\Pi}(t ))\cdot \tag{33}\] \[\cdot\exp\Big{\{}\frac{i}{\hbar}\int_{0}^{t}d\tau\Big{[} \boldsymbol{\eta}(\tau)\cdot\frac{\partial\boldsymbol{\sigma}(\tau)}{ \partial\tau}-\boldsymbol{\xi}(\tau)\cdot\frac{\partial\boldsymbol{\Pi}(\tau)} {\partial\tau}-\mathcal{H}_{W}\Big{(}\boldsymbol{\sigma}(\tau)+\frac{ \boldsymbol{\xi}(\tau)}{2},\boldsymbol{\Pi}(\tau)+\frac{\boldsymbol{\eta}( \tau)}{2}\Big{)}\] \[\qquad\qquad+\mathcal{H}_{W}\Big{(}\boldsymbol{\sigma}(\tau)- \frac{\boldsymbol{\xi}(\tau)}{2},\boldsymbol{\Pi}(\tau)-\frac{\boldsymbol{ \eta}(\tau)}{2}\Big{)}+z(\tau)\boldsymbol{\xi}\cdot\boldsymbol{\sigma}\Big{]} \Big{\}}\,\]
with initial conditions \(\boldsymbol{\sigma}(0)=\boldsymbol{\sigma}\), \(\boldsymbol{\xi}(0)=\boldsymbol{\xi}\) and \(\boldsymbol{\Pi}(0)=\boldsymbol{\Pi}\). The Weyl symbol of the Hamiltonian is defined as \(\mathcal{H}_{W}\big{(}\boldsymbol{\sigma},\boldsymbol{\Pi}\big{)}=\boldsymbol {\Pi}^{2}/2M+V_{J}(\boldsymbol{\sigma})\), and we inserted a term proportional to the Lagrange multiplier \(z(\tau)\) in the action, to enforce the spherical constraint (see also Ref. [22]).
It is straightforward to see that TWA is equivalent to a (classical) expansion of the action in the integrand of eq. (33): indeed we obtain, at leading order, that
\[\begin{split}-\mathcal{H}_{W}\Big{(}\mathbf{\sigma}(\tau)+\frac{\mathbf{ \xi}(\tau)}{2},\mathbf{\Pi}(\tau)+\frac{\mathbf{\eta}(\tau)}{2}\Big{)}&+ \mathcal{H}_{W}\Big{(}\mathbf{\sigma}(\tau)-\frac{\mathbf{\xi}(\tau)}{2},\mathbf{\Pi}( \tau)-\frac{\mathbf{\eta}(\tau)}{2}\Big{)}\sim\\ -\mathbf{\xi}(\tau)\cdot\frac{\partial\mathcal{H}_{W}(\mathbf{\sigma}( \tau),\mathbf{\Pi}(\tau))}{\partial\mathbf{\sigma}}&+\mathbf{\eta}(\tau) \cdot\frac{\partial\mathcal{H}_{W}(\mathbf{\sigma}(\tau),\mathbf{\Pi}(\tau))}{ \partial\mathbf{\Pi}}+O(\mathbf{\eta}^{2},\mathbf{\xi}^{2})\.\end{split} \tag{34}\]
Integrating out the variables \(\mathbf{\eta}(\tau)\) and \(\mathbf{\xi}(\tau)\), we obtain a \(\delta\)-function constraint on the trajectories \(\mathbf{\sigma}(\tau)\) and \(\mathbf{\Pi}(\tau)\):
\[\begin{split}\begin{cases}\frac{\partial\mathbf{\sigma}}{\partial \tau}&=\frac{\mathbf{\Pi}}{M}\\ \frac{\partial\mathbf{\Pi}}{\partial\tau}&=-\frac{ \partial V_{J}(\mathbf{\sigma},\mathbf{\Pi})}{\partial\mathbf{\sigma}}-z(t)\mathbf{\sigma}\,\end{cases}\end{split} \tag{35}\]
The Lagrange multiplier is obtained by imposing the constraint \(\mathbf{\sigma}^{2}=N\), that is, by imposing that the total radial force appearing in the second of eqs. (35) is the correct centripetal one:
\[-\frac{1}{N}\frac{\partial V_{J}(\mathbf{\sigma},\mathbf{\Pi})}{\partial\mathbf{\sigma}} \cdot\mathbf{\sigma}-z(t)=-\frac{\mathbf{\Pi}^{2}}{MN}. \tag{36}\]
Replacing the expression of \(z(t)\) obtained in this way into eqs. (35), we obtain exactly the eqs. (6) of the main text. Such expansion at leading order is exact in the limit \(\hbar\to 0\) where the dynamics is determined by the saddle point of the action appearing in the path-integral.
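As a concrete illustration of eqs. (35)-(36), the snippet below is a minimal sketch (not the production code used in this work) that integrates the constrained classical dynamics for \(p=3\) with a fourth-order Runge-Kutta step, evaluating the Lagrange multiplier \(z(t)\) from eq. (36) at every stage. The coupling normalization \(\overline{J_{ijk}^{2}}=p!/(2N^{p-1})\) and the choice \(M=1\) are assumptions of this sketch.

```python
import numpy as np

def make_couplings(N, rng):
    """Symmetric Gaussian couplings J_ijk for p = 3; the variance
    p!/(2 N^{p-1}) = 3/N^2 is an assumed (standard) normalization."""
    g = rng.normal(size=(N, N, N))
    J = (g + g.transpose(0, 2, 1) + g.transpose(1, 0, 2) + g.transpose(1, 2, 0)
         + g.transpose(2, 0, 1) + g.transpose(2, 1, 0)) / 6.0
    return J * np.sqrt(18.0) / N

def force(J, s):
    """F_i = -dV/ds_i = (1/2) sum_{jk} J_ijk s_j s_k for the p = 3 potential."""
    return 0.5 * np.einsum('ijk,j,k->i', J, s, s)

def deriv(J, s, pi, M=1.0):
    """Right-hand side of eqs. (35), with z(t) fixed by eq. (36)."""
    N = s.size
    F = force(J, s)
    z = pi @ pi / (M * N) + (F @ s) / N       # Lagrange multiplier z(t)
    return pi / M, F - z * s

def rk4_step(J, s, pi, dt):
    k1 = deriv(J, s, pi)
    k2 = deriv(J, s + 0.5 * dt * k1[0], pi + 0.5 * dt * k1[1])
    k3 = deriv(J, s + 0.5 * dt * k2[0], pi + 0.5 * dt * k2[1])
    k4 = deriv(J, s + dt * k3[0], pi + dt * k3[1])
    s_new = s + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    pi_new = pi + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return s_new, pi_new

rng = np.random.default_rng(0)
N, dt, n_steps = 32, 0.02, 2000
J = make_couplings(N, rng)
s = rng.normal(size=N); s *= np.sqrt(N) / np.linalg.norm(s)   # on the sphere
pi = np.zeros(N)
traj = np.empty((n_steps + 1, N)); traj[0] = s
for n in range(n_steps):
    s, pi = rk4_step(J, s, pi, dt)
    traj[n + 1] = s
print("constraint drift:", abs(s @ s / N - 1.0))   # should stay small
```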
## Appendix B Comparison with exponential growth of the OTOC
In this appendix, we show that, in the classical limit of \(\hbar\to 0\), the distance \(d_{J}(t)\) defined in eq. (12) of the main text diverges with the same Lyapunov exponent \(\lambda_{L}\) of the corresponding out-of-time-order correlator (OTOC).
### Classical chaos in dynamical systems
For a classical dynamical system, chaos is defined as the exponential divergence in time of the distance between orbits starting at nearby initial conditions. For example, we consider the system of eqs. (6) of the main text and examine two solutions of these equations: a reference trajectory \(\tilde{\mathbf{z}}(\mathbf{z},t)\), evolving from a generic initial condition \(\mathbf{z}=(\mathbf{\sigma},\mathbf{\Pi})\), and a perturbed one \(\tilde{\mathbf{z}}(\mathbf{z}+\delta\mathbf{z},t)\), evolving from \(\mathbf{z}+\delta\mathbf{z}\) for \(|\delta\mathbf{z}|\ll 1\). Then the reference trajectory is said to be chaotic if the square distance \(\Delta(t)=|\tilde{\mathbf{z}}(\mathbf{z}+\delta\mathbf{z},t)-\tilde{\mathbf{z}}(\mathbf{z},t)|^{2}\) grows exponentially in time or, in a more mathematical language, if the corresponding _Lyapunov exponent_[42]
\[\lambda_{L}=\lim_{t\to\infty}\lim_{\Delta(0)\to 0}\frac{1}{t}\log\frac{ \Delta(t)}{\Delta(0)} \tag{37}\]
is positive.
In practice, for \(\Delta(0)\to 0\) we have that
\[\tilde{\mathbf{z}}(\mathbf{z}+\delta\mathbf{z},t)-\tilde{\mathbf{z}}(\mathbf{ z},t)\simeq\frac{\partial\tilde{\mathbf{z}}(\mathbf{z},t)}{\partial\mathbf{z}} \cdot\delta\mathbf{z}\, \tag{38}\]
so that \(\lambda_{L}\) is also detected by the exponential growth at long times of the matrix elements of the _derivative matrix_\(\mathbf{M}(\mathbf{z},t)=\partial\tilde{\mathbf{z}}(\mathbf{z},t)/\partial \mathbf{z}\).
We remark however that, for classical systems, the Lyapunov exponent \(\lambda_{cl}\) is conventionally defined from the exponential growth of the distance \(\sqrt{\Delta(t)}\), instead of \(\Delta(t)\), so that we have \(\lambda_{L}=2\lambda_{cl}\). However, as explained in the next section, the definition in eq. (37) is the classical limit of the quantum Lyapunov exponent detected by the out-of-time-order correlator (OTOC) and is thus more suitable to make a comparison with the chaotic dynamics in a quantum system.
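In practice, \(\lambda_{L}\) can be read off by evolving a reference trajectory and a slightly displaced copy (for instance with the integrator sketched in Appendix A) and fitting the slope of \(\log\Delta(t)\) before the distance saturates. The helper below is a minimal sketch; the displacement size and fitting window are assumptions, and for a careful study the displaced initial condition should be projected back onto the \(N\)-sphere.

```python
import numpy as np

def lyapunov_exponent(t, delta, t_fit=(2.0, 8.0)):
    """Fit lambda_L of eq. (37): log Delta(t) ~ lambda_L * t over a window
    chosen inside the exponential-growth regime (window is an assumption)."""
    mask = (t >= t_fit[0]) & (t <= t_fit[1])
    slope, _ = np.polyfit(t[mask], np.log(delta[mask]), 1)
    return slope

# usage sketch, reusing make_couplings / rk4_step from the Appendix A snippet:
# s_b = s0 + 1e-6 * rng.normal(size=N)           # slightly displaced copy
# evolve both copies, storing traj_a[n] and traj_b[n] at times t[n], then
# delta = np.sum((traj_a - traj_b) ** 2, axis=1)
# lam = lyapunov_exponent(t, delta)
```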
### The exponential growth of the OTOC
For quantum systems, a generalization of the Lyapunov exponent can be defined from the exponential growth of the average square commutators [33]
\[\mathcal{F}(t)=-\frac{1}{N^{2}\hbar^{2}}\sum_{ij}\left\langle[\hat{\sigma}_{i} (t),\hat{\Pi}_{j}(0)]^{2}\right\rangle\, \tag{39}\]
retrieved also in the corresponding OTOC [37]
\[\mathcal{C}(t)=\frac{1}{N^{2}\hbar^{2}}\sum_{ij}\left\langle\hat{\sigma}_{i} (t)\hat{\Pi}_{j}(0)\hat{\sigma}_{i}(t)\hat{\Pi}_{j}(0)\right\rangle. \tag{40}\]
Such a generalization comes from the observation that, in the limit of small \(\hbar\), the commutator \([\hat{\sigma}_{i}(t),\hat{\Pi}_{j}(0)]/i\hbar\) can be replaced by the corresponding Poisson parenthesis:
\[\mathcal{F}(t)\simeq\frac{1}{N^{2}}\sum_{ij}\left\langle\{\sigma_{i}(t),\Pi_{ j}(0)\}^{2}\right\rangle=\frac{1}{N^{2}}\sum_{ij}\left\langle\left|\frac{ \partial\sigma_{i}(t)}{\partial\Pi_{j}(0)}\right|^{2}\right\rangle\, \tag{41}\]
where the quantum average \(\left\langle\cdot\right\rangle\) is replaced by a suitable average over the classical trajectories. Then, for \(\hbar\to 0\), \(\mathcal{F}(t)\) is expected to grow with the same Lyapunov exponent \(\lambda_{L}\) characterizing the underlying classical dynamical system.
### The exponential growth of the distance
In this subsection we show that, for \(\hbar\to 0\), the distance \(d_{J}(t)\) defined in eq. (12) of the main text also grows exponentially in time, with exponent given by \(\lambda_{L}\). We begin by rewriting each of the averages over a single wave-packet \(|\boldsymbol{\alpha}\rangle\), appearing on the right-hand side of eq. (12), as
\[\frac{1}{2N\hbar}\sum_{i}\left\langle\alpha\right|\left[\hat{z}_{i}(t)-\left \langle\alpha\right|\hat{z}_{i}(t)\left|\alpha\right\rangle\right]^{2}\left| \alpha\right\rangle=\frac{1}{N\hbar}\sum_{i}\int\frac{d^{2N}z}{(2\pi\hbar)^{N }}W_{\boldsymbol{\alpha}}(\mathbf{z})[\tilde{z}_{i}(\mathbf{z},t)-\left\langle \tilde{z}_{i}(\mathbf{z},t)\right\rangle_{\boldsymbol{\alpha}}]^{2}\, \tag{42}\]
where we remind that \(\tilde{\mathbf{z}}(\mathbf{z},t)\) is a trajectory evolving from \(\mathbf{z}=(\sigma,\boldsymbol{\Pi})\) and the average \(\left\langle\cdot\right\rangle_{\boldsymbol{\alpha}}\) on the right-hand side is performed over the coherent wave-packet
\[W_{\alpha}(\mathbf{z})=2^{N}\exp\big{\{}-\frac{(\mathbf{z}-\mathbf{z}_{ \boldsymbol{\alpha}})^{2}}{\hbar}\big{\}}. \tag{43}\]
For \(\hbar\to 0\), \(W_{\alpha}(\mathbf{z})\) becomes a \(\delta\)-function of vanishing width. To investigate the behaviour of eq. (42) in this limit, it is useful to perform the change of variable \(\mathbf{z}=\mathbf{z}_{\boldsymbol{\alpha}}+\sqrt{\hbar}\mathbf{x}\). Then we have:
\[\begin{split}\tilde{\mathbf{z}}(\mathbf{z}_{\boldsymbol{\alpha}}+\sqrt{\hbar}\mathbf{x},t)&\sim\tilde{\mathbf{z}}(\mathbf{z}_{\boldsymbol{\alpha}},t)+\sqrt{\hbar}\frac{\partial\tilde{\mathbf{z}}(\mathbf{z}_{\boldsymbol{\alpha}},t)}{\partial\mathbf{z}}\cdot\mathbf{x}+O(\hbar)\,\\ \left\langle\tilde{\mathbf{z}}(\mathbf{z}_{\boldsymbol{\alpha}}+\sqrt{\hbar}\mathbf{x},t)\right\rangle_{\boldsymbol{\alpha}}&\sim\tilde{\mathbf{z}}(\mathbf{z}_{\boldsymbol{\alpha}},t)+O(\hbar)\.\end{split} \tag{44}\]
Then, plugging eqs. (44) into eq. (42) and performing a little algebra, we obtain
\[\frac{1}{N\hbar}\sum_{i}\left\langle\alpha\right|\left[\hat{z}_{i}(t)-\left\langle\alpha\right|\hat{z}_{i}(t)\left|\alpha\right\rangle\right]^{2}\left|\alpha\right\rangle\sim\frac{1}{2N}\sum_{ij}\left|\frac{\partial\tilde{z}_{i}(\mathbf{z_{\alpha}},t)}{\partial z_{j}}\right|^{2}+O(\hbar). \tag{45}\]
As explained in the first subsection of this Appendix, the terms in the sum on the right-hand side are expected to grow exponentially at long times. However, from eq. (45) we can observe the dynamics of the derivative matrix only for a finite time-window, as the noise produced by the fluctuations of order \(\hbar\) eventually leads to a saturation of the typical distance between trajectories evolving from a neighbourhood of \(\mathbf{z_{\alpha}}\). To obtain the Lyapunov exponent within such a finite time-scale, we also average over the possible configurations of \(\mathbf{z_{\alpha}}\) on the same energy shell, as explained in the main text, and obtain
\[d_{J}(t)=\frac{1}{2N\hbar}\frac{1}{\mathcal{N}_{s}}\sum_{\alpha=1}^{\mathcal{N }_{s}}\sum_{i=1}^{2N}\left\langle\alpha\right|\left[\hat{z}_{i}(t)-\left\langle \alpha\right|\hat{z}_{i}(t)\left|\alpha\right\rangle\right]^{2}\left|\alpha \right\rangle\, \tag{46}\]
which for small \(\hbar\) is expected to grow exponentially with exponent \(\lambda_{L}\), as confirmed also by Fig. 1 of the main text.
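Operationally, eq. (46) is evaluated from the sampled trajectories as the within-wave-packet variance, summed over the \(2N\) phase-space components and averaged over wave-packets; a minimal sketch (the array layout is an assumption of this sketch) reads

```python
import numpy as np

def d_J(z_traj, hbar):
    """Spreading d_J(t) of eq. (46).

    z_traj : array of shape (n_packets, n_samples, 2N, n_steps) holding the
             phase-space trajectories z = (sigma, Pi), with n_samples initial
             conditions drawn from each coherent wave-packet (the layout is
             an assumption of this sketch).
    """
    n_packets, n_samples, two_N, n_steps = z_traj.shape
    var = z_traj.var(axis=1)          # variance over samples, per wave-packet
    return var.sum(axis=1).mean(axis=0) / (two_N * hbar)
```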
## Appendix C Correlation function on a longer time-scale
In this Appendix, we discuss some results obtained from the dynamics of the \(p\)-spin model on larger time-scales. In particular, we integrate the dynamics according to the protocol described in Section 2 of the main text, and compute the corresponding correlation function \(C(t,t^{\prime})\) in eq. (16). Here the averages are performed over fewer realizations than in the main text. The results in Fig. 6 show that the correlation function seems to break time-translation invariance for energy densities between \(-0.25\) and \(-0.42\), even though the profiles of \(C(t_{w}+\tau,t_{w})\), for various \(t_{w}\), are less smooth due to the noise induced by the fewer realizations we took. The results shown in Fig. 6 are anyway compatible with the ones presented in the main text and are also qualitatively similar to the plots describing the correlation function of a classical spin-glass in the non-ergodic phase and in the large-\(N\) limit, see for example Fig. (10) of Ref. [18].
## Appendix D Details on the fidelity susceptibility
In this appendix, we prove the expression in eq. (27) of the main text for the average fidelity susceptibility \(\chi\). We also discuss how the qualitative profile of the fidelity \(\chi_{\mu}\), defined in eq. (30) and shown in Fig. 5-(a) and (b) of the main text, is independent both of the time window \([0,\mathcal{T}]\), over which we average the correlation function in eq. (28), and of the value of the cut-off \(\mu\), provided that the latter is sufficiently small.
We begin by recalling our definition of the fidelity susceptibility: we perturb the Hamiltonian \(\hat{H}_{J}=\frac{1}{2M}\sum_{i=1}^{N}\hat{\Pi}_{i}^{2}+V_{J}(\boldsymbol{\hat{\sigma}})\) of the PSM with local magnetic fields, as
\[\hat{H}(\boldsymbol{B})=\hat{H}_{J}-\sum_{i}B_{i}\hat{\sigma}_{i}\, \tag{47}\]
and define the local susceptibilities as
\[\chi_{n}^{(i)}=\Big{[}-\frac{\partial^{2}}{\partial B_{i}^{2}}\left\langle n|n( \mathbf{B})\right\rangle\Big{]}_{\mathbf{B}=0}\, \tag{48}\]
where \(|n(\mathbf{B})\rangle\) is the \(n\)-th eigenstate of the perturbed Hamiltonian \(\hat{H}(\mathbf{B})\) and \(|n\rangle=|n(0)\rangle\). By standard calculations of non-degenerate perturbation theory [80], we have
\[\chi_{n}^{(i)}=\sum_{m\neq n}\frac{|\left\langle n|\hat{x}_{i}|m\right\rangle|^{ 2}}{(E_{n}-E_{m})^{2}} \tag{49}\]
where \(E_{n}\) is the \(n\)-th unperturbed energy level of the Hamiltonian \(\hat{H}_{J}\). For the initial state \(\rho\) defined in eq. (7) of the main text, we define the fidelity susceptibility \(\chi\) as the weighted
Figure 6: Several plots of the correlation function \(C(t_{w}+\tau,t_{w})\), for different fixed values of the energy density \(E\). For all the panels, the data are obtained from a dynamics up to time \(t_{max}=320\), with the same simulation parameters described in Fig. 5-(b) of the main text. The data are presented on a log-log scale.
average
\[\chi=\frac{1}{N}\sum_{i=1}^{N}\sum_{n}\left\langle n|\rho|n\right\rangle\overline{\chi_{n}^{(i)}}=\frac{1}{N}\sum_{i=1}^{N}\sum_{n,m\neq n}|\left\langle n|\psi\right\rangle|^{2}\overline{\frac{|\left\langle n|\hat{x}_{i}|m\right\rangle|^{2}}{(E_{n}-E_{m})^{2}}} \tag{50}\]
performed over the sites, the eigenstates and the disorder configurations. Then, we observe that \(\chi\) is connected to the Fourier transform of the time-averaged correlation function
\[C_{av}(\tau)=\lim_{\mathcal{T}\rightarrow\infty}\frac{1}{\mathcal{T}}\int_{0} ^{\mathcal{T}}dt_{w}C(t_{w},t_{w}+\tau). \tag{51}\]
In particular, \(C(t_{w},t_{w}+\tau)\) and \(C_{av}(\tau)\) can be represented as follows:
\[\begin{split} C(t_{w},t_{w}+\tau)&=\frac{1}{N}\sum_ {i=1}^{N}\sum_{lmn}e^{i(E_{l}-E_{n})t_{w}}e^{i(E_{l}-E_{m})\tau}\left\langle l |\hat{x}_{i}|m\right\rangle\left\langle m|\hat{x}_{i}|n\right\rangle\left\langle \psi|l\right\rangle\left\langle n|\psi\right\rangle\,\\ C_{av}(\tau)&=\frac{1}{N}\sum_{i=1}^{N}\sum_{n}| \left\langle n|\psi\right\rangle|^{2}\sum_{m}e^{i(E_{n}-E_{m})\tau}|\left\langle n |\hat{x}_{i}|m\right\rangle|^{2}\.\end{split} \tag{52}\]
The second line leads immediately to the Lehmann representation of the average correlation function, which reads as3
Footnote 3: In our semi-classical framework, \(C(t_{w}+\tau,t_{w})\) is actually defined only for \(\tau>t_{w}\). To compute the Fourier transform, we first symmetrized \(C_{av}(\tau)\) with respect to \(\tau\).
\[\tilde{C}_{av}(\omega)=\int_{-\infty}^{\infty}d\tau e^{-i\omega\tau}C_{av}( \tau)=\frac{2\pi}{N}\sum_{i=1}^{N}\sum_{nm}\left\langle n|\rho|n\right\rangle| \left\langle n|\hat{x}_{i}|m\right\rangle|^{2}\delta(\omega-E_{n}+E_{m}). \tag{53}\]
The latter is immediately related to the typical susceptibility \(\chi\) in eq.(50) via the expression
\[\chi=\int_{|\omega|>\omega_{L}}\frac{d\omega}{2\pi}\frac{\tilde{C}_{av}( \omega)}{\omega^{2}}\, \tag{54}\]
where \(\omega_{L}\) is the average spacing of the unperturbed energy levels [50].
As stated in the main text, within the TWA framework we can not compute the fidelity directly from eq. (54) for two reasons: first, our simulations are performed up to a finite maximum time \(t_{max}\), so that we can perform the average in eq. (28) only on a finite time window \([0,\mathcal{T}]\), with \(\mathcal{T}<t_{max}\); second, in the limit \(\hbar\to 0\) we do not have access to the spacing \(\omega_{L}\) and have a frequency cut-off set by \(\Delta\omega=2\pi/t_{max}\). In Fig. 7 we show that the profile of \(\chi_{\mu}\) is qualitatively the same for a wide range of \(\mathcal{T}\) between \(0\) and \(t_{max}\), so that the first issue is actually irrelevant for our results.
## Appendix E Calculation of the complexity
In this appendix, we compute in detail the average number of stationary configurations of the dynamics induced by eq. (6) of the main text. Imposing the condition \(\Pi_{i}=0\) for each
\(i=1,\ldots,N\), it is easy to see that such stationary points at fixed energy density \(E=V_{J}(\mathbf{\sigma})/N\) coincide with the stationary points of the potential \(V_{J}(\mathbf{\sigma})\), determined by the equations 4
Footnote 4: To keep all the expressions of our intermediate results readable, we write all the formulas as if \(V_{J}(\sigma)\) was a 3-body interaction potential.
\[\left\{\begin{aligned} 0=&-\frac{p}{p!}\sum_{kl}J_{ikl} \sigma_{k}\sigma_{l}-pE\sigma_{i}\\ &\sum_{i}\sigma_{i}^{2}=N\end{aligned}\right. \tag{55}\]
In the spirit of Ref. [64], the average number of saddles at energy \(E\), defined by
\[\overline{\mathcal{N}(E)}=\int D\sigma\overline{\prod_{i}\delta \Big{(}-\frac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k}\sigma_{l}-pE\sigma_{i}\Big{)} \Big{|}\det\Big{(}-\frac{p(p-1)}{p!}\sum_{k}J_{ijk}\sigma_{k}-pE\delta_{ij} \Big{)}\Big{|}}\, \tag{56}\]
the overbar denoting the average over the disorder; the spherical constraint will from now on be hidden in the integration measure \(D\sigma=\delta(\sum_{i}\sigma_{i}^{2}-N)\prod_{i=1}^{N}d\sigma_{i}\).
Figure 7: Fidelity susceptibility \(\chi_{\mu}\) from eq. (30) of the main text, shown as a function of \(E\) for a fixed cutoff \(\mu=0.07\). The average time window for the correlation function in eq. (28) is set to \([0,\mathcal{T}]\), for several values of \(\mathcal{T}\). The data are obtained from a dynamics up to time \(t_{max}=320\), with the same parameters described in Fig. 5-(b) of the main text.
To compute \(\overline{\mathcal{N}(E)}\) in the large-\(N\) limit, several approximations are needed. The first one is to assume that there is no correlation between the last two terms on the right-hand side of eq. (56) [11, 64], that is
\[\overline{\delta\Big{(}-\frac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k} \sigma_{l}-pE\sigma_{i}\Big{)}\Big{|}\det\Big{(}-\frac{p(p-1)}{p!}\sum_{k}J_{ ijk}\sigma_{k}-pE\delta_{ij}\Big{)}\Big{|}}\simeq \tag{57}\] \[\overline{\delta\Big{(}-\frac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k} \sigma_{l}-pE\sigma_{i}\Big{)}}\;\overline{\Big{|}\det\Big{(}-\frac{p(p-1)}{p! }\sum_{k}J_{ijk}\sigma_{k}-pE\delta_{ij}\Big{)}\Big{|}}\]
so that the average of the \(\delta\)-function and that of the determinant can be computed independently of each other. We compute the averaged \(\delta\)-function by using the exponential representation
\[\overline{\delta\Big{(}-\frac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k} \sigma_{l}-pE\sigma_{i}\Big{)}}=\int\prod_{i}\frac{d\mu_{i}}{2\pi}\overline{ e^{-ip/p!\sum_{ikl}J_{ikl}\mu_{i}\sigma_{k}\sigma_{l}}}e^{ipE\sum_{j}\mu_{j} \sigma_{j}} \tag{58}\]
and averaging out the disorder after a proper symmetrization of the exponent. Taking into account the spherical constraint \(\sum_{i}\sigma_{i}^{2}=N\) and posing \(J=1\) for simplicity, we get
\[\overline{\delta\Big{(}-\frac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k} \sigma_{l}-pE\sigma_{i}\Big{)}}=\int\prod_{i}\frac{d\mu_{i}}{2\pi}\exp\Big{\{} -\frac{p}{4}\sum_{j}\mu_{j}^{2}-\frac{p(p-1)}{4N}(\sum_{j}\mu_{j}\sigma_{j})^ {2}+ipE\sum_{j}\mu_{j}\sigma_{j}\Big{\}} \tag{59}\]
To get rid of the term \((\sum_{j}\mu_{j}\sigma_{j})^{2}\), we perform an extra Hubbard-Stratonovich transformation and then perform straightforwardly all the remaining Gaussian integrals, posing again \(\sum_{i}\sigma_{i}^{2}=N\) in all our expressions. The final result is:
\[\overline{\delta\Big{(}-\frac{p}{p!}\sum_{kl}J_{ikl}\sigma_{k} \sigma_{l}-pE\sigma_{i}\Big{)}}\simeq\frac{1}{(2\pi)^{N/2}}\exp\Big{\{}-N \Big{(}E^{2}+\frac{1}{2}\log\frac{p}{2}\Big{)}\Big{\}} \tag{60}\]
up to a multiplicative constant which becomes irrelevant in the thermodynamic limit. The integration of the Jacobian factor is a bit more tricky. To perform it, we assume that the sign of the determinant, for any fixed configuration of the disorder, is given by the average number of negative eigenvalues of the corresponding Hessian matrix at energy density \(E\), which we write as \(Nk(E)\). In formulas, this is equivalent to:
\[\overline{\Big{|}\det\big{(}-\frac{p(p-1)}{p!}\sum_{k}J_{ijk} \sigma_{k}-pE\delta_{ij}\big{)}\Big{|}}\simeq\overline{\det\big{(}-\frac{p(p-1 )}{p!}\sum_{k}J_{ijk}\sigma_{k}-pE\delta_{ij}\big{)}}\cdot(-1)^{-Nk(E)} \tag{61}\]
\(k(E)\) being the average _fraction_ of negative eigenvalues. Once we got rid of the modulus, we rewrite the average of the determinant using a fermionic representation [81]:
\[\overline{\det\big{(}-\frac{p(p-1)}{p!}\sum_{k}J_{ijk}\sigma_{k} -pE\delta_{ij}\big{)}}=\int\prod_{j}d\psi_{j}d\overline{\psi}_{j}\overline{e^ {-p(p-1)/p!\sum_{ikl}J_{ikl}\sigma_{i}\overline{\psi}_{k}\psi_{l}}}e^{-pE\sum_ {i}\overline{\psi}_{i}\psi_{i}} \tag{62}\]
Integrating out the disorder and using again a Hubbard-Stratonovich transformation to get rid of the quartic fermionic terms (see Ref. [64] for more details about this step), we arrive at
\[\overline{\det\big{(}-\frac{p(p-1)}{p!}\sum_{k}J_{ijk}\sigma_{k} -pE\delta_{ij}\big{)}}\propto\int_{-i\infty}^{i\infty}dze^{NG(z)} \tag{63}\]
where \(G(z)=\frac{z^{2}}{p(p-1)}+\log(z-pE)\). Plugging everything together, we have that (up to an irrelevant prefactor)
\[\overline{\mathcal{N}(E)}=(-1)^{Nk(E)}\exp\Big{\{}-N\Big{(}E^{2}+\frac{1}{2} \log\frac{p}{2}\Big{)}\Big{\}}\int_{-i\infty}^{i\infty}dze^{NG(z)} \tag{64}\]
Thus, we are left with the computation of the integral
\[I_{\Gamma}=\int_{\Gamma}dze^{NG(z)} \tag{65}\]
along the imaginary axis \(\Gamma\) in the complex plane. In the thermodynamic limit \(N\to\infty\), this goal can be achieved by using the saddle-point method [82], which we briefly review in the following. First, we observe that, for generic \(z=x+iy\) (with \(x\), \(y\) real numbers), the function \(G(z)=u(x,y)+iv(x,y)\) can be decomposed into its real and imaginary parts as
\[\begin{cases}u(x,y)=\frac{1}{2}\log[(x-pE)^{2}+y^{2}]+\frac{x^{2}-y^{2}}{p(p-1) }\\ v(x,y)=\frac{2}{p(p-1)}xy+\arctan\frac{y}{x-pE}+\pi\Theta(pE-x)\text{ sign}(y)\end{cases} \tag{66}\]
In summary, the saddle point method states that if we find a deformation \(\gamma\) of \(\Gamma\) in the complex plane such that:
1. \(v(x,y)\) is constant over \(\gamma\),
2. \(u(x,y)\) has a global maximum along \(\gamma\) at some point \(z=z_{0}\),
3. \(G(z)\) is analytic in the closed domain encompassed by the curves \(\Gamma\) and \(\gamma\),
we have that
\[I_{\Gamma}=I_{\gamma}\equiv\int_{\gamma}dze^{NG(z)}\simeq\exp\big{(}NG(z_{0})+ o(N)\big{)}\, \tag{67}\]
where the last asymptotic relation holds in the \(N\to\infty\) limit and is known as the Laplace method [82]. It is easy to see that the first condition is equivalent to stating that \(\gamma\) is parallel to \(\nabla u(x,y)\), as the relation \(\nabla u\cdot\nabla v=0\) holds for the holomorphic function \(G(z)\). Then, if \(\nabla u(x,y)\) vanishes along \(\gamma\), it vanishes in the whole \(\mathbb{R}^{2}\) plane, so that any maximum of \(u(x,y)\) along \(\gamma\) is a stationary point of \(u(x,y)\) in \(\mathbb{R}^{2}\).
However, the choice of a suitable \(\gamma\) depends on the value of the energy density \(E\), because \(G(z)\) has a branch-cut on the half-line \(\{z=x|x<pE\}\) (see the expression of \(v(x,y)\) in eq (66)) and the position in the complex plane of the stationary points of \(u(x,y)\), given by
\[z_{\pm}(E)=\frac{p}{2}\bigg{(}E\pm\sqrt{E^{2}-\frac{2(p-1)}{p}}\bigg{)}\, \tag{68}\]
also depends on \(E\). In particular, we can identify four relevant energy windows, which we treat separately:
\[\begin{split} E<E_{th}\equiv-\sqrt{2(p-1)/p}\,& E_{th}<E<0,\\ 0<E<|E_{th}|\,& E>|E_{th}|\end{split} \tag{69}\]
Figure 8: **(Color online)** Color map in the complex plane representing the values assumed by the function \(v(x,y)\) defined in the appendix. Each panel corresponds to a different, fixed value of the energy density \(E\), each representative of one of the four, qualitatively different cases studied in the appendix. Here we plot the results for \(p=3\). The thick white line corresponds to the branch-cut of \(v(z)=v(x,y)\) in the complex plane, where \(z=x+iy\), while the black lines represent the integration contour \(\Gamma\) or its deformation \(\Gamma^{+}\cup\Gamma^{-}\). The blue and the red lines correspond to the level curves of \(v(z)\), passing respectively through \(z^{-}(E)\) and \(z^{+}(E)\). **(a)**\(E<E_{th}\).**(b)**\(E_{th}<E<0\).**(c)**\(0<E<|E_{th}|\).**(d)**\(E>|E_{th}|\).
For \(E<E_{th}\), we can take the curve \(\gamma\) as the level curve \(v(x,y)=v(z^{+}(E))=0\)5 (shown in Fig. 8-(a)). It is straightforward to show that \(\gamma\) is the only deformation of \(\Gamma\) along which \(u(x,y)\) displays a maximum, located at \(z=z_{+}(E)\). Thus for \(N\to\infty\) we have \(I_{\Gamma}\simeq\exp\{NG(z_{+}(E))\}\) and \(\overline{\mathcal{N}(E)}\simeq\exp\{N\Sigma(E)\}\), where
Footnote 5: To simplify notation, we will abuse notation by writing \(v(z)\) to represent \(v(\text{Re}(z),\text{Im}(z))\), and similarly for \(u(x,y)\). This allows us to write equations more compactly and avoid cluttering them with repetitive expressions.
\[\Sigma(E)=\frac{z_{+}(E)^{2}}{p(p-1)}+\log(z_{+}(E)-pE)-E^{2}-\frac{1}{2}\log \frac{p}{2}+\frac{1}{2} \tag{70}\]
where we set the phase \(k(E)=0\) in eq. (64), as \(\overline{\mathcal{N}(E)}\) has to be a positive real number. The physical meaning of having \(k(E)=0\) is that, in this energy range, the integral in Eq. (56) is dominated by local minima of the potential \(V_{J}(\mathbf{\sigma})\), where the Hessian is positive definite.
For \(E_{th}<E<0\), the only suitable deformation of \(\Gamma\) is \(\gamma=\gamma_{+}\cup\gamma_{-}\), where \(\gamma_{+}\) and \(\gamma_{-}\) are respectively the two level curves \(v(z)=v(z_{+}(E))\) and \(v(z)=v(z_{-}(E))\) of \(v(z)\). The two curves intersect respectively the points \(z_{+}(E)\) and \(z_{-}(E)\) in the complex plane (see Fig. 8-(b)), which in turn are maxima of \(u(x,y)\) along each of the two curves. Then, as \(u(z_{+})=u(z_{-})\) and \(v(z_{+})=-v(z_{-})\), the \(N\to\infty\) asymptotic value of \(I_{\Gamma}\) is the sum of two contributions:
\[I_{\Gamma}\simeq\frac{e^{iNv(z_{+})}+e^{-iNv(z_{+})}}{2}e^{Nu(z_{+})} \tag{71}\]
To give a physical interpretation to our result, let us first note that the function \(Nk(E)\), defined in eq. (61), is a non-negative integer. This is because \(Nk(E)\) is the average number of negative eigenvalues of the Hessian matrix associated with \(V_{J}(\mathbf{\sigma})\). Thus, even though the values of \(k(E)\) become dense in the interval \([0,1]\) as \(N\) approaches infinity, at any finite \(N\) we must always have
\[(-1)^{Nk(E)}=(-1)^{-Nk(E)}. \tag{72}\]
Since \(\overline{\mathcal{N}(E)}\) is a positive real number, any phase obtained from the integral \(I_{\Gamma}\) must compensate for the one coming from \(k(E)\). Therefore, somewhat heuristically, we can conclude that for all physically meaningful values of \(k(E)\), the following equalities hold:
\[e^{iNv(z_{+})}=(-1)^{Nk(E)}=(-1)^{-Nk(E)}=e^{-iNv(z_{+})} \tag{73}\]
and we write
\[I_{\Gamma}\simeq(-1)^{Nk(E)}\exp\{N\Sigma(E)\} \tag{74}\]
with
\[k(E)=\frac{p}{2\pi(p-1)}E\sqrt{E_{th}^{2}-E^{2}}+\frac{1}{\pi}\arctan\Big{(}\frac{\sqrt{E_{th}^{2}-E^{2}}}{E}\Big{)}. \tag{75}\]
and \(\Sigma(E)=\text{Re}[G(z_{+}(E))]\). Physically, having \(k(E)>0\) means that for \(E>E_{th}\) the contribution to the integral in eq. (56) is dominated by saddles having an extensive number of unstable direction: this is why in literature \(E_{th}\) is referred to as the _energy threshold_ where the local minima cease to dominate [10, 64].
For \(0<E<|E_{th}|\), the branch-cut crosses the integration path \(\Gamma\) and we cannot use the saddle-point method straightforwardly. Instead, we first notice that there exist two curves in the
complex plane, \(\gamma_{+}\) and \(\gamma_{-}\), passing respectively through the saddle points \(z_{+}(E)\) and \(z_{-}(E)\) (both being maxima of \(u(z)\) along such curves) and ending up in a point \(z=x_{1}(E)\) on the branch-cut.
However, we observe that if we split \(\Gamma\) in two curves, \(\Gamma^{+}\) and \(\Gamma^{-}\), defined in such a way that
\[\begin{split} I_{\Gamma^{+}}&=\int_{\Gamma^{+}}dze^{NG(z)}\equiv\int_{0}^{i\infty}dze^{NG(z)}-\int_{0}^{x_{1}(E)}dxe^{NG(x)}\\ I_{\Gamma^{-}}&=\int_{\Gamma^{-}}dze^{NG(z)}\equiv\int_{-i\infty}^{0}dze^{NG(z)}+\int_{0}^{x_{1}(E)}dxe^{NG(x)}\end{split} \tag{76}\]
then we have that \(I_{\Gamma}=I_{\Gamma^{+}}+I_{\Gamma^{-}}\) and that \(\Gamma^{+}\) and \(\Gamma^{-}\) can be respectively deformed onto \(\gamma^{+}\) and \(\gamma^{-}\), as shown in Fig. 8-(c). In this way, we can use the saddle-point method to evaluate \(I_{\Gamma^{+}}\) and \(I_{\Gamma^{-}}\) separately. As the equalities \(u(z_{+})=u(z_{-})\) and \(v(z_{+})=-v(z_{-})\) hold like in the previous case, we find once again that \(I_{\Gamma}\simeq(-1)^{Nk(E)}\exp\{N\Sigma(E)\}\), with \(k(E)\) given by eq. (75), \(\Sigma(E)=\text{Re}[G(z_{+}(E))]\) and for \(E\) in the range \([0,|E_{th}|]\).
Finally, for \(E>|E_{th}|\) both \(z_{+}(E)\) and \(z_{-}(E)\) lie on the branch-cut, like in Fig. 8-(d). By some algebraic manipulations, one can show that \(u(x,y)\) has an absolute maximum at the point \(z_{-}(E)\) along the level curves \(\gamma^{+}\) and \(\gamma^{-}\) of \(v(x,y)\) that intersect \(z_{-}(E)\), while \(u(x,y)\) has a minimum at the point \(z_{+}(E)\) along the level curves of \(v(x,y)\) that intersect \(z_{+}(E)\). As done for the case \(0<E<|E_{th}|\), we divide once again \(\Gamma\) into the curves \(\Gamma^{+}\) and \(\Gamma^{-}\), both ending in \(z_{-}(E)\), and deform each of them respectively along \(\gamma^{+}\) and \(\gamma^{-}\) (see Fig. 8-(d)). By observing that \(v(z)=\pm\pi\) respectively along \(\gamma^{+}\) and \(\gamma^{-}\), we conclude that \(k(E)=1\) in the energy range under examination, meaning that for \(E>|E_{th}|\) the majority of stationary points of \(V_{J}(\boldsymbol{\sigma})\) are local maxima. At the same time, the application of the Laplace method gives us:
\[\Sigma(E)=\frac{z_{+}(E)^{2}}{p(p-1)}+\log(z_{+}(E)-pE)-E^{2}-\frac{1}{2}\log \frac{p}{2}+\frac{1}{2}. \tag{77}\]
In summary, in the whole energy range studied we have
\[\Sigma(E)=\frac{z(E)^{2}}{p(p-1)}+\log(z(E)-pE)-E^{2}-\frac{1}{2}\log\frac{p} {2}+\frac{1}{2} \tag{78}\]
where \(z(E)=z_{+}(E)\) if \(E<|E_{th}|\) and \(z(E)=z_{-}(E)\) if \(E>|E_{th}|\), while the average stability index is given by
\[k(E)=\begin{cases}&0\,\ \text{if}\ E<E_{th}\\ &\frac{p}{2\pi(p-1)}E\sqrt{E_{th}^{2}-E^{2}}+\frac{1}{\pi}\arctan(\frac{ \sqrt{E_{th}^{2}-E^{2}}}{E})\,\ \text{if}\ |E|<|E_{th}|\\ &1\,\ \text{if}\ E>|E_{th}|\end{cases} \tag{79}\]
which is the result plotted in Figure 4 of the main text.
|
2301.05921 | Quantum Geometry of Expectation Values | We propose a novel framework for the quantum geometry of expectation values
over arbitrary sets of operators and establish a link between this geometry and
the eigenstates of Hamiltonian families generated by these operators. We show
that the boundary of expectation value space corresponds to the ground state,
which presents a natural bound that generalizes Heisenberg's uncertainty
principle. To demonstrate the versatility of our framework, we present several
practical applications, including providing a stronger nonlinear quantum bound
that violates the Bell inequality and an explicit construction of the density
functional. Our approach provides an alternative time-independent quantum
formulation that transforms the linear problem in a high-dimensional Hilbert
space into a nonlinear algebro-geometric problem in a low dimension, enabling
us to gain new insights into quantum systems. | Chaoming Song | 2023-01-14T14:01:41Z | http://arxiv.org/abs/2301.05921v2 | # Exact Density Functional and Uncertainty Principle
###### Abstract
The Hohenberg and Kohn theorems state that for many-body systems with fixed interactions, there is a universal density functional of energy. Despite considerable efforts to approximate this functional, searches for exact density functionals, such as the Levy-Lieb algorithm, are limited to numerical computations. It is generally believed that the analytical form of the density functional will never be known. In this letter, we generalize Heisenberg's uncertainty principle to relations for expectation values among an arbitrary set of physical observables. We show Hohenberg and Kohn theorems as a special case of our general framework and provide an explicit construction of the exact density functional. We demonstrate analytical forms of the density functional for systems with several particles and study their geometries in detail. Finally, we show that our framework provides a potential alternative formulation of quantum theory as a vast generalization of the uncertainty principle.
Density functional theory (DFT) is an important constituent of the modern approach to quantum many-body physics [1]. Consider systems with electrons under an external potential \(V\), with the Hamiltonian
\[\hat{H}=\hat{H}_{0}+\sum_{i}V_{i}\hat{n}_{i}, \tag{1}\]
where \(\hat{H}_{0}\) is the Hamiltonian involving kinetic and pair interaction terms, and \(\hat{n}_{i}\) and \(V_{i}\) are the density operator and external potential at site \(i\). Two celebrated Hohenberg and Kohn's (HK's) theorems [2] state that the system energy satisfies
\[E[n]=F[n]+\sum_{i}V_{i}n_{i}, \tag{2}\]
as a unique functional of particle density \(n_{i}\equiv\langle\hat{n}_{i}\rangle\), where \(F[n]\) is the HK functional of the external potential. Furthermore, the energy functional \(E[n]\) is bounded and uniquely saturated by the ground state. The HK functional \(F[n]\) depends only on \(\hat{H}_{0}\), and thus is universal and shared by any electron systems.
HK's original proof was non-constructive. Therefore, the exact form of the density functional \(F[n]\) is unknown. Despite computational advances over more than half a century, the search for the exact functional remains largely limited. A numerical algorithm for finding the exact density functional was initially proposed by Levy in 1979 [3; 4] and later by Lieb [5]. However, the Levy-Lieb constraint search algorithm does not provide an analytical solution to the exact density functional. In fact, a common belief in the community is that the explicit construction of the HK functional will never be found.
The bounded nature of Eq. (2) reminds us of Heisenberg's uncertainty principle introduced in the early days of quantum mechanics [6],
\[\Delta x^{2}\Delta p^{2}\geq\hbar^{2}/4, \tag{3}\]
where \(\Delta x^{2}\equiv\langle x^{2}\rangle-\langle x\rangle^{2}\) and \(\Delta p^{2}\equiv\langle p^{2}\rangle-\langle p\rangle^{2}\). The uncertainty principle (3) provides a non-linear bound of expectation values of physical observables and is saturated by the ground state of the harmonic oscillator. Inspired by this analogy, one wonders about the existence of a similar relation between \(\langle\hat{H}_{0}\rangle\) and \(\langle\hat{n}_{i}\rangle\) for Eq. (1), which in turn determines the density functional \(F[n]\) explicitly.
In this Letter, we provide a definite answer to this postulation by developing a comprehensive theory for systems with a family of Hamiltonians, which was recently introduced by the author as the eigenstate moduli problem [7]. We show the correspondence between any Hamiltonian family and its moduli space of operator expectation values, where the ground states are uniquely determined by the boundary of this moduli space. Furthermore, all eigenstates correspond to singularity sets of the mapping and are determined by a unique equation. This finding unifies Heisenberg's uncertainty principle and the DFT as special cases of our general framework. As a result, we provide a constructive proof of the HK theorems with an explicit construction of \(F[n]\). We demonstrate the explicit HK functional for systems with several particles and investigate their geometry in detail. Finally, we show that our framework does not rely on the introduction of a Hilbert space and provides a potential alternative quantum formulation as a vast generalization of Heisenberg's uncertainty principle.
_Moduli problem.-_ We provide a comprehensive theory of the moduli problem. Consider a Hilbert space \(\mathcal{H}\), and the corresponding real vector space of self-adjoint operators \(\mathcal{L}_{s}(\mathcal{H})\). For simplicity, we assume the Hilbert space \(\mathcal{H}\) has a finite dimension \(N\). We are interested in the operator subspace \(O\subset\mathcal{L}_{s}(\mathcal{H})\) of dimension \(M\), which includes the identity operator \(\hat{I}\in O\) as one of its elements. The operator subspace \(O\) defines a family of systems where Hamiltonians \(H(\lambda)\in O\) are parameterized by a \(M\)-dimensional real vector \(\lambda\). One may express \(\hat{H}(\lambda)=\sum_{i}\lambda_{i}\hat{H}_{i}\) in terms of a linear combination of a set of linearly-independent basis \(\hat{\mathbf{H}}\equiv\{\hat{H}_{i}\}\). The moduli problem aims to address the solution space of the Hamiltonian family \(H(\lambda)\). We start with the Schrodinger
equation
\[\hat{H}(\lambda)\psi=0, \tag{4}\]
where the eigenvalue term \(-E\hat{I}\) has been absorbed by a linear combination of basis. Instead of solving the eigenstate \(\psi\) of a fixed parameter set \(\lambda\), we consider the dual problem of solving \(\lambda\) for a fixed \(\psi\). We rewrite the Schrodinger equation (4) as
\[\lambda^{t}\mathbf{\hat{H}}\psi=0. \tag{5}\]
Since \(\lambda\) is real, we also require its conjugate equation, leading to
\[\lambda^{t}\mathcal{J}(\psi,\psi^{\dagger})=0, \tag{6}\]
where the \(M\times 2N\) rectangular matrix
\[\mathcal{J}(\psi,\psi^{\dagger})\equiv\left[\psi^{\dagger}\hat{H}_{i},\psi^{ \dagger}\hat{H}_{i}^{t}\right], \tag{7}\]
is the row concatenation over \(1\leq i\leq M\). For technical convenience, we will assume \(M<2N\) from now on. Equation (6) implies the existence of a non-trivial cokernel \(\lambda\) for the matrix \(\mathcal{J}\), reflecting the fact that it is dual to the Schrodinger equation (4). This cokernel condition has a geometrical interpretation. Consider a mapping \(\rho\) from \(\mathcal{H}\) to an \(M\)-dimensional moduli space \(\mathcal{M}\) consisting of the expectation values of the operator subspace \(O\), or
\[\rho(\psi,\overline{\psi})\equiv\left(\langle\hat{H}_{1}\rangle,\ldots,\langle \hat{H}_{M}\rangle\right), \tag{8}\]
by choosing a set of coordinates \(\{\langle H_{i}\rangle\}\), where \(\langle\cdot\rangle\equiv\langle\psi|\cdot|\psi\rangle\) denotes the expectation value of an operator. The moduli space \(\mathcal{M}\) is generally a semi-algebraic set (see Fig. 1). Consider the energy functional
\[E[\psi]=\langle\hat{H}(\lambda)\rangle=\lambda^{t}\rho. \tag{9}\]
The variational principle suggests that the stationary points of Eq. (9) correspond to the eigenstates \(\psi\) of \(\hat{H}\), satisfying
\[0=\lambda^{t}\frac{\delta\rho}{\delta\psi}=\lambda^{t}\mathcal{J}, \tag{10}\]
that recovers the cokernel condition (6), where the Jacobian \(\mathcal{J}=\delta\rho/\delta\psi\) is precisely the one defined in Eq. (7). The existence of a non-trivial cokernel of the Jacobian implies that the eigenstate family corresponds to the singularity set of the moduli space \(\mathcal{M}\)[8], where the parameter \(\lambda\) is the normal vector at the point \(\rho(\psi_{eigen})\). Moreover, all eigenstates form an algebraic variety. To see this, we notice that the cokernel condition (6) is equivalent to the vanishing of all \(M\times M\) minors, as
\[\det M_{I_{1},\ldots,I_{k},J_{1},\ldots,J_{k^{\prime}}}=0 \tag{11}\]
where \(M_{I_{1},\ldots,I_{k},J_{1},\ldots,J_{k^{\prime}}}\) represents the submatrix formed by selecting the \(\{I_{1},\ldots,I_{k}\}\)-th \(\psi\) columns and the \(\{J_{1},\ldots,J_{k^{\prime}}\}\)-th \(\overline{\psi}\) columns in Eq. (7), with \(k+k^{\prime}=M\) and \(k,k^{\prime}\leq N\). The minor condition (11) generates a homogeneous ideal \(\mathcal{I}\) over the polynomial ring \(\mathbb{C}[\{\psi,\overline{\psi}\}]\). Consequently, it determines the eigenstate spaces \(\mathcal{M}\equiv\mathbb{C}[\{\psi,\overline{\psi}\}]/\mathcal{I}\) as the corresponding homogeneous coordinate ring, and it also determines the geometry of their images, i.e., the eigenstate moduli. The explicit equations in terms of the expectation values \(\{\rho_{i}\}\) can be constructed as
\[\langle\det M_{I_{1},\ldots,I_{k},J_{1},\ldots,J_{k^{\prime}}},\langle\psi| \hat{H}_{i}|\psi\rangle-\rho_{i}\rangle\cap\mathbb{C}[\rho_{i}], \tag{12}\]
which eliminates \(\psi\) in favor of the affine coordinates \(\rho_{i}\). Equation (12) can be solved explicitly by Buchberger's algorithm using the Gröbner basis. Moreover, the eigenstate moduli have codimension one, implying that Eq. (12) reduces to a single algebraic relation
\[f(\rho_{1},\ldots,\rho_{M})=0, \tag{13}\]
in terms of the expectation values \(\rho_{i}\). The ground state (and the largest eigenvalue state) family of Eq. (4), if it exists, corresponds to the boundary \(\partial\mathcal{M}\) as a partial solution of Eq. (13). This is due to the fact that the smallest and largest eigenvalues are the global lower and upper bounds of the energy functional (9). In fact, the projective nature of the parameter set \(\lambda\) for \(M>2\) implies that the lower and upper bounds are connected, corresponding to a single closed boundary. Figure 1 illustrates the mapping \(\rho\) from the Hilbert space \(\mathcal{H}\) to the moduli space \(\mathcal{M}\). The boundary \(\partial\mathcal{M}\) offers a bound of the algebraic equation (13), and can be viewed as a generalization of Heisenberg's uncertainty principle (3), as we will demonstrate in the next section.
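As a purely numerical illustration of this boundary statement (added here for concreteness; the operators, dimension, and parameter values below are arbitrary assumptions, not part of the derivation), one can sample random states of a small two-operator family, map them to the moduli space, and check that the image of the ground state of \(\hat{H}(\lambda)\) supports the sampled cloud from below along the direction \(\lambda\):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                     # assumed Hilbert-space dimension
def rand_herm(n):                         # random Hermitian operator
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

H1, H2 = rand_herm(N), rand_herm(N)       # basis operators besides the identity

def rho(psi):                             # moduli map: state -> expectation values
    return np.array([np.vdot(psi, H1 @ psi).real, np.vdot(psi, H2 @ psi).real])

# Sample the moduli space M with random normalized states.
states = rng.normal(size=(20000, N)) + 1j * rng.normal(size=(20000, N))
states /= np.linalg.norm(states, axis=1, keepdims=True)
cloud = np.array([rho(psi) for psi in states])

# The ground state of H(lambda) = lam1*H1 + lam2*H2 minimizes lambda . rho,
# so its image sits on the boundary of M with normal vector lambda.
lam = np.array([0.7, -1.3])               # arbitrary parameter vector
w, v = np.linalg.eigh(lam[0] * H1 + lam[1] * H2)
gs_point = rho(v[:, 0])
print("min of lambda.rho over samples :", (cloud @ lam).min())
print("lambda.rho at the ground state :", lam @ gs_point)   # lower bound, saturated
```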
The mapping \(\rho\) is not injective, implying that a point in the moduli space \(\mathcal{M}\) might correspond to multiple \(\psi\) in \(\mathcal{H}\). However, if there is no degeneracy, the ground state is uniquely determined by the boundary \(\partial\mathcal{M}\) and vice versa. For the degenerate case, there exists a center of the operator subspace \(\mathfrak{z}(O)\), where any
Figure 1: Mapping \(\rho\) from the Hilbert space \(\mathcal{H}\) (grey domain) to the moduli space \(\mathcal{M}\) (dark grey domain). The solid and dashed curves in \(\mathcal{H}\) correspond to ground and excited states, where the former maps to the boundary \(\partial\mathcal{M}\).
\(\hat{L}\in\mathfrak{z}(O)\) commutes with the Hamiltonian,
\[[\hat{L},\hat{H}(\lambda)]=0, \tag{14}\]
which defines a symmetry group \(Z\equiv\exp(\mathfrak{z}(O))\subset U(N)\). The moduli space \(\mathcal{M}\) is the image of \(\mathcal{H}\) modulo \(Z\). Nevertheless, one may include \(\mathfrak{z}(O)\) into the operator subspace \(O\) to distinguish degenerate ground states.
_Uncertainty Principle._ - Our approach can be further generalized to infinite dimensional Hilbert spaces. Consider a simple example of the harmonic oscillator with the Hamiltonian family,
\[\hat{H}(\lambda)=\lambda_{p2}\hat{p}^{2}+\lambda_{x2}\hat{x}^{2}+\lambda_{p} \hat{p}+\lambda_{x}\hat{x}+\lambda_{-E}\hat{I}. \tag{15}\]
The moduli space \(\mathcal{M}_{ho}\) of Eq. (15) is a four-dimensional semi-algebraic set for the expectation values \(\langle p^{2}\rangle\), \(\langle x^{2}\rangle\), \(\langle p\rangle\) and \(\langle x\rangle\), whose geometry is determined by Heisenberg's uncertainty principle (3), as shown in Fig. 2. The ground state of Eq. (15) saturates this inequality, corresponding to the boundary of the moduli space \(\partial\mathcal{M}_{ho}\), as expected. The eigenstates correspond to the singularity set of the mapping \(\rho\), determined by the analytical equation \(f=\prod_{n=0}^{\infty}\left(1-\frac{\Delta x^{2}\Delta p^{2}}{((n+1/2)\hbar)^{2}}\right)=\cos\left(\frac{\pi\sqrt{\Delta x^{2}\Delta p^{2}}}{\hbar}\right)=0\), an infinite-dimensional analogue of the algebraic equation (13). By further including the operator \(\hat{x}\hat{p}+\hat{p}\hat{x}\) in the Hamiltonian family, we end up with the Schrödinger-Robertson uncertainty relation [9; 10].
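A minimal numerical check of this picture (an illustrative addition, assuming units \(\hbar=m=\omega=1\) and a truncated number basis) verifies that \(\Delta x^{2}\Delta p^{2}\geq 1/4\) for random superpositions of low-lying oscillator states and that the ground state saturates the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 40, 12                       # truncation size and number of populated levels (K << N)
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)        # annihilation operator in the number basis
x = (a + a.T) / np.sqrt(2)          # position, with hbar = m = omega = 1
p = 1j * (a.T - a) / np.sqrt(2)     # momentum

def var(op, psi):
    m1 = np.vdot(psi, op @ psi).real
    m2 = np.vdot(psi, op @ (op @ psi)).real
    return m2 - m1**2

for _ in range(5):
    psi = np.zeros(N, dtype=complex)
    psi[:K] = rng.normal(size=K) + 1j * rng.normal(size=K)   # support on low levels only
    psi /= np.linalg.norm(psi)
    print("Dx^2 Dp^2 =", var(x, psi) * var(p, psi))          # always >= 0.25

ground = np.zeros(N); ground[0] = 1.0
print("ground state:", var(x, ground) * var(p, ground))      # saturates the bound: 0.25
```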
In general, given a Hilbert space \(\mathcal{H}\) and the corresponding real vector space \(\mathcal{L}_{s}(\mathcal{H})\) of self-adjoint operators, we introduce the operator space category \(\mathbf{Op}\), whose objects are subspaces \(O_{i}\subset\mathcal{L}_{s}(\mathcal{H})\) including the identity operator \(\hat{I}\in O_{i}\). Morphisms are inclusions between subspaces. The category \(\mathbf{Op}\) contains all physically relevant information. It has the initial object \(\mathcal{I}_{Op}=\mathcal{L}_{s}(\mathcal{H})\) and terminal object \(\mathcal{T}_{Op}=\hat{I}\). We next introduce the category \(\mathbf{Ev}\) of the expectation value spaces, whose object \(E_{i}=\langle O_{i}\rangle\) is the space formed by the expectation values of \(O_{i}\), generally a real semi-algebraic set. Morphisms are linear projections between objects. There is a natural functor from the operator category \(\mathbf{Op}\) to the expectation value category \(\mathbf{Ev}\). Note that the construction of \(\mathbf{Ev}\) does not necessarily rely on the introduction of the Hilbert space \(\mathcal{H}\). To see this, we first construct the initial object \(\mathcal{I}_{Ev}\). After choosing the coordinates, \(\mathcal{I}_{Ev}\) is an affine variety formed by a Veronese-type embedding \(\psi\rightarrow\overline{\psi}_{i}\psi_{j}\equiv x_{ij}\), whose ideal is generated by a set of quadratic equations \(x_{ij}x_{kl}=x_{il}x_{kj}\) without involving \(\psi\). An object \(E_{i}\) in \(\mathbf{Ev}\) is determined by the linear projection \(\pi_{i}\) from the initial object \(\mathcal{I}_{Ev}\) to it, since the map \(\rho\) factors through \(\pi_{i}\) as \(\rho(\mathcal{H}\to E_{i})=\pi_{i}\circ\rho(\mathcal{H}\rightarrow \mathcal{I}_{Ev})\). All physical measurements can be read directly from \(\mathbf{Ev}\), either from the geometry of an object (ground-state expectations) or from the morphism from \(\mathcal{I}_{Ev}\) to it (excited-state expectations). Thus, the category \(\mathbf{Ev}\) contains all time-independent physics without involving the Hilbert space \(\mathcal{H}\). In this sense, our approach provides a potential new quantum formulation as a vast generalization of Heisenberg's uncertainty principle. Our framework can be further generalized to thermal ensembles by considering the following partition function
\[\mathcal{Z}(\lambda)\equiv\int e^{-\lambda^{t}\rho}d\mu(f[\rho]), \tag{16}\]
where \(\mu(f)\) is the invariant measure of Eq. (13). We leave the development of the time-dependent theory and generalizations to quantum field theory for future research.
_Density functional.-_ We demonstrate the DFT as a special case of our theory. Consider a system of \(n\) identical particles with the single-particle Hilbert space \(\mathcal{H}^{1}\). For simplicity, we require \(\mathcal{H}^{1}\) to be finite-dimensional with \(q\) states. The Hilbert space of the system is \(\mathcal{H}^{\pm}=\text{Sym}^{\pm}\otimes^{n}\mathcal{H}^{1}\), where \(\text{Sym}^{+}\) and \(\text{Sym}^{-}\) are the symmetrization and anti-symmetrization projections, with dimension \(N=\binom{q+n-1}{n}\) and \(\binom{q}{n}\) for bosons and fermions, respectively.
The DFT Hamiltonian family satisfies Eq. (1), with \(\hat{H}_{0}=\sum_{ij}\left(-t_{ij}a_{i}^{\dagger}a_{j}+U_{ij}\hat{n}_{i}\hat{n}_{j}\right)\), where \(a_{i}^{\dagger}\) and \(a_{i}\) are creation and annihilation operators, and \(\hat{n}_{i}\equiv a_{i}^{\dagger}a_{i}\). Since \(\sum_{i}\hat{n}_{i}=n\hat{I}\), the energy term \(E\) can be absorbed into the parameter set by \(V_{i}\to V_{i}-E/n\). The corresponding parameter is \(\lambda=(1,V_{1},\ldots,V_{q})\) with Hamiltonian basis \(\hat{\mathbf{H}}\equiv(\hat{H}_{0},\hat{n}_{1},\ldots,\hat{n}_{q})\) and \(M=q+1\), leading to the Jacobian
\[\mathcal{J}_{DFT}(\psi,\psi^{\dagger})\equiv\begin{bmatrix}\psi^{\dagger}\hat{ H}_{0},&\psi^{t}\hat{H}_{0}^{t}\\ \psi^{\dagger}\hat{n}_{1},&\psi^{t}\hat{n}_{1}^{t}\\ \ldots,&\ldots\\ \psi^{\dagger}\hat{n}_{q},&\psi^{t}\hat{n}_{q}^{t}\end{bmatrix}. \tag{17}\]
Introducing \(F\equiv\rho_{0}=\langle\hat{H}_{0}\rangle\) and \(n_{i}\equiv\rho_{i}=\langle\hat{n}_{i}\rangle\) with \(1\leq i\leq q\), Eq. (13) defines the algebraic equation connecting
Figure 2: The moduli space \(\mathcal{M}_{ho}\) for the harmonic oscillator is represented by the grey domain and bounded by Heisenberg’s uncertainty principle. The solid curve represents the ground state moduli, saturating the uncertainty principle. Dashed curves correspond to excited states.
\(F\) to \(n_{i}\), providing an explicit construction of the HK functional.
To demonstrate a concrete example of this construction, we analytically solve Eq. (12) for small \((n,q)\)-systems. We start with the simplest case \(q=2\) for \(n=2\) bosons, with the Hamiltonian
\[\hat{H}_{0}=-t\left(a_{1}^{\dagger}a_{2}+a_{2}^{\dagger}a_{1}\right)+\frac{U}{ 2}\sum_{i=1,2}\hat{n}_{i}(\hat{n}_{i}-1)+U^{\prime}\hat{n}_{1}\hat{n}_{2}, \tag{18}\]
for fixed parameters \(t\), \(U\) and \(U^{\prime}\). This toy model has \(N=3\) states, implying there are three eigenstates. We find the explicit form of Eq. (13) as a homogeneous algebraic equation,
\[f(F,n_{1}\ldots,n_{q})=\sum_{r_{0}+\ldots+r_{q}=r}c_{r_{0},\ldots,r_{q}}n_{1}^ {r_{1}}\ldots n_{q}^{r_{q}}F^{r_{0}}=0, \tag{19}\]
with the degree \(r=6\), where \(c_{r_{0},\ldots,r_{q}}\) are real constants depending only on \(\hat{H}_{0}\) (see SM Eq. S1). Note that for this toy model, the density functional \(F[n]\) is a one-dimensional function since the sum \(n_{1}+n_{2}=2\) is fixed. Figure 3 visualizes Eq. (19) by plotting the density functional \(F[n]\) as a function of \(\Delta n=n_{2}-n_{1}\). The geometry of \(F[n]\) contains three curves, each corresponding to one of the three eigenstates. Note that the ground state and highest excited state correspond to the boundary and are connected smoothly as predicted by our theory. The first excited state corresponds to the inner singular curve with three cusps. The normal vector corresponds to \((2,V_{1}-V_{2})\). To test our theory, we generate 10,000 \((n_{1},n_{2},F)\) points by randomly sampling the wavefunction \(\psi\) and calculating numerically the corresponding expectation values. This allows us to visualize the moduli space \(\mathcal{M}\) numerically, as shown by the grey scatter points in Fig. 3. We find that \(\mathcal{M}\) is tightly bounded by the ground state moduli (blue curve).
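The random-sampling construction behind Fig. 3 can be reproduced with a few lines of linear algebra. The sketch below (an illustration added here; the \(3\times 3\) matrices follow from the basis \(|2,0\rangle,|1,1\rangle,|0,2\rangle\), and the sample size and seed are arbitrary) samples normalized states, evaluates \(F=\langle\hat{H}_{0}\rangle\) and \(\Delta n\), and compares the smallest sampled \(F\) with the closed form \(F_{0}\) given in the following paragraph:

```python
import numpy as np

t, U, Up = 1.0, 1.0, 0.0                          # parameters of Eq. (18): U/t = 1, U'/t = 0
s2 = np.sqrt(2.0)
# Basis |2,0>, |1,1>, |0,2> for two bosons on two sites.
T  = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])     # a1^dag a2 + a2^dag a1
n1 = np.diag([2.0, 1.0, 0.0])
n2 = np.diag([0.0, 1.0, 2.0])
H0 = -t * T + (U / 2) * (n1 @ (n1 - np.eye(3)) + n2 @ (n2 - np.eye(3))) + Up * n1 @ n2

rng = np.random.default_rng(2)
psi = rng.normal(size=(10000, 3)) + 1j * rng.normal(size=(10000, 3))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)

F  = np.einsum('ki,ij,kj->k', psi.conj(), H0, psi).real    # <H0>
dn = np.einsum('ki,ij,kj->k', psi.conj(), n2 - n1, psi).real

# Lowest F over the sampled moduli, compared with the stated closed form F0.
dU = U - Up
F0 = U - 0.5 * (dU + np.sqrt(dU**2 + (4 * t)**2))
print("sampled min F :", F.min())
print("analytic  F0  :", F0, " (= lowest eigenvalue:", np.linalg.eigvalsh(H0)[0], ")")
```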
To understand better the ground state moduli, we find analytically the lowest functional \(F_{0}=U-\frac{1}{2}\left(\Delta U+\sqrt{\Delta U^{2}+(4t)^{2}}\right)\) with \(\Delta U=U-U^{\prime}\) occurs at \(n_{1}=n_{2}\), corresponding to the normal vector \((1,0)\), i.e., \(V_{1}=V_{2}\). Increasing \(V_{1}\) or \(V_{2}\) drives the ground state to a higher value of \(F[n]\). Expanding \(F[n]=F_{0}+F_{1}n_{1}n_{2}+O((n_{1}n_{2})^{2})\) around \(F_{0}\) in terms of the power series of \(n_{1}n_{2}\), we find
\[F_{1}=\frac{4t^{2}\sqrt{\Delta U^{2}+16t^{2}}}{16t^{2}-2\Delta U\left(\sqrt{ \Delta U^{2}+16t^{2}}-\Delta U\right)}. \tag{20}\]
For a weak coupling \(\Delta U/t\ll 1\), we find \(F_{1}=-t^{2}-\Delta U/2+O(\Delta U^{2})\), which can be interpreted as the kinetic and the mean-field interaction term. However, the exact interaction expectation \(U_{1}-\Delta U\langle\hat{n}_{1}\hat{n}_{2}\rangle\) has a different prefactor, reflecting a hidden correlation \(\langle\hat{n}_{1}\hat{n}_{2}\rangle\neq\langle\hat{n}_{1}\rangle\langle\hat{n}_{2}\rangle\). Compared to the ground state, the excited state has a more interesting geometry, as shown by the green curve in Fig. 3. There are three cusps, where the middle one at \(n_{1}=n_{2}\) corresponds to the limiting cases \(V_{1}\rightarrow\infty\) or \(V_{2}\rightarrow\infty\). The other two cusps arise because \(F=\langle H_{0}\rangle\) and \(\langle n_{1}-n_{2}\rangle\) of the excited states do not change monotonically with the external potential \(V_{1}-V_{2}\). In fact, there is a critical value \(V_{1}-V_{2}=\Delta V_{c}\) at which both \(F\) and \(\Delta n\) reach their maximum values (see SM Fig. S1). Expanding around \(\Delta V_{c}\) to the third order, we find \(F\) and \(\Delta n\) satisfy the normal cusp equation \(y^{2}=x^{3}\) after a linear transformation.
What about larger systems? Equation (19) still holds; however, its degree \(r\) increases rapidly with system size. For example, by adding one more particle to the toy model (18), we find an equation of degree \(r=12\) (see SM Eq. S2 and Fig. S2). This observation implies that an explicit form of Eq. (19) may not be of practical use, not only because it is computationally infeasible to solve Eq. (12) for large systems, but also because an exponential number of terms occurs in the analytical form of the density functional (19). Nonetheless, our analytical solution will provide promising guidance for future developments in density functional approximations.
In conclusion, we develop a general theory for relations of expectation values among an arbitrary set of physical observables, unifying Heisenberg's uncertainty principle and HK theorems into a general framework. As a result of our theory, we provide an explicit construction of exact density functional. Our approach leaves many challenges and opportunities for future research. For instance, developing the time-dependent theory will
Figure 3: The density functional \(F[n]\) in units of \(t\) for the two-boson system (18) for a fixed \(U/t=1\) and \(U^{\prime}/t=0\). Curves are determined by Eq. (19), where colors correspond to three different eigenstates. Grey scatter points are a random sampling of the moduli space \(\mathcal{M}\), which is bounded by \(F[n]\). The black arrow represents the normal vector \((2,V_{1}-V_{2})\).
complete our framework as an alternative formulation of quantum theory. One also wonders about reconstructing Hilbert space from the proposed expectation value category, a question closely related to the recently developed geometrical formulation of quantum theory [11; 12]. In all, our approach provides a new perspective on quantum theory, especially for many-body systems, with potential implications for strongly correlated systems, quantum computational chemistry, and quantum field theory.
|
2305.06707 | A data-driven rutting depth short-time prediction model with
metaheuristic optimization for asphalt pavements based on RIOHTrack | Rutting of asphalt pavements is a crucial design criterion in various
pavement design guides. A good road transportation base can provide security
for the transportation of oil and gas in road transportation. This study
attempts to develop a robust artificial intelligence model to estimate
different asphalt pavements' rutting depth clips, temperature, and load axes as
primary characteristics. The experiment data were obtained from 19 asphalt
pavements with different crude oil sources on a 2.038 km long full-scale field
accelerated pavement test track (RIOHTrack, Road Track Institute) in Tongzhou,
Beijing. In addition, this paper also proposes to build complex networks with
different pavement rutting depths through complex network methods and the
Louvain algorithm for community detection. The most critical structural
elements can be selected from different asphalt pavement rutting data, and
similar structural elements can be found. An extreme learning machine algorithm
with residual correction (RELM) is designed and optimized using an independent
adaptive particle swarm algorithm. The experimental results of the proposed
method are compared with several classical machine learning algorithms, with
predictions of Average Root Mean Squared Error, Average Mean Absolute Error,
and Average Mean Absolute Percentage Error for 19 asphalt pavements reaching
1.742, 1.363, and 1.94\% respectively. The experiments demonstrate that the
RELM algorithm has an advantage over classical machine learning methods in
dealing with non-linear problems in road engineering. Notably, the method
ensures the adaptation of the simulated environment to different levels of
abstraction through the cognitive analysis of the production environment
parameters. | Zhuoxuan Li, Iakov Korovin, Xinli Shi, Sergey Gorbachev, Nadezhda Gorbacheva, Wei Huang, Jinde Cao | 2023-05-11T10:33:36Z | http://arxiv.org/abs/2305.06707v1 | A data-driven rutting depth short-time prediction model with metaheuristic optimization for asphalt pavements based on RIOHTrack
###### Abstract
Rutting of asphalt pavements is a crucial design criterion in various pavement design guides. A good road transportation base can provide security for the transportation of oil and gas in road transportation. This study attempts to develop a robust artificial intelligence model to estimate different asphalt pavements' rutting depth clips, temperature, and load axes as primary characteristics. The experiment data were obtained from 19 asphalt pavements with different crude oil sources on a 2.038 km long full-scale field accelerated pavement test track (RIOHTrack, Road Track Institute) in Tongzhou, Beijing. In addition, this paper also proposes to build complex networks with different pavement rutting depths through complex network methods and the Louvain algorithm for community detection. The most critical structural elements can be selected from different asphalt pavement rutting data, and similar structural elements can be found. An extreme learning machine algorithm with residual correction (RELM) is designed and optimized using an independent adaptive particle swarm algorithm. The experimental results of the proposed method are compared with several classical machine learning algorithms, with predictions of Average Root Mean Squared Error, Average Mean Absolute Error and Average Mean Absolute Percentage Error for 19 asphalt pavements reaching 1.742, 1.363, and 1.94% respectively. The experiments demonstrate that the RELM algorithm has an advantage over classical machine learning methods in dealing with non-linear problems in road engineering. Notably, the method ensures the adaptation of the simulated environment to different levels of abstraction through the cognitive analysis of the production environment parameters. It is a promising alternative method that facilitates the rapid assessment of pavement conditions and could be applied in the future to production processes in the oil and gas industry.
RIOHTrack, Rutting depth, Oil-gas transportation, RELM, Metaheuristic optimization
## I Introduction
Vigorously developing the oil and gas industry, energy transportation, and road transportation is one of the essential strategies to support the sustainable development of energy [1]. More importantly, the development of road transportation is one of the foundations for the oil and gas industry. The construction of long-life asphalt pavement is conducive to developing road transportation in the oil and gas industry and ensures the flexibility of oil and gas in road transportation [2]. Long-life asphalt pavements are designed for a service life of 40-50 years. One of their main characteristics is that the surface and intermediate layers resist rutting [3].
Structural damage to the pavement has always affected the service life of roads. Rutting is one of the significant defects of asphalt pavements, affecting roadway travel's quality and safety. This deformation is caused by permanent deformation and shear damage of the asphalt mixture under repeated heavy loads during warm or wet seasons [4]. This phenomenon is particularly prevalent on heavily traveled routes. The maintenance of asphalt pavements has now increased considerably due to traffic loads and environmental influences, with rutting being one of the most common defects. Maintenance related to rutting requires a great deal of effort from pavement engineers and road agencies. For this reason, developing asphalt pavements with a long service life has become a severe challenge to researchers. Long-life asphalt pavements have become an important trend in pavement development due to structural damage such as fatigue cracking and permanent deformation of conventional asphalt pavements.
Rutting of asphalt pavements occurs due to accumulated strain, which causes deformation along the wheel tracks of the pavement [5]. The plastic deformation formed along the wheel tracks can cause road users discomfort and reduce the pavement's service life. On the other hand, rutting on the pavement surface reduces the overall strength of the pavement and pavement structure. These pavement diseases can significantly reduce the pavement performance and seriously affect the quality of pavement use and service life [6, 7]. The selection of an appropriate asphalt binder has been reported to improve the resistance of hot mix asphalt (HMA) concrete when subjected to rutting, and fatigue [8, 9, 10]. Furthermore, laboratory testing of asphalt binders for rutting and fatigue requires sophisticated testing equipment that is not
readily available in developing countries in tropical regions. However, governments are burdened with providing high-quality pavement structures on a limited budget. In this case, applying predictive modeling will reduce project costs and provide a high-quality pavement structure. As the RIOHTrack experiments are very expensive, the rutting data for each type of road surface are limited. Using these data efficiently to predict rutting depth with high precision is a challenge.
Given the above situation, developing a robust artificial intelligence (AI) model for pavement rutting depth prediction is necessary. The research in this paper aims to develop and design a model called IAPSO-RELM and use an equivalent training set for predicting the rutting depths of 19 asphalt pavements with high accuracy. Because pavement data are difficult to collect and use efficiently, complex network methods can be used to build a graph network and mine the data effectively. Through community detection, the most critical structural elements can be selected from different asphalt pavement rut data, and similar structural elements can be found.
In this paper, we calculate the similarity of rutting depth variation of different asphalt pavements by grey correlation, construct a complex network of asphalt pavement rutting depths, and use the Louvain algorithm for community detection of the complex network of asphalt pavements to construct an equivalent training set. An Extreme Learning Machine (ELM) algorithm using residual correction is proposed for the rutting depth prediction task. The independent adaptive particle swarm algorithm is used to optimize the whole model, jumping out of the local optimum using independently adaptive parameters to obtain higher accuracy. Notably, with the current experimental design of RIOHTrack, the method can reduce the frequency of rutting experiments in the field without compromising its estimation accuracy.
We compare the proposed IAPSO-RELM model with several other methods, including machine learning and deep learning algorithms. The performance of these models is statistically evaluated and validated on RIOHTrack from the Institute of Highway Science, Ministry of Transportation, Beijing [11]. The contributions are summarized as follows:
* This is the first application of the Louvain algorithm to the problem of predicting the depth of rutting in asphalt pavements. A complex network of asphalt pavement rutting is constructed by calculating the similarity of the degree of rutting depth variation of different pavements through grey correlation. The Louvain algorithm is used for community detection to build an equivalent training set for rutting data of asphalt pavement materials of the same category. Compared with the strategy of using a single pavement for training or all kinds of pavements for training, the neural network algorithm improves by discovering the potential characteristics of the asphalt pavement rutting data, proving the effectiveness of the equivalent training set.
* A residual correction method with random initialization settings is proposed for the ELM algorithm. The residuals from the initial ELM model on the training set are learned by another ELM model, thus improving the stability and generalization ability of the ELM algorithm.
* Optimizing the RELM algorithm using the IAPSO algorithm can improve accuracy by over 1200% compared with the unoptimized algorithm. Compared to other widely used PSO algorithms, the RELM algorithm's training accuracy is improved by 20%.
* We apply complex network and artificial intelligence algorithms to predict asphalt pavement rutting depth. It can effectively use asphalt pavement rutting data of different materials to improve prediction accuracy, reduce the number of future tests and provide timely warning of asphalt pavement rutting damage.
The rest of this paper is organized as follows. In Section II, we review the related work. In Section III, the fundamental algorithms used in our work are introduced. In Section IV, experimental results are discussed and the prediction performance of the proposed algorithm is analyzed. In Section V, the conclusion of our work is drawn and future work is discussed.
## II Related Work
The problem of rutting on asphalt pavement has led many scholars to start their research from a data-driven perspective. Dong et al. [12] summarize the data methods commonly used in pavement engineering problems and point out that, in the face of the increasingly accumulated pavement data in the future, choosing the appropriate model and its combination will be critical research. In order to obtain more comprehensive and reliable road data, a full-scale track called RIOHTrack was built in Beijing, China. The experimental site measured and collected a large amount of reliable field data, such as rutting depth data and other pavement service data [13]. Huang et al. [14] use the field data collected by RIOHTrack, and the statistical properties of random variables in asphalt pavement are analyzed. Furthermore, a multiple linear regression algorithm is used to calculate the pavement rutting reliably. Liu et al. [15] propose a complex network method to evaluate the performance of RIOHTrack's longitudinal asphalt pavement by collecting data such as rutting and the international roughness index. Liu et al. [16] discuss the applicability of different rutting depth prediction models based on empirical mechanics equations for different asphalt pavement structures based on the field data of the RIOHTrack full-scale track. The data-driven approach provides a new method for pavement engineering research to analyze pavement performance and effectively improve calculation efficiency and accuracy, but it needs the support of a large amount of measured data.
Recently, robust artificial intelligence models have been developed continuously and have found preliminary applications. Advanced learning algorithms have been introduced to model pavement engineering applications. Zhang et al. [17] develop a field rutting depth prediction model based on the Random Forest algorithm with four input parameters: Hamburg Wheel Tracking Test (HWTT) rutting depth, pavement age, number of high-temperature hours, and annual average daily truck traffic (AADTT). The model can accurately predict field rutting depth based on a high coefficient of determination and a low standard error of estimation. Huang et al. [14] evaluate
the statistical characteristics of random variables in asphalt pavements using field data collected from asphalt pavement projects and find a unique formula with fairly good accuracy for evaluating the rutting life of asphalt pavements using multiple linear regression. Uwanuakwa et al. [18] propose the Gaussian Process Regression (GPR) algorithm to predict rutting and fatigue parameters for estimating the rutting and fatigue sensitivity of asphalt binders at medium and high pavement temperatures. The results show that the model performs better in estimating rutting parameters than fatigue parameters. Fang et al. [19] denoise the measured rutting depth index (RDI) by wavelet analysis and then apply an autoregressive moving average model (ARMA) from time series analysis to predict RDI. The method that combines wavelet analysis and a time series prediction model is referred to as the W-ARMA model in this study. Machine learning algorithms provide a data-driven approach to asphalt pavement rutting depth prediction, which can effectively reduce prediction errors and improve computational efficiency and accuracy, but they require the support of a large amount of measured data. Haddad et al. [20] develop a rutting prediction model with deep neural network (DNN) technology. The predictive capability of the DNN model is compared with a multiple linear regression model fitted using the same dataset. In addition, a generic series of rutting prediction curves corresponding to specific traffic, climate, and performance combinations is developed to make rutting predictions available to all road agencies. Majidifard et al. [4] use a convolutional neural network (CNN) to predict rutting depth in asphalt mixtures using deep learning techniques to simulate the HWTT. Gong et al. [21] use a DNN for predicting rutting depth in the MEPDG framework and develop a random forest model to identify the significance of the variables. Yao et al. [22] apply principal component analysis (PCA) to reduce the dimensionality of traffic variables and use neural networks (NN) to develop a rut depth prediction model.
In addition, based on machine learning research in engineering applications [23, 24, 25, 26, 27, 28], some researchers have proposed fusion models that combine meta-heuristic optimization with machine learning models to achieve better results. Researchers have applied this method to oil-gas systems and pavement engineering. Islam et al. [29] review several nature-inspired metaheuristic optimization techniques and their applications in well location optimization in oil-gas systems. Ng et al. [30] establish a data-driven model and integrate the PSO algorithm into the training of Support Vector Regression (SVR) and Multi-layer Perceptron (MLP). The developed model can estimate the oil production of wells in the Volvo oilfield. Qiao et al. [31] combine the MLP neural network technology with five metaheuristic calculation algorithms to accurately estimate the natural gas consumption in the United States. El-Yamany et al. [32] use a metaheuristic optimizer (water cycle optimization technology) to optimize the proposed hybrid photovoltaic/diesel system in the oil-gas industry. Xu et al. [33] propose an artificial neural network optimization method combined with a genetic algorithm. This method is used to optimize the steel-epoxy asphalt pavement structure of the Sutong Yangtze River Bridge. This method has been shown to improve fatigue reliability. Liang et al. [34] combine the newly developed multi-objective PSO algorithm with Gaussian process regression, which can effectively solve the design of the asphalt mixture ratio. Metaheuristic algorithms are combined with machine learning algorithms to improve the prediction accuracy for problems related to pavement engineering and oil-gas systems. This combination also provides an automated and intelligent design method for pavement engineering and oil and gas systems. The method is more effective for pavement engineering studies than traditional methods.
In contrast to the above work, this paper develops a robust, high-precision artificial intelligence model for predicting the rut depth of asphalt pavements. Moreover, it has good applicability to the tested asphalt pavements of RIOHTrack. The longitudinal nature of the rutting data will provide important insight into the short- and long-term service performance of asphalt pavements. The Louvain algorithm used in this paper to construct the equivalent training set also provides a method by which the field data collected from the field observatory can be used effectively.
## III Principles of Algorithms
### _Gray relation analysis_
Gray relation degree is a measure that describes the strength, magnitude, and order of relationships among factors of a system [35]. If the shape of the time series curves characterizing these factors is similar, the degree of correlation is high; otherwise, the degree of correlation is low. Since this method neither requires a high sample size nor a typical distribution pattern when analyzing data, and the analysis results are combined with quantitative and qualitative analysis, it is widely used. It has achieved remarkable success in many fields such as economy, society, agriculture, transportation, and mining.
Let \(X_{i}\) be a system factor and its observed data at time \(k\) is \(x_{i}(k)\), then the sequence of factor \(X_{i}\) is \(X_{i}=(x_{i}(1),x_{i}(2),...,x_{i}(n))\). \(X_{0}\) is the reference sequence, \(X_{1}\) is the comparison sequence. The sequences \(X_{0}\) and \(X_{1}\) are normalized, in order to eliminate the magnitudes, by the formula:
\[\hat{x}_{i}(k)=\frac{x_{i}(k)-min(X_{i})}{max(X_{i})-min(X_{i})}. \tag{1}\]
Calculate the relation coefficient between the reference series \(\hat{X}_{0}\) and \(\hat{X}_{1}\) by the formula:
\[r_{01}(k)=\frac{\underset{k}{min}\,\Delta_{1}(k)+\rho\,\underset{k}{max}\, \Delta_{1}(k)}{\Delta_{1}(k)+\rho\,\underset{k}{max}\,\Delta_{1}(k)}, \tag{2}\]
where \(\Delta_{1}(k)=|\hat{X}_{0}(k)-\hat{X}_{1}(k)|\), \(\underset{k}{min}\,\Delta_{1}(k)\) is the minimum value of \(\Delta_{1}(k)\), \(\underset{k}{max}\,\Delta_{1}(k)\) is the maximum value of \(\Delta_{1}(k)\), and \(\rho\) is the discriminant coefficient. The coefficient \(\rho\) is artificially given to attenuate the effect of distortion of too large the maximum absolute difference to improve the significance of the difference between the correlation coefficients and the artificially given coefficient, whose values range from \([0.1,1.0]\). In the present study, the practice of most scholars is taken as \(\rho=0.5\)[35, 36, 37, 38]. Moreover, the gray relation degree
\(\delta_{01}\) between the sequence \(X_{0}\) and the comparison sequence \(X_{1}\) is calculated by the formula:
\[\delta_{01}=\frac{1}{n}\sum_{k=1}^{n}r_{01}(k). \tag{3}\]
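For reference, Eqs. (1)-(3) can be implemented directly. The following sketch (function and variable names are our own; the two series are synthetic examples) computes the grey relation degree between two rutting depth sequences with \(\rho=0.5\):

```python
import numpy as np

def grey_relation_degree(x0, x1, rho=0.5):
    """Grey relation degree between reference series x0 and comparison series x1, Eqs. (1)-(3)."""
    x0, x1 = np.asarray(x0, float), np.asarray(x1, float)
    # Eq. (1): min-max normalization to remove the effect of magnitudes.
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    d = np.abs(norm(x0) - norm(x1))                 # Delta_1(k)
    # Eq. (2): grey relation coefficient at each time step.
    r = (d.min() + rho * d.max()) / (d + rho * d.max())
    # Eq. (3): grey relation degree as the mean coefficient.
    return r.mean()

# Illustrative use on two synthetic rutting-depth series.
depth_a = np.array([1.2, 1.8, 2.5, 3.1, 3.6, 4.0])
depth_b = np.array([1.0, 1.7, 2.3, 3.0, 3.8, 4.3])
print(grey_relation_degree(depth_a, depth_b))
```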
### _Louvain Algorithm_
Node clustering of graphs is a powerful tool for extracting network information. The Louvain algorithm is one of the graph clustering algorithms and is a cohesive clustering algorithm based on the modularity theory proposed by Blondel et al. [39]. The modularity is expressed as:
\[Q=\begin{cases}\frac{1}{2m}\sum_{l=1}^{k}\sum_{\alpha\in C,\beta \in C}(A_{\alpha\beta}-\frac{\hat{d}_{\alpha}\hat{d}_{\beta}}{2m}),&c_{\alpha} =c_{\beta},\\ 0,&c_{\alpha}\neq c_{\beta}\end{cases} \tag{4}\]
where \(k\) is the number of detected subgroups, \(m\) is the number of edges in the network, \(c_{\alpha}\) and \(c_{\beta}\) are the communities to which nodes \(\alpha\) and \(\beta\) belong, respectively, and \(\hat{d}_{\alpha}\) and \(\hat{d}_{\beta}\) are the degrees of nodes \(\alpha\) and \(\beta\), respectively. The modularity increment is expressed as:
\[\Delta Q=\left[\frac{\sum in+2\hat{d}_{\alpha,in}}{2m}-\left(\frac{\sum tot+\hat{d}_{\alpha}}{2m}\right)^{2}\right]-\left[\frac{\sum in}{2m}-\left(\frac{\sum tot}{2m}\right)^{2}-\left(\frac{\hat{d}_{\alpha}}{2m}\right)^{2}\right] \tag{5}\]
where \(\sum in\) is the sum of the weights of the edges inside community \(c\), \(\sum tot\) is the sum of the weights of the edges incident to nodes in community \(c\), \(\hat{d}_{\alpha,in}\) is the sum of the weights of the edges from node \(\alpha\) to nodes in community \(c\), and \(m\) is the total number of edges in the network.
The Louvain algorithm process is divided into two main stages:
(1) First, the network is initialized and each node is treated as its own independent community. The whole network is then traversed in a particular order, and for each node \(i\), the change \(\Delta Q\) in modularity obtained by assigning \(i\) to the community of each of its neighbor nodes \(j\) is evaluated. If the largest \(\Delta Q>0\), node \(i\) is moved to the neighboring community that yields this largest gain; otherwise, it remains in its current community. This process is performed iteratively for all nodes in the network until the modularity no longer increases.
(2) The graph is then reconstructed from the partition obtained in the first stage: each community becomes a single node whose self-loop weight equals the sum of the weights of the edges inside that community, and the weight of an edge between two new nodes equals the sum of the weights of the edges between the corresponding communities. The first stage is repeated on the new graph until the modularity of the entire graph does not change again. The Louvain algorithm is shown in Algorithm 1.
```
0: The Initial Graph Structure: \(G_{0}=(V_{0},E_{0})\);
0: The Graph-Based Community Segmentation: \(G=(V,E)\);
1: Initialize the node's attribute data and add the Community attribute as the community identifier of the node;
2: Calculate the total number of nodes in the graph: \(N\);
3:while Node \(n\) is not the terminating node do
4:for\(i=1\) to \(N\)do
5: Query the neighboring nodes of node \(i\) for their Community attribute. denoted as \(C_{n}\);
6: Calculate the incremental modularity \(\Delta Q\) after node \(i\) has been moved to \(C_{n}\);
7: Record the neighbourhood community \(C_{max}\) that maximises the modularity;
8:if\(C_{i}\) in which node \(i\) is located is not the same as \(C_{max}\)then
9: Update the Community property of the node to \(C_{max}\);
10: Update the current community internal weights;
11:endif
12: Reconstructing the network structure based on the Community property of the nodes;
13:endfor
14:endwhile
15:return\(G=(V,E)\)
```
**Algorithm 1** Louvain Algorithm
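In practice, the two stages above need not be re-implemented from scratch. A possible sketch (assuming a recent `networkx` release that provides `louvain_communities`, and a small hypothetical grey-relation similarity matrix standing in for the real pavement data) builds a weighted graph and detects communities:

```python
import numpy as np
import networkx as nx

# Hypothetical symmetric grey-relation similarity matrix for a few pavement structures.
labels = ["STR1", "STR2", "STR3", "STR4", "STR5"]
S = np.array([[1.0, 0.9, 0.3, 0.2, 0.8],
              [0.9, 1.0, 0.4, 0.3, 0.7],
              [0.3, 0.4, 1.0, 0.9, 0.2],
              [0.2, 0.3, 0.9, 1.0, 0.3],
              [0.8, 0.7, 0.2, 0.3, 1.0]])

G = nx.Graph()
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        G.add_edge(labels[i], labels[j], weight=S[i, j])

# Louvain community detection on the weighted graph (available in networkx >= 2.8).
communities = nx.algorithms.community.louvain_communities(G, weight="weight", seed=0)
print(communities)   # e.g. [{'STR1', 'STR2', 'STR5'}, {'STR3', 'STR4'}]
```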
### _RELM_
ELM is a single-hidden-layer feedforward neural network (SLFN) algorithm [40]. Compared with traditional algorithms such as BP and RNN, it requires no iterative adjustment during training: the connection weights and thresholds between the input and hidden layers are assigned randomly, only the number of hidden-layer neurons needs to be set, and the output weights are obtained as a closed-form optimal solution. This algorithm has the advantages of simple parameter setting, fast learning speed, and high robustness [41].
Assuming that there are \(N\) sets of training samples \((X_{j},t_{j})\), where \(X_{j}=[x_{j1},x_{j2},...,x_{jn}]\in R^{n}\), \(t_{j}=[t_{j1},t_{j2},...,t_{jm}]\in R^{m},j=1,2,...,N\), the ELM hidden layer contains \(L\) neurons, and the output of the ELM is:
\[f(x)=\sum_{i=1}^{L}[\beta_{i}e(W_{i}X_{j}+b_{i})], \tag{6}\]
where \(e(\cdot)\) is the excitation function; \(W_{i}=[w_{i1},w_{i2},...,w_{in}]\) is the connection weight between the \(i\)-th neuron of the hidden layer and each input layer neuron; \(\beta_{i}=[\beta_{i1},\beta_{i2},...,\beta_{im}]^{T}\) is the connection weight between the \(i\)-th neuron of the hidden layer and each output layer neuron; \(b_{i}\) is the threshold of the \(i\)-th neuron in the hidden layer. The ELM structure is shown in Fig. 1, where \(M\) is the number of nodes in the output layer.
In order to minimize the output error of the single hidden layer and achieve the learning objective, Eq. 6 can be expressed as:
\[H\beta=T. \tag{7}\]
The ELM model can be expressed as the following optimization problem:
\[\beta^{\dagger}=\underset{\beta}{argmin}(\frac{1}{2}\|\beta\|^{2}+\frac{C}{2}\|H \beta-T\|^{2}), \tag{8}\]
where \(\beta\) is the connection weight matrix of the hidden layer and the output layer; \(\beta^{\dagger}\) is the solution of the Eq.8; \(T\) is the desired output; \(H\) is the output matrix of the hidden layer of the neural network; \(C\) is the regularization parameter. By the optimality condition, the optimal solution of the optimization Eq.8 is:
\[\beta^{\dagger}=\begin{cases}(H^{T}H+\frac{I}{C})^{-1}H^{T}T,&N\geq L,\\ H^{T}(HH^{T}+\frac{I}{C})^{-1}T,&N<L,\end{cases} \tag{9}\]
where \(I\) is the unit matrix.
Since both \(W\) and \(b\) are assigned randomly in ELM, its ability to generalize to unknown inputs can be weak. This paper proposes an RELM network structure to improve the ELM algorithm. Accuracy and stability are improved by another ELM network, named the residual ELM, which learns the residuals of the original ELM on the training set. The residual \(T_{r}\) of the original ELM on the training set is:
\[T_{r}=T-H_{o}\beta^{\dagger}_{o}, \tag{10}\]
where \(H_{o}\) and \(\beta^{\dagger}_{o}\) are the parameters of the original ELM. Next, the training set \((X,T_{r})\) is constructed. Similarly, the \(\beta^{\dagger}_{r}\) of the ELM model that learns the residuals is found by Eqs. 7-9. Ultimately, the RELM prediction result is:
\[\hat{T}=H_{o}\beta^{\dagger}_{o}+H_{r}\beta^{\dagger}_{r} \tag{11}\]
where \(\hat{T}\) is the RELM prediction result, \(H_{r}\) and \(\beta^{\dagger}_{r}\) are parameters of the ELM model with learning residuals. The RELM algorithm is shown in Algorithm 2.
```
0: Training dataset \((X,T)\), Testing dataset \(\widetilde{X}\), original ELM and residuals ELM;
0: RELM prediction result \(\hat{T}\);
1: Initialize parameters \(W_{o}\), \(b_{o}\), and \(C_{o}\) of original ELM;
2: Calculate \(\beta^{\dagger}_{o}\) by Eq.9;
3: Calculate \(T_{r}\) by Eq.10;
4: Initialize parameters \(W_{r}\), \(b_{r}\), and \(C_{r}\) of residuals ELM;
5: Calculate \(\beta^{\dagger}_{r}\) by Eq.9;
6: Calculate \(\hat{T}\) by Eq.11
7:return\(\hat{T}\);
```
**Algorithm 2** RELM algorithm
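A compact NumPy sketch of Algorithm 2 is given below, assuming a sigmoid excitation function and the \(N\geq L\) branch of Eq. (9); the hyperparameter values and the synthetic data are placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_fit(X, T, L, C, rng):
    """One ELM: random input weights/thresholds, closed-form output weights (Eq. 9, N >= L)."""
    W = rng.uniform(-1, 1, size=(L, X.shape[1]))
    b = rng.uniform(-1, 1, size=L)
    H = sigmoid(X @ W.T + b)
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W.T + b) @ beta

def relm_fit_predict(X, T, X_test, L=50, C=100.0, seed=0):
    rng = np.random.default_rng(seed)
    Wo, bo, beta_o = elm_fit(X, T, L, C, rng)               # original ELM
    Tr = T - elm_predict(X, Wo, bo, beta_o)                 # residuals on the training set (Eq. 10)
    Wr, br, beta_r = elm_fit(X, Tr, L, C, rng)              # residual ELM
    return elm_predict(X_test, Wo, bo, beta_o) + elm_predict(X_test, Wr, br, beta_r)  # Eq. 11

# Toy usage on a synthetic regression task.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
T = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=200)
print(relm_fit_predict(X, T, X[:5]))
```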
### _Independent Adaptive Particle Swarm Optimization (IAPSO)_
The PSO algorithm is a robust metaheuristic technique proposed by Eberhart and Kennedy (1995) based on the behavior of particles/social animals, like birds in a swarm [42]. The operating process of the PSO algorithm for finding the optimal solution is summarized as follows [43].
In the initial period of the algorithm, \(n\) particles are first randomly initialized in the D-dimensional search space. After \(k\) iterations, particle \(j\) has position \((x^{k}_{j1},x^{k}_{j2},...,x^{k}_{jD})\) and velocity \((v^{k}_{j1},v^{k}_{j2},...,v^{k}_{jD})\) and moves through the search space. Meanwhile, the individual best position \(p^{k}_{jbest}\) and the global best position \(g^{k}_{best}\) are recorded, and then the states of the population are updated. In each iteration, the velocity and position of particle \(j\) can be updated as:
\[v^{k+1}_{jd}=\omega v^{k}_{jd}+c_{1}r(p^{k}_{jbest}-x^{k}_{jd})+c_{2}R(g^{k}_{best}-x^{k}_{jd}) \tag{12}\]
\[x^{k+1}_{jd}=x^{k}_{jd}+v^{k+1}_{jd} \tag{13}\]
where \(v^{k}_{jd}\) is the velocity of the \(j\)-th particle in the \(d\)-th dimension of the search space at the \(k\)-th iteration, \(v^{k+1}_{jd}\) is the velocity at the \((k+1)\)-th iteration, \(x^{k}_{jd}\) is the position of the \(j\)-th particle in the \(d\)-th dimension of the search space at the \(k\)-th iteration, \(x^{k+1}_{jd}\) is the position at the \((k+1)\)-th iteration, \(\omega\) is the inertia weight, \(c_{1}\) and \(c_{2}\) are acceleration factors, \(r\) and \(R\) are random numbers on [0,1], \(p^{k}_{jbest}\) is the historical optimal position of the \(j\)-th particle at the \(k\)-th iteration, and \(g^{k}_{best}\) is the historical optimal position of the population at the \(k\)-th iteration.
In the evolutionary process, as the evolutionary ability among particles, the overall evolutionary ability of the population, and the problems solved are different, these parameter settings are fully considered in this paper so that the parameters can be adaptively adjusted in different situations.
\(\bullet\) Evolutionary Capacity of the Particle
In the evolutionary process, the evolutionary capacity of the particle is defined as the ability of a particle to find a better solution compared to other particles, which is calculated as follows:
\[E^{t}_{i}=\left(p^{t}_{lworst}-f^{t}_{i}\right)/\left(p^{t}_{lworst}-p^{t}_{ lbest}\right) \tag{14}\]
where \(f^{t}_{i}\) is the fitness value of the \(i\)-th particle in generation \(t\); \(p^{t}_{lworst}\) and \(p^{t}_{lbest}\) are the worst and best fitness values among all particles
Fig. 1: The structure of ELM.
of generation \(t\). Therefore, \(E_{i}^{t}\) is the evolutionary ability of particle \(i\) in generation \(t\).
\(\bullet\) Evolutionary Capacity of the Population
In the evolutionary process, the evolutionary capacity of the population is defined as the ability of all particles in the aggregate to find a better solution than the current optimal solution, given as follows:
\[E_{g}^{t}=p_{gbest}^{t}-p_{gbest}^{t-1} \tag{15}\]
where \(E_{g}^{t}\) is the evolutionary capacity of the \(t\)-th generation of the population.
\(\bullet\) Evolution rate
In particle evolution, the evolution rate of a particle is defined as the magnitude of the particle's ability to evolve in a population. The evolutionary rate formula for the evolutionary capacity of a particle in a population is as follows:
\[R_{i}^{(t+1)}=1/\sqrt{(E_{g}^{t}+E_{i}^{t})^{2}+(1-2E_{g}^{t}E_{i}^{t})}, \tag{16}\]
where \(R_{i}^{(t+1)}\) is the evolution rate of particle \(i\) at the \((t+1)\)-th generation.
If both the evolutionary capacity of the particle and the evolutionary capacity of the population are strong, and the particle evolution rate is low, then the particle will inherit the evolutionary ability of the previous generation of particles more in the next generation.
In the PSO algorithm, the setting of inertia weights determines the exploration and searchability of the particles, which plays a crucial role in the algorithm's performance. The size of the particle's global and local search functions can be adjusted to achieve a balance. If \(\omega\) is large, the particle has a robust global search capability; otherwise, the particle has a robust local search capability. The inertia weight is usually set by the strategy of linearly decreasing in a specific interval with the increase of the number of iterations, which is calculated as follows:
\[\omega_{i}^{t+1}=(1-R_{i}^{(t+1)})\omega_{max}+R_{i}^{(t+1)}\omega_{min} \tag{17}\]
where \(\omega_{max}\) and \(\omega_{min}\) are the upper and lower limits of the inertia weights, respectively.
The learning factors are two parameters that reflect the particle's learning from its own historical best and from the population's historical best, respectively. In this paper, \(c_{1}\) is designed as a decreasing function and \(c_{2}\) as an increasing function, so that the learning factors of each particle are adjusted in every generation according to the rate of change of its evolutionary ability. Compared with other classical learning-factor settings, this scheme preserves the particles' self-learning ability at the beginning of the iterations and enhances global search, while the adaptive learning factors guarantee social learning in the later stages, which facilitates accurate local search and lets each particle adjust independently. The learning factors are thus adjusted using different evolution rates so that each particle adapts its learning pattern to its own condition. The adjustment formula for the learning factors of particle \(i\) is as follows:
\[\left\{\begin{array}{l}c_{1i}^{t+1}=c_{1}^{\max}-\frac{c_{1}^{\min}\cdot \sin\left(M_{i}^{t+1}\frac{\pi}{2}\right)\cdot t}{T}\\ c_{2i}^{t+1}=c_{2}^{\max}+\frac{c_{2}^{\min}\cdot\sin\left(M_{i}^{t+1}\frac{ \pi}{2}\right)\cdot t}{T}\end{array}\right. \tag{18}\]
where \(c_{1}^{\max}\) and \(c_{1}^{\min}\) are the maximum and minimum values of the corresponding learning factors, respectively. \(c_{1i}^{t+1}\) is the learning factor \(c_{1}\) of the \(i\)-th particle in \((t+1)\)-th iteration. The particle should have a strong self-learning ability at the beginning of the iteration. In this case, the value of \(c_{1}\) is larger and \(c_{2}\) is smaller. If the evolution rate of particle \(i\) is small, the value of \(c_{1}\) of the particle will be a little larger than other particles, and \(c_{2}\) will be smaller, which is more conducive to the global search of the particle. As the number of iterations increases, the particle should have a strong social learning ability. At this point, the value of \(c_{1}\) becomes smaller, while the value of \(c_{2}\) becomes larger. If the evolution rate of particle \(i\) is larger, the particle has smaller \(c_{1}\) and larger \(c_{2}\) than other particles, which is more favorable to the local search of the particle. In this paper, the learning factor of each particle is adaptively adjusted according to the rate of change of the evolutionary ability of the particle.
In general, the Mean Square Error (MSE) can be used as the fitness function to evaluate the degree of evolution of particles, which is defined as follows:
\[E=\frac{1}{M}\sum_{i=1}^{M}(Y_{i}-y_{i}^{*})^{2} \tag{19}\]
where \(M\) represents the number of samples, \(Y_{i}\) is the actual value of the sample data, and \(y_{i}^{*}\) is the predicted value of the sample.
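The adaptive rules of Eqs. (14)-(18) can be collected into a single helper. The sketch below is our illustration only: the bounds are placeholders, and it assumes that \(M_{i}^{t+1}\) in Eq. (18) denotes the evolution rate \(R_{i}^{(t+1)}\) of Eq. (16):

```python
import numpy as np

def iapso_parameters(f_i, f_all, gbest_t, gbest_prev, t, T,
                     w_max=0.9, w_min=0.4, c1_max=2.5, c1_min=0.5, c2_max=2.5, c2_min=0.5):
    """Per-particle adaptive parameters following Eqs. (14)-(18) (bound values are placeholders)."""
    lworst, lbest = np.max(f_all), np.min(f_all)
    E_i = (lworst - f_i) / (lworst - lbest + 1e-12)               # Eq. (14): particle capacity
    E_g = gbest_t - gbest_prev                                    # Eq. (15): population capacity
    R_i = 1.0 / np.sqrt((E_g + E_i) ** 2 + (1 - 2 * E_g * E_i))   # Eq. (16): evolution rate
    w_i = (1 - R_i) * w_max + R_i * w_min                         # Eq. (17): adaptive inertia weight
    c1 = c1_max - c1_min * np.sin(R_i * np.pi / 2) * t / T        # Eq. (18), assuming M = R
    c2 = c2_max + c2_min * np.sin(R_i * np.pi / 2) * t / T
    return w_i, c1, c2

# Example: fitness values of a 5-particle swarm at generation t = 10 of T = 100.
fits = np.array([0.8, 0.5, 0.9, 0.4, 0.6])
print(iapso_parameters(f_i=fits[1], f_all=fits, gbest_t=0.4, gbest_prev=0.45, t=10, T=100))
```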
```
0: Algorithm iteration number \(T\), population size \(M\), initial maximum position \(x_{max}\), initial maximum velocity \(v_{max}\), \(w_{\max}\), \(w_{\min}\), \(c_{1}^{\max}\), \(c_{2}^{\max}\), \(c_{1}^{\min}\), \(c_{2}^{\min}\), training dataset \(data_{train}\) and test dataset \(data_{test}\);
0: Optimal position G, optimal fitness function value \(E\);
1:while\(t<T\)do
2:for\(i\in[1,M]\)do
3: Decode \(x_{i}\) and initialize the RELM algorithm parameters according to the decoding result of \(x_{i}\);
4: Input training dataset \(data_{train}\) for training the RELM algorithm:
5: Evaluate the fitness function value of \(x_{i}\), and update \(p_{i}^{t}\), \(p_{gbest}^{t}\), \(p_{lworst}^{t}\);
6: According to Eq.14-18 update \(E_{i}^{t}\), \(E_{g}^{t}\), \(M_{i}^{t+1}\), \(\omega_{i}^{t+1}\), \(c_{1i}^{t+1}\), \(c_{2i}^{t+1}\);
7: According to Eq.12-13, update the position and the velocity of each particle;
8:endfor
9:endwhile
10:return G, \(E\)
```
**Algorithm 3** IAPSO-RELM Algorithm
### _IAPSO-RELM_
The RELM algorithm has several main parameters, such as the connection weight between the hidden layer and input layer in the original ELM \(W^{(o)}\), the threshold of the hidden layer in the original ELM \(\textbf{b}^{(o)}\), the connection weight between the hidden layer and input layer in the residual ELM \(W^{(r)}\), and the threshold of the hidden layer in the residual ELM \(\textbf{b}^{(r)}\). The values of these parameters have a significant impact on the model's performance. Adjusting the parameters usually depends on empirical judgments and traversal experiments; the traditional methods are not effective and lack a theoretical basis. Therefore, this paper optimizes the parameters based on the IAPSO algorithm with MSE as the fitness function and retains the optimal population solution and the individual optimal solution for each iteration. The interaction of these two types of information will evolve toward the global optimum. Since the PSO may suffer from poor convergence accuracy, each particle in IAPSO has independent parameters, thus exhibiting better global optimization capability than the PSO. The IAPSO-RELM algorithm is shown in Algorithm 3.
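To make the decoding step of Algorithm 3 concrete, a particle position can be mapped to the random RELM parameters \(W^{(o)},\textbf{b}^{(o)},W^{(r)},\textbf{b}^{(r)}\) and scored with the MSE of Eq. (19). The sketch below is an illustrative, self-contained wiring with a flat encoding and placeholder sizes:

```python
import numpy as np

def relm_mse_fitness(particle, X, T, L=20, C=100.0):
    """Decode a particle into RELM weights/thresholds and return the training MSE (Eq. 19)."""
    n = X.shape[1]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Flat encoding: [W_o | b_o | W_r | b_r], each W is L x n and each b has length L.
    k = L * n
    Wo, bo = particle[:k].reshape(L, n), particle[k:k + L]
    Wr, br = particle[k + L:2 * k + L].reshape(L, n), particle[2 * k + L:]
    solve = lambda H, Y: np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ Y)
    Ho = sig(X @ Wo.T + bo); beta_o = solve(Ho, T)
    Hr = sig(X @ Wr.T + br); beta_r = solve(Hr, T - Ho @ beta_o)   # residual ELM
    pred = Ho @ beta_o + Hr @ beta_r
    return np.mean((T - pred) ** 2)

# Example: score one randomly initialized particle on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 4)); T = np.sin(X.sum(axis=1))
dim = 2 * (20 * 4 + 20)
print(relm_mse_fitness(rng.uniform(-1, 1, size=dim), X, T))
```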
## IV Experiment and Analysis
The measured data of rutting depth come from the full-scale pavement test loop road project of the Ministry of Transport, located in the southwest corner of the Beijing Highway Traffic Test Field. It has a 2.038-km-long full-scale field accelerated pavement testing track named RIOHTrack. The full-scale ring road includes 25 types of asphalt pavement structures. The layout of the pavement structure is shown in Fig. 2.
Nineteen main test pavement structures, named STR1-STR19, were set up in the test ring road to study and compare the long-term performance and evolution of asphalt pavement structures and materials with different combinations of structural rigidity. The thickness of the asphalt concrete structure layer of these pavement structures includes 12, 18, 24, 28, 36, and 48 cm (or 52 cm), which basically covers the thickness of all asphalt concrete structure layers of China's high-grade highways, as well as the flexible-base thickness of thick asphalt pavements. From the perspective of the type of base structure, it includes four typical structures: rigid base structure, semi-rigid base structure, flexible base structure, and full-thick asphalt pavement structure [11].
This paper uses measured data from 19 asphalt pavements with full-size pavement structures as the data source. The objective is to make short-time predictions of rut depths for different asphalt pavements. Data from the RIOHTrack experiment for the years 2017-2020 are used for training, and data from 2021 are used for testing. RMSE, MAE, and MAPE are used to evaluate the performance of the algorithms. These metrics reflect the difference between the true and predicted values and are the most commonly used performance indices in regression tasks. The smaller the index, the more accurate the prediction. These accuracy indices are defined in Table I.
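These indices can be computed with a few lines of NumPy; the helper below is an illustrative implementation of the formulas in Table I:

```python
import numpy as np

def regression_indices(y_true, y_pred):
    """MAE, RMSE and MAPE as defined in Table I."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    return mae, rmse, mape

print(regression_indices([10.2, 11.5, 12.9], [10.0, 11.9, 12.5]))
```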
### _Model Building_
The framework of the rutting depth prediction model is shown in Fig. 3. It consists of the following parts: data processing, construction of a complex network from rutting depths, construction of an equivalent training set by community detection, and data fitting.
The initial data of the 19 kinds of asphalt pavement mainly include cumulative load axis times, temperature, and rutting depth. New features such as characteristic load axis time, rate of change of load axis time, rate of change of temperature, and the difference in temperature change are generated by calculation. Sliding windows with a time step of 5 generate the rutting depth clips. Taking the STR4 asphalt pavement as an example, the specific characteristics are shown in Table II. Smoothing of the rutting depth data is used to remove noise from the data. In this paper, the Loess smoothing algorithm [44] with a span of 0.05 is used, and the smoothing results are shown in Fig. 4.
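A possible sketch of these two preprocessing steps is given below; it uses the LOWESS smoother from `statsmodels` as a stand-in for the Loess smoothing of [44] (with the span expressed through the `frac` argument) and generates rutting depth clips with a window length of 5 on a synthetic series:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def make_clips(series, window=5):
    """Sliding-window rutting depth clips: each sample holds the previous `window` depths,
    and the target is the next depth."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

# Synthetic noisy rutting-depth curve standing in for one STR structure.
t = np.arange(200, dtype=float)
depth = 0.02 * t + 0.4 * np.sin(t / 7) + np.random.default_rng(0).normal(0, 0.1, t.size)

smoothed = lowess(depth, t, frac=0.05, return_sorted=False)   # span of 0.05, as in the text
X, y = make_clips(smoothed, window=5)
print(X.shape, y.shape)    # (195, 5) (195,)
```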
| Index | Formula |
| --- | --- |
| MAE | \(\frac{1}{M}\sum_{m=1}^{M}\lvert y_{m}-\hat{y}_{m}\rvert\) |
| RMSE | \(\sqrt{\frac{1}{M}\sum_{m=1}^{M}(y_{m}-\hat{y}_{m})^{2}}\) |
| MAPE | \(\frac{100\%}{M}\sum_{m=1}^{M}\lvert\frac{y_{m}-\hat{y}_{m}}{y_{m}}\rvert\) |

TABLE I: Indicator description
Fig. 3: Flow chart of rutting depth prediction.
Fig. 2: Structure of RIOHTrack.
### _Analysis of community detection of rutting depth in asphalt pavements_
Several metrics are commonly used to assess the quality of clustering. Due to the data-driven nature of this task, there are no real-world labels for the grouped asphalt pavements, so we focus on intrinsic metrics [49]: the silhouette coefficient (SC), the Calinski-Harabasz index (CHI), and the Davies-Bouldin index (DBI). The SC is suitable for situations where the actual class information is unknown. For a sample set, its silhouette coefficient is the average of the silhouette coefficients of all samples, with values in \([-1,1]\); the closer the samples within a category and the farther apart the samples of different categories, the larger the silhouette coefficient. The DBI, also known as the classification fitness index, computes, for each group, its similarity with the most similar other group and averages these values; the smaller the value, the better the clustering result. The CHI is simple and straightforward to compute; the smaller the within-category covariance and the larger the between-category covariance, the higher the CHI and the better the clustering. The performance of the clustering algorithm is shown in Table IV.
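Given feature vectors for the pavement structures and the community labels from the Louvain step, all three intrinsic indices are available in `scikit-learn`; the snippet below is an illustrative call on synthetic stand-in data:

```python
import numpy as np
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 19 pavement feature vectors and their detected community labels.
features = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(3, 1, (9, 4))])
labels = np.array([0] * 10 + [1] * 9)

print("SC :", silhouette_score(features, labels))
print("CHI:", calinski_harabasz_score(features, labels))
print("DBI:", davies_bouldin_score(features, labels))
```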
As shown in Table IV, the method proposed in this paper has the best performance on the SC and CHI indicators. Compared with the K-means algorithm, only the DBI is slightly higher, but the K-means algorithm needs a pre-set number of cluster categories. The method proposed in this paper does not need to pre-set the number of clusters and still achieves an excellent clustering effect.
#### IV-B4 Performance of the equivalent training set for the same class after community detection
To verify the improvement in prediction accuracy brought by the equivalent training set after community detection, two comparison training sets are constructed: one with data from a single kind of asphalt pavement and one with data from all kinds of asphalt pavement. In this paper, IAPSO-RELM, RELM, ELM, ELM with RBF kernel (KELM-RBF) [50], Long Short-Term Memory (LSTM) [51], MLP [52], and Gated Recurrent Unit (GRU) [53] are used as test algorithms for the 19 kinds of asphalt pavements. The performance of the algorithms is shown in Tables V-VII.
As shown in Tables V-VII, for the neural network algorithms the equivalent dataset yields the highest accuracy among the three dataset settings. Compared with the single-type asphalt rutting dataset, the Average MAPE obtained by training with the equivalent dataset shows a maximum improvement, a minimum improvement, and an average improvement of 84.97%, 30.68%, and 56.94%, respectively. The Average RMSE maximum, minimum, and average improvements are 84.69%, 26.68%, and 53.26%, respectively, and the Average MAE maximum, minimum, and average improvements are 87.61%, 31.74%, and 58.69%, respectively. Compared with the dataset containing all kinds of asphalt pavement, the Average MAPE obtained by training
\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & Average MAPE & Average RMSE & Average MAE \\ \hline MLP & 4.13\% & 3.748924 & 2.7051608 \\ IAPSO-RELM & **1.94\%** & **1.7420248** & **1.3625498** \\ ELM & 4.73\% & 4.024188 & 3.6019975 \\ KELM-RBF & 4.54\% & 3.8532062 & 2.3950194 \\ RELM & 3.61\% & 2.9645706 & 2.5723887 \\ LSTM & 3.53\% & 2.7119437 & 2.1632274 \\ GRU & 2.71\% & 2.5059797 & 1.8047246 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: The performance comparison on the training set which is constructed with the equivalent dataset after community detection
\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & Average MAPE & Average RMSE & Average MAE \\ \hline MLP & 4.27\% & 3.8522825 & 2.841063 \\ IAPSO-RELM & **2.45\%** & **2.2968817** & **1.630076** \\ ELM & 5.22\% & 4.3113605 & 3.8856632 \\ KELM-RBF & 5.36\% & 4.6498991 & 4.0058338 \\ RELM & 3.97\% & 3.2541017 & 2.8225649 \\ LSTM & 4.75\% & 3.6816538 & 3.1584351 \\ GRU & 4.95\% & 3.9618938 & 3.2667651 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: The performance comparison on the training set which is constructed with all kinds of asphalt pavement data
Fig. 7: The structure of complex network after community detection.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & Average MAPE & Average RMSE & Average MAE \\ \hline MLP & 8.73\% & 6.592604 & 6.224704 \\ IAPSO-RELM & **3.66\%** & **3.078302** & **2.650691** \\ ELM & 9.84\% & 7.898636 & 7.501865 \\ KELM-RBF & 6.55\% & 5.25545 & 4.27216 \\ RELM & 7.83\% & 6.207217 & 5.751715 \\ LSTM & 23.45\% & 17.71548 & 17.46174 \\ GRU & 15.13\% & 11.48212 & 11.19007 \\ \hline \hline \end{tabular}
\end{table} TABLE V: The performance comparison on the training set which is constructed with a single kind of asphalt pavement data
with the equivalent dataset shows a maximum improvement, a minimum improvement, and an average improvement of 45.15%, 3.22%, and 18.38%, respectively. The Average RMSE maximum, minimum, and average improvements are 36.75%, 2.68%, and 17.52%, respectively, and the Average MAE maximum, minimum, and average improvements are 44.75%, 4.78%, and 18.77%, respectively. Training with equivalent datasets can therefore effectively improve the accuracy of neural network algorithms: the complex asphalt pavement network is effectively clustered by the Louvain algorithm, which enlarges the training set and reduces its noise.
Taking STR2 as an example, the prediction results of IAPSO-RELM and GRU under the different training set construction methods are shown in Fig. 8. As shown in Fig. 8, when using the equivalent training set for rutting depth prediction of asphalt pavement STR2, the proposed algorithm achieves higher accuracy and a better prediction of the rutting depth.
If the rutting depth short-time prediction model is satisfactory, the residuals should be distributed with zero mean. The residual analysis therefore includes a test of normality of the residuals and a test of whether their mean is zero. The IAPSO-RELM short-time prediction model was used to predict the rutting of the 19 pavements with the equivalent training set; the histogram of the prediction residuals is shown in Fig. 9, and the probability density plot of the residuals is shown in Fig. 10.
Fig. 10 shows that the prediction residuals of the model in this paper are close to a normal distribution. To test the distribution of the prediction residuals more rigorously, the K-S test [54] was used to check whether the residuals are normally distributed and thereby verify the model's reliability. Given the significance level \(\alpha=0.05\), the statistic calculated by the K-S test is 0.06775 and the p-value is 0.0694 (\(>0.05\)), so the null hypothesis is not rejected, which means that the predicted residuals of this model can be regarded as normally distributed.
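A sketch of this residual normality check, using `scipy.stats.kstest` against a normal CDF with the sample mean and standard deviation plugged in; `residuals` is a placeholder for the array of prediction errors, and the exact fitting choices of the original study are not reproduced here:

```python
import numpy as np
from scipy.stats import kstest

def ks_normality_check(residuals, alpha=0.05):
    """K-S test of the residuals against a normal CDF with fitted mean/std."""
    mu, sigma = np.mean(residuals), np.std(residuals, ddof=1)
    stat, p_value = kstest(residuals, "norm", args=(mu, sigma))
    # p > alpha: the hypothesis of normally distributed residuals is not rejected.
    return stat, p_value, p_value > alpha
```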
### _Performance of IAPSO-RELM in Uncertainty_
Discussing the uncertainty of machine learning models can help to understand the model's performance and help to create believable artificial intelligence. The uncertainty in the short-time prediction of rut depth for the model proposed in this
Fig. 8: Rutting depth prediction results of STR2 with IAPSO-RELM and GRU.
Fig. 9: Histogram of the predicted residuals of the model in this paper
paper mainly stems from the random initialization of the ELM algorithm for the weights, which leads to uncertainty in the prediction. For model uncertainty, predictions are made by conducting multiple experiments with different model initializations. The variance of multiple prediction results is calculated to represent the uncertainty of the model, and the calculation method is as follows:
\[\overline{\delta^{2}}=\frac{1}{M\times T}\sum_{i}^{M}\sum_{j}^{T}(x_{i}^{j}-\mu^ {i})^{2} \tag{20}\]
where \(T\) is the number of trials, \(M\) is the number of predicted samples, \(x_{i}^{j}\) is the predicted value of the \(i\)-th sample in the \(j\)-th trial, and \(\mu^{i}\) is the average value of the predicted value of the \(i\)-th sample.
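Eq. (20) reduces to a short NumPy computation once the predictions of the repeated trials are stacked into a matrix; in the sketch below, `preds` is assumed to have shape `(T, M)`, one row per retrained model, and the name is a placeholder:

```python
import numpy as np

def prediction_uncertainty(preds):
    """Eq. (20): preds has shape (T trials, M samples); returns the mean variance."""
    mu = preds.mean(axis=0)            # per-sample mean over the T repeated trials
    return np.mean((preds - mu) ** 2)  # averages over both trials and samples
```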
To verify the reduction in uncertainty achieved by the IAPSO-RELM model, the MLP, ELM, RELM, and IAPSO-RELM models are each trained and used for prediction 100 times with different initial weights. The uncertainty of the models on the training and test sets is calculated by Eq. (20) for the rutting depth prediction of the asphalt pavements in Group I. The lower the value, the lower the uncertainty of the model. The detailed performance is shown in Table VIII.
Table VIII shows that the weights of the ELM algorithm are randomly generated, resulting in relatively high uncertainty on both the training and test sets. The ELM algorithm with residual correction shows a significant reduction in uncertainty. Using IAPSO to optimize the RELM algorithm, the uncertainty of the model is further reduced, and the prediction is more stable. In follow-up work, the uncertainty indicator will be used as an optimization objective to improve the algorithm.
### _Performance of IAPSO_
We will predict the rutting depth of asphalt pavements in Group I. Firstly, the performance of IAPSO is analyzed. For IAPSO optimization, the initial parameters are set as follows: maximum iteration \(T=100\), number of population \(M=30\), \(x_{max}=1\), \(v_{max}=1\), \(w_{\max}=0.9\), \(w_{\min}=0.4\), \(c_{1}^{\max}=2\), \(c_{2}^{\max}=2\), \(c_{1}^{\min}=0.8\) and \(c_{2}^{\min}=0.8\). Moreover, the IAPSO is compared with some existing PSO algorithms with fixed weight [42], linear decreasing weight [55] and chaotic optimization [56] in the same initial particle distribution. Classical evolutionary algorithms such as Genetic Algorithm [57], Differential Evolution Algorithm [58], and Whale Optimization Algorithm [59] are also used as comparison algorithms. The corresponding fitness graph of different algorithms is shown in Fig.11. In the iterative process of these algorithms, we make some of the particles in the swarm mutate and escape from the initial range, to avoid falling into the local optimum. The performance comparison is shown in Table IX.
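For concreteness, the sketch below shows a generic PSO loop with iteration-dependent inertia weight and acceleration coefficients drawn from the parameter ranges quoted above. It only illustrates the mechanism: the independent adaptive update rule of IAPSO and the mutation/escape step mentioned in the text are not reproduced, and all names are placeholders.

```python
import numpy as np

def pso_minimize(fitness, dim, m=30, t_max=100, x_max=1.0, v_max=1.0,
                 w_max=0.9, w_min=0.4, c_max=2.0, c_min=0.8, rng=None):
    """Generic PSO loop with linearly varying w, c1, c2 (not the full IAPSO rule)."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(-x_max, x_max, (m, dim))
    v = rng.uniform(-v_max, v_max, (m, dim))
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(t_max):
        frac = t / t_max
        w = w_max - (w_max - w_min) * frac      # inertia weight decreases
        c1 = c_max - (c_max - c_min) * frac     # cognitive coefficient decreases
        c2 = c_min + (c_max - c_min) * frac     # social coefficient increases
        r1, r2 = rng.random((m, dim)), rng.random((m, dim))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -v_max, v_max)
        x = np.clip(x + v, -x_max, x_max)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```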
The results show that, compared with the other PSO algorithms, the IAPSO algorithm proposed in this paper attains the smallest fitness function value over all particles in the last iteration. It uses an independent adaptive parameter update method during the search to jump out of local optima and obtain a lower fitness function value. The effectiveness of IAPSO is demonstrated by an improvement in training accuracy of about 20% over the other PSO algorithms.
\begin{table}
\begin{tabular}{l l} \hline \hline Algorithm & MSE \\ \hline Fixed weight & 10.678230310134028 \\ Linear decreasing weight & 10.482436613266115 \\ Chaotic optimization & 10.7283101256131255 \\ IAPSO in this paper & **8.39581284204803** \\ Differential Evolution Algorithm & 9.8806362595563475 \\ Genetic Algorithm & 12.2193002334303883 \\ Whale Optimization Algorithm & 10.795870458149949 \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Optimization comparison among different evolutionary algorithms
Fig. 11: Fitness graph of different evolutionary algorithms.
\begin{table}
\begin{tabular}{l l l} \hline \hline Algorithm & \(\overline{\delta^{2}}\) of test set & \(\overline{\delta^{2}}\) of train set \\ \hline MLP & 25.9547 & 6.7201 \\ ELM & 25.1153 & 4.3990 \\ RELM & 19.3632 & 2.4814 \\ IAPSO-RELM & **3.7318** & **0.2592** \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: The performance comparison on uncertainty of different Algorithms
Fig. 10: The predicted residual probability distribution of the model in this paper
### _Rutting Depth Prediction for Different Types of Asphalt Pavements_
To illustrate the generality of our algorithm, we consider the prediction of rutting depth under load tests on the 19 asphalt pavements. To validate the effectiveness of the IAPSO-RELM algorithm with the equivalent training set after community detection in the short-time rutting depth prediction task, we compare it with models commonly used for predicting rutting depth, namely XGBoost [60], SVR [61], K-Nearest Neighbor (KNN) [62], Random Forest (RF) [63], Decision Tree (DT) [64], RELM, ELM, KELM-RBF, LSTM, MLP, and GRU, as baseline models. The machine learning baselines are implemented using the scikit-learn [65] library for Python with the default recommended parameters. For LSTM and GRU, we set up nine input nodes, 128 hidden nodes, and one output node; the models are trained on the same training dataset with a learning rate of 0.01, a batch size of 128, and a maximum of 500 epochs, using the stochastic gradient descent (SGD) algorithm to solve the model parameters. All comparison algorithms are also tested using the same equivalent training set.
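A minimal sketch of a recurrent baseline with the hyper-parameters quoted above (nine input features, 128 hidden nodes, one output node, SGD with learning rate 0.01, batch size 128, up to 500 epochs). The deep learning framework is not specified in the text, so PyTorch is an assumption here, and the class name, data loader, and tensor shapes are placeholders:

```python
import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    def __init__(self, n_features=9, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # predict rutting depth from the last step

def train(model, loader, epochs=500, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:             # loader yields mini-batches of size 128
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
    return model
```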
In the paper, the performances of the IAPSO-RELM algorithm with the equivalent training set after community detection, XGBoost, SVR, KNN, RF, RELM, ELM, KELM-RBF, LSTM, MLP, and GRU on RMSE, MAE, and MAPE are compared as shown in Fig.12-14.
Fig. 14: Boxplots of MAPE with different algorithms
Fig. 12: Boxplots of RMSE with different algorithms
Fig. 13: Boxplots of MAE with different algorithms
The comparative results of the algorithms are shown in Table X, which indicates that the proposed algorithm has the best performance among all the compared algorithms in RMSE, MAE, and MAPE for the rutting depth short-time prediction task on the 19 asphalt pavement datasets. The RELM algorithm outperforms the XGBoost, KNN, SVR, ELM, KELM-RBF, MLP, and RF algorithms in terms of prediction accuracy. The IAPSO algorithm is used to optimize the parameters of the RELM algorithm, and the accuracy is further improved. Compared with traditional machine learning algorithms and deep learning algorithms such as LSTM and GRU, the IAPSO-RELM algorithm improves the RMSE by a maximum of 78.30%, a minimum of 30.49%, and an average of 54.54%; it improves the MAE by a maximum of 81.05%, a minimum of 24.50%, and an average of 55.94%; and it improves the MAPE by a maximum of 79.05%, a minimum of 28.38%, and an average of 55.62%.
## V Conclusion
In response to the low rutting data of single asphalt pavement in the RIOHTrack test, an algorithm based on complex network community detection was proposed to construct an equivalent data set and improve the prediction accuracy of neural network-like algorithms. Concerning the weak generalization ability and low prediction accuracy of ELM-based prediction models in asphalt pavement rutting prediction, an ELM algorithm using residual correction, RELM, was proposed, and the parameters of the RELM algorithm were optimized using the IAPSO algorithm, named the IAPSO-RELM algorithm.
In this study, the predictive performance of the IAPSO-RELM model was tested using an equivalent dataset constructed from 19 asphalt pavement materials. These data and works were collected and supported by the National Key Research and Development Program of China (2020YFA0714300). The algorithm outperformed the other comparative algorithms in terms of RMSE, MAE, and MAPE for all asphalt pavement materials. As the training sample size has a significant impact on the prediction performance of computational intelligence methods, their prediction performance can be improved by increasing the amount of training data. The test results show that the IAPSO-RELM model using equivalent data is effective in predicting the rutting depth of asphalt pavements.
By introducing prior knowledge and establishing a strong artificial intelligence model, the accuracy of short-term prediction of asphalt pavement rutting can be improved. To construct a rutting depth complex network of asphalt pavements, different levels of abstraction analyses are carried out for asphalt pavement materials that cannot be measured by themselves. Through community detection and analysis of rutting depth variation of 19 kinds of asphalt pavements, the similarity of different pavement structures was excavated.
In future work, we will focus on combining metaheuristic optimization and machine learning algorithms for oil and gas systems. As with rutting depth prediction, a credible prediction of key oil and gas system parameters must take into account a simulated environment in which multiple environmental parameters are coupled with internal system changes. This provides the basis for subsequent full-environment simulation and intelligent scheduling of oil and gas systems.
|
2303.02823 | On group rings of virtually abelian groups | Let $\Gamma$ be a finitely generated torsion-free group. We show that the
statement of $\Gamma$ being virtually abelian is equivalent to the statement
that the $*$-regular closure of the group ring $\mathbb{C}[\Gamma]$ in the
algebra of (unbounded) operators affiliated to the group von Neumann algebra is
a central division algebra. More generally, for any field $k$, it is shown that
$k[\Gamma]$ embeds into a central division algebra in case $\Gamma$ is
virtually abelian. We take advantage of this result in order to develop a
criterion for existence of units in the group ring $k[\Gamma]$. We develop this
criterion in the particular case of $\Gamma$ being the Promislow's group. | Joan Claramunt, Lukasz Grabowski | 2023-03-06T01:28:10Z | http://arxiv.org/abs/2303.02823v1 | # On group rings of virtually abelian groups
###### Abstract.
Let \(\Gamma\) be a finitely generated torsion-free group. We show that the statement of \(\Gamma\) being virtually abelian is equivalent to the statement that the \(*\)-regular closure of the group ring \(\mathbb{C}[\Gamma]\) in the algebra of (unbounded) operators affiliated to the group von Neumann algebra is a central division algebra. More generally, for any field \(k\), it is shown that \(k[\Gamma]\) embeds into a central division algebra in case \(\Gamma\) is virtually abelian. We take advantage of this result in order to develop a criterion for existence of units in the group ring \(k[\Gamma]\). We develop this criterion in the particular case of \(\Gamma\) being the Promislow's group.
2010 Mathematics Subject Classification: Primary 16S34, 20C07; Secondary 16E50 Both authors were supported by the ERC Starting Grant "Limits of Structures in Algebra and Combinatorics" No. 805495.
In the case of the Promislow's group, the division algebra \(\mathcal{R}(\Gamma)\) is 16-dimensional over its center, and the center is a 2-dimensional extension of the field of rational functions in 3 variables. First let us note that, irrespective of the unit conjecture, it is interesting to ask which groups \(\Gamma\) have the property that \(\mathcal{R}(\Gamma)\) is a central division algebra. The following theorem answers this question.
**Theorem** (Theorem 3.1).: _If \(\Gamma\) is a finitely generated torsion-free group, then \(\Gamma\) is virtually abelian if and only if the \(*\)-regular closure \(\mathcal{R}(\Gamma):=\mathcal{R}(\mathbb{C}[\Gamma],\mathcal{U}(\Gamma))\) of \(\mathbb{C}[\Gamma]\) in \(\mathcal{U}(\Gamma)\) is a central division algebra._
Let us now briefly describe how we develop a criterion for the existence of units in \(\mathbb{C}[\Gamma]\) in the general case when \(\Gamma\) is an arbitrary virtually abelian group. The details are contained in Section 4. Suppose we are given a polynomial \(p(x)\) of degree \(d\geq 2\) with coefficients in \(\mathrm{Z}(k[\Gamma])\), the center of \(k[\Gamma]\). Suppose further that there exists a root of \(p(x)\) in \(\mathrm{Z}(k[\Gamma])\), call it \(\kappa\). We can then write \(p(x)\) as the product \((x-\kappa)q(x)\) for some polynomial \(q(x)\) with coefficients also in \(\mathrm{Z}(k[\Gamma])\). Then if \(T\in k[\Gamma]\) is any element such that \(p(T)=1\), we obtain
\[1=(T-\kappa)q(T),\]
so \(T-\kappa\) is a unit in \(k[\Gamma]\), which may be trivial or not.
Let us focus on the case when \(\Gamma\) is the Promislow's group \(G\). First, let us make some preliminary observations about units in \(k[G]\) when \(\mathrm{char}(k)\neq 2\). As we show in Propositions 5.14 and 5.15, the non-trivial units in \(k[G]\) can be classified into two types. Let us write \(\mathcal{Z}\) for the center of \(\mathcal{R}(\Gamma)\) and let \(U\) be a non-trivial unit in \(k[G]\). Write \(\omega^{U}=a+bs+ct+du\in k[G]\), where \(a,b,c,d\in k[\mathbb{Z}^{3}]\), and \(s,t,u\) are the standard generators of \(G\). Then both of the fields \(\mathcal{Z}(a+bs)\) and \(\mathcal{Z}(ct+du)\) have the same degree over \(\mathcal{Z}\) and this common degree is equal to either 2 or 4. We say that \(U\) is of type 2 in the first case and of type 4 in the second case. A direct check shows that the units found by Murray in [8] when \(\mathrm{char}(k)>2\) are of type 2 (Proposition 5.21), and as such it makes sense to ask about criteria for the existence of units of type 2 when \(\mathrm{char}(k)=0\).
With this in mind, let us state our criterion for the existence of units of type 2 in \(\mathbb{C}[G]\).
**Theorem** (Theorem 5.24).: _Let \(G\) be the Promislow group. Then there exist non-trivial units of type 2 in \(\mathbb{C}[G]\) if and only if the following system of quadratic equation in \(9\) variables with coefficients in \(\mathcal{Z}\) has non-trivial solutions in \(\mathcal{Z}\):_
\[\begin{split}&\diamond\ (A^{2}-1)\omega_{x}^{2}+C_{1}^{2}(\kappa_{x}+ 1)(\kappa_{y}-1)-C_{2}^{2}(\kappa_{x}-1)(\kappa_{y}+1)+C_{3}^{2}(\kappa_{x}+1) (\kappa_{y}+1)-C_{4}^{2}(\kappa_{x}-1)(\kappa_{y}-1)\\ &\qquad\qquad\qquad+2D_{1}^{2}(\kappa_{z}-1)-2D_{2}^{2}\omega_{x }^{2}(\kappa_{z}+1)+2D_{3}^{2}(\kappa_{z}+1)-2D_{4}^{2}\omega_{x}^{2}(\kappa_ {z}-1)=0,\\ &\diamond\ (\kappa_{x}-1)C_{2}C_{4}-(\kappa_{x}+1)C_{1}C_{3}=0,\\ &\diamond\ (\kappa_{x}-1)C_{2}D_{2}+C_{3}D_{3}=0,\\ &\diamond\ (\kappa_{x}+1)C_{1}D_{2}+C_{4}D_{3}=0,\\ &\diamond\ (\kappa_{x}-1)C_{4}D_{4}+C_{1}D_{1}=0,\\ &\diamond\ (\kappa_{x}+1)C_{3}D_{4}+C_{2}D_{1}=0.\end{split}\]
_Here \(\kappa_{g}=\frac{1}{2}(g+g^{-1})\in\mathbb{C}[G]\) and \(\omega_{g}=\frac{1}{2}(g-g^{-1})\in\mathbb{C}[G]\) for \(g\in G\)._
This criterion should be compared with the determinant condition stated in [2], which gives a single polynomial equation of degree 4 with 16 variables. In this sense, our criterion gives a 'simpler' polynomial equation than the determinant condition.
**Remark**.: It is not known if the group ring \(\mathbb{C}[G]\) of the Promislow's group contains any non-trivial unitaries. In Theorem 5.26 we state an if-and-only-if criterion similar to the theorem above for the existence of unitaries.
The paper is organized as follows. In Section 2 we collect the definitions of von Neumann regular and \(*\)-regular rings, together with the definition of the \(*\)-regular closure of a subring in a \(*\)-regular ring. We also recall the construction of a crossed product of a ring with a group, of which the group ring is a particular case. In Section 3 we present and prove the first of our main results, namely the characterization of virtually abelian groups \(\Gamma\) in terms of algebraic properties of the \(*\)-regular closure of the group ring \(\mathbb{C}[\Gamma]\) inside \(\mathcal{U}(\Gamma)\), the algebra of (unbounded) operators affiliated to the group von Neumann algebra \(\mathcal{N}(\Gamma)\). We use part of this characterization, in Section 4, to briefly present a general procedure for finding units in group algebras of virtually abelian groups. We specialize, and further analyze, this procedure in the case where \(\Gamma\) is the Promislow's group \(G\), culminating in the criteria presented in Theorems 5.24 and 5.26, which give necessary and sufficient conditions for finding non-trivial units (resp. unitaries) in the group ring of \(G\). This is done in Section 5.
## 2. Preliminaries
### Central division algebras
Let \(k\) be any field. Throughout this section, and for the rest of the paper, a \(k\)-algebra will be a unital, associative algebra \(A\) over the field \(k\). Its center will be denoted by \(\mathrm{Z}(A)\). It always holds that \(k\subseteq\mathrm{Z}(A)\). A \(k\)-algebra \(A\) is called _central_ if this inclusion is in fact an equality, that is \(k=\mathrm{Z}(A)\).
A \(k\)-algebra \(A\) is a _division algebra_ if every non-zero element \(a\in A\) is two-sided invertible, meaning that there are elements \(b,c\in A\) such that \(ab=1\) and \(ca=1\). In this case necessarily \(b=c\) and this common element is the unique element satisfying the two equations \(ax=1,ya=1\). We will denote it by \(a^{-1}\), called the _inverse_ of \(a\). So a division algebra over \(k\) is a division ring which is also an algebra over \(k\).
In this paper we adopt the following convention.
**Definition 2.1**.: A \(k\)-algebra \(A\) will be called a _central division algebra_ if
1. \(A\) is central;
2. \(A\) is a division algebra;
3. \(A\) is _finite-dimensional_ over \(\mathrm{Z}(A)=k\).
We will denote by \([A:k]\) the dimension of \(A\) over \(k\).
More generally, if \(\mathcal{R}\) denotes any division ring and if \(V\) is a left \(\mathcal{R}\)-module, then \([V:\mathcal{R}]\) will denote the dimension of \(V\) over \(\mathcal{R}\).
### Virtually abelian groups
A group \(\Gamma\) is said to be _virtually abelian_ if there exists an abelian subgroup of finite index. Equivalently, if there exists a normal abelian subgroup of finite index.
In this paper we will work with finitely generated groups only. In this case, a finitely generated virtually abelian group \(\Gamma\) contains a finitely generated normal abelian subgroup \(N\) of finite index. If moreover \(\Gamma\) is assumed to be torsion-free, then necessarily \(N\) is isomorphic to \(\mathbb{Z}^{n}\) for some \(n\geq 1\).
### Von Neumann regular and \(*\)-regular rings
A unital ring \(R\) is _(von Neumann) regular_ if for every element \(x\in R\) there exists \(y\in R\) such that \(xyx=x\). Note that the element \(e=xy\) is an idempotent in \(R\), that is \(e^{2}=e\).
A \(*\)_-regular ring_ is a regular ring \(R\) endowed with a proper involution \(*\colon R\to R\), proper meaning that \(x^{*}x=0\) implies \(x=0\). It is not difficult to show that in a \(*\)-regular ring, the following hold. For every element \(x\in R\) there exist _unique_ projections \(e,f\in R\) (that is elements \(p\in R\) satisfying \(p^{2}=p^{*}=p\)) such that \(xR=eR\) and \(Rx=Rf\), and moreover there exists a unique element \(y\in fRe\), called the _relative inverse_ of \(x\) and denoted by \(\overline{x}\), such that \(xy=e\) and \(yx=f\). We denote by \(\mathrm{LP}(x)\) the projection \(e\), termed the _left projection_ of \(x\), and by \(\mathrm{RP}(x)\) the projection \(f\), termed the _right projection_ of \(x\).
For any subset \(S\) of a \(*\)-regular ring \(R\), there exists a smallest \(*\)-regular subring of \(R\) containing \(S\), denoted by \(\mathcal{R}(S,R)\) and termed the \(*\)-regular closure of \(S\) in \(R\) [1, Proposition 6.2]. It always contains the division closure \(\mathcal{D}(S,R)\), which we recall is defined as the smallest subring of \(R\) containing \(S\) and closed under taking inverses of elements, when these exist in \(R\) (so if \(x\in\mathcal{D}(S,R)\) is invertible in \(R\) with inverse \(x^{-1}\), then \(x^{-1}\in\mathcal{D}(S,R)\)).
### Crossed products and group algebras
Let \(R\) be a unital ring and \(\Gamma\) be any countable group. A _crossed product_, denoted by \(R*\Gamma\), is a unital ring in which every element can be uniquely written as a finite sum
\[\sum_{g\in\Gamma}r_{g}g\]
with \(r_{g}\in R\). The sum is given componentwise, and the product is described by the rule
\[(rg)\cdot(sh)=r\ \sigma_{g}(s)\ \tau(g,h)\ gh\]
for \(r,s\in R\) and \(g,h\in\Gamma\), where \(\tau:\Gamma\times\Gamma\to R^{\times}\) and \(\sigma:\Gamma\to\mathrm{Aut}(R)\) are maps satisfying the following properties:
1. \(\sigma_{1}=\mathrm{id}_{R}\), and \(\tau(1,h)=\tau(g,1)=1\) for all \(g,h\in\Gamma\);
2. \(\tau(g,h)\tau(gh,f)=\sigma_{g}(\tau(h,f))\tau(g,hf)\) for all \(g,h,f\in\Gamma\) (so \(\tau\) is a \(2\)-cocycle for \(\sigma\));
3. \(\tau(g,h)\sigma_{gh}(r)=\sigma_{g}(\sigma_{h}(r))\tau(g,h)\) for all \(g,h\in\Gamma\), \(r\in R\).
Here \(R^{\times}\) is the group of units of \(R\).
In the particular case that \(\tau\) and \(\sigma\) are the trivial maps, we obtain the _group ring_ \(R[\Gamma]\). We will be particularly interested in the case that \(R=k\) is a field. If moreover the field \(k\) is endowed with an involution \(*\), it extends linearly to an involution on \(k[\Gamma]\) by the rule
\[(\lambda g)^{*}=\lambda^{*}g^{-1}\]
for \(\lambda\in k,g\in\Gamma\).
**Definition 2.2**.: An element \(u\in k[\Gamma]\) is called a _unit_ if there exists \(v\in k[\Gamma]\) such that \(1=uv=vu\). It will be called a _trivial_ unit if \(u=\lambda g\) for some \(\lambda\in k\backslash\{0\}\) and \(g\in\Gamma\), otherwise it is called a _non-trivial_ unit.
Take now \(k=\mathbb{C}\) with complex conjugation as involution. Let \(l^{2}(\Gamma)\) denote the Hilbert space with orthonormal basis the elements of \(\Gamma\). Then \(\mathbb{C}[\Gamma]\) acts faithfully on \(l^{2}(\Gamma)\) by left multiplication, and so we may identify \(\mathbb{C}[\Gamma]\) as a subset of \(\mathcal{B}(l^{2}(\Gamma))\), i.e. as bounded operators on \(l^{2}(\Gamma)\). The _group von Neumann algebra of \(\Gamma\)_, denoted by \(\mathcal{N}(\Gamma)\), is defined to be the weak closure of \(\mathbb{C}[\Gamma]\) in \(\mathcal{B}(l^{2}(\Gamma))\). Algebraically, it is characterized as follows: \(\mathcal{N}(\Gamma)\) consists of all bounded operators \(T\in\mathcal{B}(l^{2}(\Gamma))\) which commute with the action of \(\mathbb{C}[\Gamma]\) on \(l^{2}(\Gamma)\) given by _right_ multiplication.
Also, we denote by \(\mathcal{U}(\Gamma)\) the algebra of (possibly unbounded) operators affiliated to the von Neumann algebra \(\mathcal{N}(\Gamma)\). Again, we can give an algebraic characterization of it: \(\mathcal{N}(\Gamma)\) satisfies the Ore condition with respect to its set of non-zero-divisors, and \(\mathcal{U}(\Gamma)\) coincides with the corresponding classical ring of quotients of \(\mathcal{N}(\Gamma)\) [9, Proposition 2.8].
We denote by \(\mathcal{D}(\Gamma):=\mathcal{D}(\mathbb{C}[\Gamma],\mathcal{U}(\Gamma))\) the division closure of \(\mathbb{C}[\Gamma]\) in \(\mathcal{U}(\Gamma)\), and by \(\mathcal{R}(\Gamma):=\mathcal{R}(\mathbb{C}[\Gamma],\mathcal{U}(\Gamma))\) the \(*\)-regular closure of \(\mathbb{C}[\Gamma]\) in \(\mathcal{U}(\Gamma)\). The latter exists since \(\mathcal{U}(\Gamma)\) is \(*\)-regular [9, Note 2.11]. We have the inclusions
\[\mathbb{C}[\Gamma]\subseteq\mathcal{D}(\Gamma)\subseteq\mathcal{R}(\Gamma) \subseteq\mathcal{U}(\Gamma).\]
Note that for \(H\leq\Gamma\) a subgroup of \(\Gamma\), the inclusion \(\mathbb{C}[H]\hookrightarrow\mathbb{C}[\Gamma]\) induces natural embeddings \(\mathcal{N}(H)\hookrightarrow\mathcal{N}(\Gamma)\), \(\mathcal{U}(H)\hookrightarrow\mathcal{U}(\Gamma)\) and \(\mathcal{R}(H)\hookrightarrow\mathcal{R}(\Gamma)\).
## 3. Central division algebras and virtually abelian groups
Let \(\Gamma\) be a finitely generated torsion-free group. In this section we give a necessary and sufficient condition on the \(*\)-regular closure \(\mathcal{R}(\Gamma)\) in order to guarantee that \(\Gamma\) be virtually abelian. The main theorem is the following.
**Theorem 3.1**.: _Let \(\Gamma\) be a finitely generated torsion-free group. The following statements are equivalent._
1. \(\Gamma\) _is virtually abelian._
2. \(\mathcal{R}(\Gamma)\) _is a central division algebra._
The rest of the section is devoted to prove Theorem 3.1 above, although some tangential results will also be obtained.
### From virtually abelian groups to central division algebras
Let us start by taking \(\Gamma\) to be a finitely generated, torsion-free, virtually abelian group. Our convention for conjugation is that \(h^{g}:=ghg^{-1}\) for \(g,h\in\Gamma\).
Let \(N\leq\Gamma\) be a maximal normal abelian subgroup of finite index, so \(N\cong\mathbb{Z}^{n}\) for some \(n\geq 1\). Let \(F:=\Gamma/N\), which is finite. We have a natural action of \(F\) on \(N\) given by conjugation \(n\mapsto n^{f}\), \(f\in F\), which linearly extends to an action of \(F\) on \(k[N]\), also denoted by \(p\mapsto p^{f}\), \(f\in F\). Take \(\tilde{F}\) to be a fixed right transversal for \(N\) in \(\Gamma\) and, for simplicity, we take the representative of \(N\) to be the unit element \(1\in\Gamma\), that is \(\tilde{1}=1\). If \(\pi:\Gamma\to F\) denotes the natural quotient map, the image of an element \(\tilde{f}\in\tilde{F}\) will be simply denoted by \(f\), that is \(f=\pi(\tilde{f})\). Therefore, for \(p\in k[N]\) and \(\tilde{f}\in\tilde{F}\),
\[p^{\tilde{f}}=p^{f}.\]
Fix now any field \(k\) endowed with an involution \(*\). Note that every element of the group algebra \(k[\Gamma]\) can be uniquely written as
\[\sum_{f\in F}p_{\tilde{f}}\tilde{f}\]
for some Laurent polynomials \(p_{\tilde{f}}\in k[N]\cong k[x_{1}^{\pm 1},...,x_{n}^{\pm 1}]\). Therefore, \(k[\Gamma]\) is obtained as a crossed product
\[k[\Gamma]=k[N]*\tilde{F}\]
whose multiplicative structure is given by
\[(p\tilde{f})\cdot(q\tilde{g})=p\ q^{f}\ \tau(f,g)\ \widetilde{fg},\]
where \(\tau:F\times F\to N\) is the \(2\)-cocycle given by \(\tau(f,g)=\tilde{f}\ \tilde{g}\ \widetilde{fg}^{\,-1}\). In particular, we see that the map \(F\rightarrow\operatorname{Aut}(k[N])\), \(f\mapsto(\,\cdot\,)^{f}\), is a group homomorphism.
Since \(\Gamma\) is virtually abelian, the group ring \(k[\Gamma]\) satisfies Kaplansky's zero-divisor conjecture, i.e. it is a domain [6, Theorem 1.4]. In fact it is an _Ore domain_. We denote by \(\mathcal{Q}:=\mathcal{Q}(k[\Gamma])\) its Ore field of fractions, which is a division algebra over \(k\).
The following notation will be used throughout the section.
**Notation 3.2**.:
1. We will denote by \(\mathcal{Z}\) the center of \(\mathcal{Q}\), which is a field.
2. For a subset \(A\subseteq\mathcal{Q}\), we denote by \(\mathcal{Z}(A)\) the minimal division algebra over \(k\) containing \(\mathcal{Z}\) and the elements of \(A\).
3. For any division ring \(\mathcal{R}\) and any \(\alpha,\beta\in\mathcal{R}\) with \(\beta\neq 0\), conjugation of \(\alpha\) by \(\beta\) will be denoted by \(\alpha^{\beta}:=\beta\alpha\beta^{-1}\).
We start by computing, in the case \(k=\mathbb{C}\), the division closure \(\mathcal{D}(\Gamma)\) and the \(*\)-regular closure \(\mathcal{R}(\Gamma)\) in terms of \(\mathcal{Q}\). It turns out that these three rings coincide.
**Proposition 3.3**.: _For \(k=\mathbb{C}\), we have_
\[\mathcal{Q}=\mathcal{D}(\Gamma)=\mathcal{R}(\Gamma).\]
Proof.: The inclusions \(\mathcal{Q}\subseteq\mathcal{D}(\Gamma)\subseteq\mathcal{R}(\Gamma)\) are obvious. Now, \(\mathcal{Q}\) is a division algebra, so it is von Neumann regular. It is also \(*\)-closed and contains \(k[\Gamma]\), so \(\mathcal{R}(\Gamma)\subseteq\mathcal{Q}\), finishing the proof of the proposition.
Let us return to the general setting where \(k\) is an arbitrary field with involution \(*\). We define \(\mathcal{L}\) to be the field of fractions of \(k[N]\cong k[x_{1}^{\pm 1},...,x_{n}^{\pm 1}]\), which can be seen as a subfield of \(\mathcal{Q}\). In fact, it can be identified with \(k(x_{1},...,x_{n})\), the field of rational functions in \(n\) variables \(x_{1},...,x_{n}\), and we will do so throughout the paper. Each element \(f\in F\) acts on \(\mathcal{L}\) by conjugation, so it naturally induces an element of \(\operatorname{Aut}(\mathcal{L}/k)\), which we will denote by \(\alpha_{f}\). It is easy to see, using maximality of the abelian subgroup \(N\), that the map \(\alpha\colon F\to\operatorname{Aut}(\mathcal{L}/k),f\mapsto\alpha_{f}\) is an injective group homomorphism, so we identify \(F\) with the subgroup \(H:=\operatorname{im}(\alpha)\).
**Proposition 3.4**.: _The following statements hold true._
1. _The ring_ \(\mathcal{Q}\) _is Artinian, and moreover_ \[\mathcal{Q}=\mathcal{L}*\tilde{F}\] _for a suitable crossed product. In particular,_ \(\tilde{F}\) _is a basis of_ \(\mathcal{Q}\) _over_ \(\mathcal{L}\)_, where_ \(\mathcal{L}\)_-multiplication is given from the left. Therefore_ \([\mathcal{Q}:\mathcal{L}]=|F|\)_._
2. _If we denote by_ \(\mathcal{L}^{H}\) _the fixed subfield of_ \(\mathcal{L}\) _associated with the subgroup_ \(H\leq\operatorname{Aut}(\mathcal{L}/k)\)_, then_ \[\mathcal{Z}=\mathcal{L}^{H}.\] _In particular,_ \(\mathcal{Q}\) _is finite-dimensional over_ \(\mathcal{Z}\)_, and in fact_ \([\mathcal{Q}:\mathcal{Z}]=|F|^{2}\)_._
Proof.: (i) Denote by \(\mathcal{A}\) the subring of \(\mathcal{Q}\) generated by \(\mathcal{L}\) and \(\Gamma\). Every element of \(\mathcal{A}\) is a finite sum of monomials
\[x_{1}x_{2}\cdots x_{r}\]
where either \(x_{i}\in\mathcal{L}\) or \(x_{i}\in\tilde{F}\). Note that if \(x\in\mathcal{L}\) and \(g=n\tilde{f}\in\Gamma\) for unique elements \(n\in N,\tilde{f}\in\tilde{F}\), then
\[gx=n\tilde{f}x=n\tilde{f}x\tilde{f}^{-1}\tilde{f}=nx^{\tilde{f}}\tilde{f}\in \sum_{\tilde{f}\in\tilde{F}}\mathcal{L}\tilde{f}.\]
With this we deduce that \(\mathcal{A}=\sum_{\tilde{f}\in\tilde{F}}\mathcal{L}\tilde{f}\). To prove that this sum is direct, suppose that we have
\[\sum_{\tilde{f}\in\tilde{F}}x_{\tilde{f}}\tilde{f}=\sum_{\tilde{f}\in\tilde{F}} y_{\tilde{f}}\tilde{f} \tag{3.1}\]
for some \(x_{\tilde{f}},y_{\tilde{f}}\in\mathcal{L}\). Write
\[x_{\tilde{f}}=p_{\tilde{f}}q_{\tilde{f}}^{-1},\quad y_{\tilde{f}}=r_{\tilde{f} }s_{\tilde{f}}^{-1}\]
where each \(p_{\tilde{f}},q_{\tilde{f}},r_{\tilde{f}},s_{\tilde{f}}\in k[N]\), \(q_{\tilde{f}},s_{\tilde{f}}\neq 0\). By multiplying (3.1) on the left with the element \(h=\prod_{\tilde{f}\in\tilde{F}}q_{\tilde{f}}s_{\tilde{f}}\neq 0\) we get
\[\sum_{\tilde{f}\in\tilde{F}}x_{\tilde{f}}h\tilde{f}=\sum_{\tilde{f}\in\tilde{F} }y_{\tilde{f}}h\tilde{f},\]
which is an equality in \(k[\Gamma]\). But \(k[\Gamma]=k[N]*\tilde{F}\), so necessarily \(x_{\tilde{f}}h=y_{\tilde{f}}h\) for every \(\tilde{f}\in\tilde{F}\), which implies \(x_{\tilde{f}}=y_{\tilde{f}}\) as \(h\neq 0\). This proves that the sum \(\sum_{\tilde{f}\in\tilde{F}}\mathcal{L}\tilde{f}\) is direct, and so \(\mathcal{A}=\mathcal{L}*\tilde{F}\) for a suitable crossed product. In particular, \(\mathcal{A}\) is Artinian, and we deduce that \(\mathcal{A}=\mathcal{Q}\). Indeed, given \(ab^{-1}\in\mathcal{Q}\) with \(a,b\in k[\Gamma]\), \(b\neq 0\), then the chain of ideals
\[b\mathcal{A}\supseteq b^{2}\mathcal{A}\supseteq\cdots\supseteq b^{n}\mathcal{A}\supseteq\cdots\]
stabilizes, so we can find a positive integer \(m\geq 1\) and \(c\in\mathcal{A}\) such that \(b^{m}=b^{m+1}c\). This implies that \(b^{-1}=c\in\mathcal{A}\), and so \(ab^{-1}\in\mathcal{A}\), proving the equality \(\mathcal{Q}=\mathcal{A}=\mathcal{L}*\tilde{F}\).
The second part now easily follows.
(ii) Let us first show the inclusion \(\mathcal{L}^{H}\subseteq\mathcal{Z}\). Given \(x\in\mathcal{L}^{H}\), it is clear that \(x\) commutes with any \(n\in N\). Take now \(\tilde{f}\in\tilde{F}\). We compute \[\tilde{f}x\tilde{f}^{-1}=x^{\tilde{f}}=x^{f}=\alpha_{f}(x)=x,\] so \(x\) also commutes with \(\tilde{F}\). Therefore \(x\) is central in \(k[\Gamma]\), and so also central in \(\mathcal{Q}\). For the other inclusion \(\mathcal{Z}\subseteq\mathcal{L}^{H}\), we first note the following. Due to maximality of \(N\trianglelefteq\Gamma\), the conjugation map \(N\to N\), \(n\mapsto n^{f}\), is not the identity for any \(f\in F\backslash\{1\}\). This says that, for any \(\tilde{f}\in\tilde{F}\backslash\{1\}\), there exists \(n\in N\) such that \(n^{\tilde{f}}\neq n\). Take \(x\in\mathcal{Z}\). Using part (i) we can uniquely write it as \[x=\sum_{\tilde{f}\in\tilde{F}}l_{\tilde{f}}\tilde{f}\] for some \(l_{\tilde{f}}\in\mathcal{L}\). Fix \(\tilde{g}\in\tilde{F}\backslash\{1\}\), and consider \(n\in N\) such that \(n^{g}\neq n\). Then \(x\) commutes with \(n\), so we have the equality \[\sum_{\tilde{f}\in\tilde{F}}l_{\tilde{f}}n^{\tilde{f}}\tilde{f}=\sum_{\tilde{f}\in\tilde{F}}l_{\tilde{f}}n\tilde{f}.\] By uniqueness, necessarily \[l_{\tilde{f}}(n^{f}-n)=0\] for all \(\tilde{f}\in\tilde{F}\). In particular, \(l_{\tilde{g}}=0\). We thus conclude that \(x=l_{\tilde{1}}\tilde{1}=l_{1}\in\mathcal{L}\). Now, given any \(f\in F\), we compute \[\alpha_{f}(x)=\tilde{f}x\tilde{f}^{-1}=x\] as \(x\in\mathcal{Z}\). This says that \(x\in\mathcal{L}^{H}\), and hence establishes the other inclusion. The last statement now follows from part (i) and the fact that \(\mathcal{L}/\mathcal{L}^{H}\) is a finite Galois extension with Galois group \(H\).
As a consequence of part (ii) of the above proposition, we see that \(\mathcal{Q}\) is finite-dimensional over its center \(\mathcal{Z}\). This turns \(\mathcal{Q}\) into a central division algebra when considered as an algebra over \(\mathcal{Z}\). Now one of the implications of Theorem 3.1 is immediate.
Proof of \((1)\implies(2)\) of Theorem 3.1.: Just use Proposition 3.3 and the conclusions above.
### From central division algebras to virtually abelian groups
We start the section by stating the following theorem.
**Theorem 3.5**.: _Let \(\Gamma\) be a finitely generated torsion-free group with \(\mathcal{R}(\Gamma)\) being a semisimple ring. Suppose that \(\mathcal{R}(\Gamma)\) is finitely generated as a \(\mathrm{Z}(\mathcal{R}(\Gamma))\)-module. Then \(\Gamma\) is virtually abelian._
Before proving it, we need a couple of technical lemmas.
**Lemma 3.6**.: _Let \(G\) be a group with \(\mathcal{R}(G)\) being a semisimple ring. Let \(H\leq G\) be a subgroup, and define \(\mathcal{K}:=\mathcal{R}(G)\cap\mathcal{U}(H)\). Then \(\mathcal{K}\) is a regular ring._
_Moreover, \(\mathcal{R}(G)\) becomes a \(\mathcal{K}\)-module, and if \(\mathcal{R}(G)\) is finitely generated as a \(\mathcal{K}\)-module, then \([G:H]<+\infty\)._
Proof.: First, note that \(\mathcal{K}\) can be considered as a subring of \(\mathcal{U}(G)\) because of the embedding \(\mathcal{U}(H)\hookrightarrow\mathcal{U}(G)\). Also, note that all three rings \(\mathcal{R}(G),\mathcal{U}(H)\) and \(\mathcal{U}(G)\) are \(*\)-regular.
Let us first prove that \(\mathcal{K}\) is regular. Let \(x\in\mathcal{K}\) be given. Since \(x\in\mathcal{R}(G)\subseteq\mathcal{U}(G)\) and \(x\in\mathcal{U}(H)\), there exist unique projections \(e_{1},f_{1}\in\mathcal{R}(G)\), \(e_{2},f_{2}\in\mathcal{U}(G)\) and \(e_{3},f_{3}\in\mathcal{U}(H)\) such that
\[x\mathcal{R}(G)=e_{1}\mathcal{R}(G),\quad x\mathcal{U}(G)=e_{2}\mathcal{U}(G), \quad x\mathcal{U}(H)=e_{3}\mathcal{U}(H)\]
and
\[\mathcal{R}(G)x=\mathcal{R}(G)f_{1},\quad\mathcal{U}(G)x=\mathcal{U}(G)f_{2}, \quad\mathcal{U}(H)x=\mathcal{U}(H)f_{3},\]
and moreover there exist unique elements \(\overline{x}_{1}\in\mathcal{R}(G),\overline{x}_{2}\in\mathcal{U}(G)\) and \(\overline{x}_{3}\in\mathcal{U}(H)\) such that \(x\overline{x}_{i}=e_{i}\) and \(\overline{x}_{i}x=f_{i}\) for all \(i=1,2,3\). In particular, note that \(e_{i}x=x=xf_{i}\) for all \(i=1,2,3\). But now we have
\[e_{1}\mathcal{U}(G)=x\overline{x}_{1}\mathcal{U}(G)\subseteq x\mathcal{U}(G)=e _{1}x\mathcal{U}(G)\subseteq e_{1}\mathcal{U}(G),\]
\[e_{3}\mathcal{U}(G)=x\overline{x}_{3}\mathcal{U}(G)\subseteq x\mathcal{U}(G)=e _{3}x\mathcal{U}(G)\subseteq e_{3}\mathcal{U}(G),\]
and so \(e_{1}\mathcal{U}(G)=e_{3}\mathcal{U}(G)=x\mathcal{U}(G)=e_{2}\mathcal{U}(G)\). By uniqueness, \(e:=e_{1}=e_{2}=e_{3}\). Similarly, \(f:=f_{1}=f_{2}=f_{3}\). But now by uniqueness of the elements \(\overline{x}_{i}\) we also conclude that \(\overline{x}:=\overline{x}_{1}=\overline{x}_{2}=\overline{x}_{3}\in\mathcal{R}(G)\cap\mathcal{U}(H)=\mathcal{K}\). This proves that \(\mathcal{K}\) is a regular ring, since
\[x\overline{x}x=ex=x.\]
It is clear that \(\mathcal{R}(G)\) is a (right) \(\mathcal{K}\)-module. Suppose now that \(\mathcal{R}(G)\) is generated by \(n\) elements as a \(\mathcal{K}\)-module, say by \(\{r_{1},...,r_{n}\}\subseteq\mathcal{R}(G)\). This gives us a surjective \(\mathcal{K}\)-module map
\[\alpha\colon\mathcal{K}^{n}\twoheadrightarrow\mathcal{R}(G),\quad(u_{1},...,u_{n})\mapsto\sum_{i=1}^{n}r_{i}u_{i}.\]
Suppose, by way of contradiction, that \([G:H]=+\infty\), and take \(S=\{s_{1},...,s_{n+1}\}\) a set of transversals of \(H\) consisting of \(n+1\) elements. This set gives us a \(\mathcal{K}\)-module map
\[\beta\colon\mathcal{K}^{n+1}\to\mathcal{R}(G),\quad(u_{1},...,u_{n+1}) \mapsto\sum_{i=1}^{n+1}s_{i}u_{i}.\]
But since free modules are projective, we can lift \(\beta\) along \(\alpha\) to a \(\mathcal{K}\)-module map \(\gamma\colon\mathcal{K}^{n+1}\to\mathcal{K}^{n}\) such that \(\alpha\circ\gamma=\beta\).
If it were the case that \(\beta\) is injective, then so would be \(\gamma\). But regular rings are absolutely flat, so tensoring \(\gamma\) with \(\operatorname{id}_{\mathcal{R}(G)}\) would yield an injective \(\mathcal{R}(G)\)-module map
\[\gamma\otimes\operatorname{id}_{\mathcal{R}(G)}\colon\mathcal{R}(G)^{n+1} \to\mathcal{R}(G)^{n}.\]
This is impossible as \(\mathcal{R}(G)\) is a semisimple ring, hence Artinian.
We conclude that \(\beta\) is not injective, which means that there exist elements \(u_{1},...,u_{n+1}\in\mathcal{K}\) not all zero such that
\[u:=\sum_{i=1}^{n+1}s_{i}u_{i}=0.\]
We now think of each \(u_{i}\in\mathcal{K}\subseteq\mathcal{U}(H)\) as a densely defined closed operator \(u_{i}\colon D_{i}\to l^{2}(H)\) commuting with the right action of \(H\), with \(D_{i}\subseteq l^{2}(H)\) dense in \(l^{2}(H)\) and invariant under the right action of \(H\). The operator \(u\colon D\to l^{2}(H)\) is defined on \(D:=\bigcap_{i=1}^{n+1}D_{i}\), which is dense in \(l^{2}(H)\) because each \(D_{i}\) is essentially dense (see [7, Lemma 8.3]). Now for each \(l\in D\), we compute
\[0=u(l)=\sum_{i=1}^{n+1}s_{i}u_{i}(l),\]
with each \(s_{i}u_{i}(l)\in s_{i}l^{2}(H)\subseteq l^{2}(G)\). But since \(S=\{s_{1},...,s_{n+1}\}\) is a set of transversals of \(H\) in \(G\), the Hilbert subspaces \(s_{1}l^{2}(H),...,s_{n+1}l^{2}(H)\) are pairwise orthogonal. We thus deduce that each \(s_{i}u_{i}(l)=0\), that is \(u_{i}(l)=0\). We conclude that each operator \(u_{i}\) is zero on the dense subspace \(D\). Since each \(u_{i}\) is closed, this implies that \(u_{i}\equiv 0\). This contradicts the fact that at least one of the elements \(u_{1},...,u_{n+1}\) is non-zero. Therefore necessarily \([G:H]<+\infty\), and in fact we have proved the stronger result \([G:H]\leq n\).
**Lemma 3.7**.: _Let \(G\) be a group and consider \(\Delta(G)\) to be the subgroup of \(G\) consisting of all elements whose conjugacy class in \(G\) is finite. Then_
\[\operatorname{Z}(\mathcal{U}(G))\subseteq\mathcal{U}(\Delta(G)).\]
Proof.: Take \(x\in\operatorname{Z}(\mathcal{U}(G))\), so we can write it as \(x=ab^{-1}\) for some \(a,b\in\mathcal{N}(G)\). In fact, by using the polar decomposition and the spectral theorem for elements of \(\mathcal{U}(G)\), we can even take \(a,b\) to be in the center of \(\mathcal{N}(G)\). Thus in order to prove the lemma it is enough to show the inclusion \(\operatorname{Z}(\mathcal{N}(G))\subseteq\mathcal{N}(\Delta(G))\). For this, we will use the identification of \(\mathcal{N}(G)\) as a subspace of \(l^{2}(G)\) given by
\[\mathcal{N}(G)\hookrightarrow l^{2}(G),\quad T\mapsto T(1).\]
So take \(T\in\operatorname{Z}(\mathcal{N}(G))\) and write it as \(T=\sum_{g\in G}\lambda_{g}g\in l^{2}(G)\). In particular \(T\) commutes with every \(h\in G\), and so we deduce that \(\lambda_{hgh^{-1}}=\lambda_{g}\) for all \(g,h\in G\). So if \(g\) has infinite conjugacy class in \(G\), the \(l^{2}\)-summability of \(T\) implies that necessarily \(\lambda_{g}=0\). This shows that \(T\) is supported on elements in \(\Delta(G)\), which is what we wanted to prove.
We are now ready to prove Theorem 3.5.
Proof of Theorem 3.5.: Write \(\mathcal{Z}:=\mathrm{Z}(\mathcal{R}(\Gamma))\) for the center of \(\mathcal{R}(\Gamma)\). Let \(\Delta(\Gamma)\) be the subgroup of \(\Gamma\) consisting of all elements \(g\in G\) whose conjugacy class in \(\Gamma\) is finite. Consider
\[\mathcal{K}:=\mathcal{R}(\Gamma)\cap\mathcal{U}(\Delta(\Gamma)).\]
We claim that \(\mathcal{Z}\subseteq\mathcal{K}\). For this, let \(z\in\mathcal{Z}=\mathrm{Z}(\mathcal{R}(\Gamma))\). In particular, \(z\) commutes with every \(g\in\Gamma\), and hence it commutes with every element of \(\mathcal{N}(\Gamma)\). Since \(\mathcal{U}(\Gamma)\) is the classical ring of quotients of \(\mathcal{N}(\Gamma)\), we conclude that \(z\) also commutes with every element of \(\mathcal{U}(\Gamma)\). This shows that
\[\mathcal{Z}\subseteq\mathcal{R}(\Gamma)\cap\mathrm{Z}(\mathcal{U}(\Gamma)).\]
The claim follows due to Lemma 3.7.
Now since \(\mathcal{R}(\Gamma)\) is finitely generated as a \(\mathcal{Z}\)-module, it is also finitely generated as a \(\mathcal{K}\)-module since \(\mathcal{Z}\subseteq\mathcal{K}\). Thus Lemma 3.6 tells us that \([\Gamma:\Delta(\Gamma)]<+\infty\). In particular, since \(\Gamma\) is finitely generated and \(\Delta(\Gamma)\) is a subgroup of finite index, \(\Delta(\Gamma)\) is finitely generated as well. Pick \(S=\{g_{1},...,g_{r}\}\) to be a finite generating subset of \(\Delta(\Gamma)\), and consider the centralizer of \(S\) in \(\Delta(\Gamma)\), that is
\[C_{\Delta(\Gamma)}(S)=\{h\in\Delta(\Gamma)\mid hgh^{-1}=g\text{ for all }g\in S\}.\]
Note that
\[C_{\Delta(\Gamma)}(S)=\bigcap_{i=1}^{r}C_{\Delta(\Gamma)}(\{g_{i}\}).\]
For each \(g_{i}\in S\), since its conjugacy class is finite, say with \(n_{i}\in\mathbb{N}\) elements, we can pick \(h_{i1},...,h_{in_{i}}\in\Gamma\) such that the conjugacy class of \(g_{i}\) consists of the elements
\[\{h_{i1}g_{i}h_{i1}^{-1},...,h_{in_{i}}g_{i}h_{in_{i}}^{-1}\}.\]
In this case, the set \(J_{i}:=\{h_{i1},...,h_{in_{i}}\}\) contains a transversal of \(C_{\Delta(\Gamma)}(\{g_{i}\})\) in \(\Delta(\Gamma)\). In particular, \([\Delta(\Gamma):C_{\Delta(\Gamma)}(\{g_{i}\})]<+\infty\). Since the intersection of finite index subgroups is a finite index subgroup, we conclude that
\[[\Delta(\Gamma):C_{\Delta(\Gamma)}(S)]<+\infty.\]
But since \(S\) is a generating set for \(\Delta(\Gamma)\), the subgroup \(C_{\Delta(\Gamma)}(S)\) is central in \(\Delta(\Gamma)\), and therefore abelian. Finally,
\[[\Gamma:C_{\Delta(\Gamma)}(S)]=[\Gamma:\Delta(\Gamma)]\cdot[\Delta(\Gamma):C_{ \Delta(\Gamma)}(S)]<+\infty,\]
and so \(C_{\Delta(\Gamma)}(S)\) is an abelian subgroup of \(\Gamma\) with finite index. This proves precisely that \(\Gamma\) is virtually abelian, as required.
We can finally prove the other implication of Theorem 3.1.
Proof of \((2)\implies(1)\) of Theorem 3.1.: This is a special case of Theorem 3.5.
## 4. On the existence of units in group rings of virtually abelian groups
In this section we present a general procedure which can help in the understanding of units in group rings of virtually abelian groups. This procedure will be applied to the case of the Promislow's group in Section 5. We will follow the same notation as in Section 3.1. Since we have already showed that \(\mathcal{Q}\) is a central division algebra, the following result from the theory of central division algebras will prove useful for us in the sequel.
**Theorem 4.1**.: _Let \(\mathcal{R}\) be a division algebra, finite-dimensional over its center \(\mathrm{Z}(\mathcal{R})\). The following statements hold._
1. \([\mathcal{R}:\mathrm{Z}(\mathcal{R})]=n^{2}\) _for some_ \(n\in\mathbb{N}\)_. Furthermore, if_ \(\mathcal{K}\subseteq\mathcal{R}\) _is a maximal subfield, then_ \([\mathcal{K}:\mathrm{Z}(\mathcal{R})]=n\)_._
2. _(Skolem-Noether) Let_ \(\mathcal{A}_{1},\mathcal{A}_{2}\subseteq\mathcal{R}\) _be two isomorphic_ \(\mathrm{Z}(\mathcal{R})\)_-subalgebras. Then for any_ \(\mathrm{Z}(\mathcal{R})\)_-algebra homomorphism_ \(\phi\colon\mathcal{A}_{1}\to\mathcal{A}_{2}\) _there is an element_ \(b\in\mathcal{R}\) _such that_ \(\phi\) _is given by conjugation by_ \(b\)_. In symbols,_ \[\phi(a)=a^{b}\] _for all_ \(a\in\mathcal{A}_{1}\)_._
Proof.: Part (1) follows from [4, Theorem 3.10 and Corollary 3.17]. Part (2) is a special case of [4, Theorem 3.14].
An immediate observation, due to Theorem 4.1 (1) and Proposition 3.4 (ii), is that \(\mathcal{L}\subseteq\mathcal{Q}\) is a maximal subfield of \(\mathcal{Q}\) which contains its center \(\mathcal{Z}=\mathcal{L}^{H}\).
We now expose the key idea on constructing units in \(k[\Gamma]\). Let \(p(x)\in\mathds{Z}(k[\Gamma])[x]\) be a polynomial with coefficients in the ring \(\mathds{Z}(k[\Gamma])\subseteq\mathcal{Z}\), and assume that \(1-p(x)\) is _not_ irreducible over \(\mathds{Z}(k[\Gamma])\), so we can write
\[1-p(x)=q_{1}(x)q_{2}(x)\]
for some non-trivial polynomials \(q_{1}(x),q_{2}(x)\in\mathds{Z}(k[\Gamma])[x]\). We then observe that, if \(T\in k[\Gamma]\) is a root of the polynomial \(p(x)\), then
\[1=q_{1}(T)q_{2}(T),\]
so \(q_{1}(T)\) is a unit in \(k[\Gamma]\), which can be either trivial or non-trivial.
We will concentrate in polynomials of the form
\[P_{\kappa,d}(x)=x^{d}-\kappa^{d}+1\]
for some \(\kappa\in\mathds{Z}(k[\Gamma])\) and \(d\in\mathbb{N}\), for which in this case
\[1-P_{\kappa,d}(x)=\kappa^{d}-x^{d}=(\kappa-x)(x^{d-1}+\kappa x^{d-2}+\cdots+ \kappa^{d-2}x+\kappa^{d-1}).\]
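As a simple illustration of how this factorization produces units, consider the lowest-degree case \(d=2\):
\[1-P_{\kappa,2}(x)=\kappa^{2}-x^{2}=(\kappa-x)(\kappa+x),\]
so any element \(T\) satisfying \(T^{2}=\kappa^{2}-1\) yields \((\kappa-T)(\kappa+T)=1\), and \(\kappa-T\) is a unit with inverse \(\kappa+T\); this is the special case \(d=2\) of Proposition 4.3 below.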
**Definition 4.2**.: Let \(\omega\in k[\Gamma]\) be a root of \(P_{\kappa,d}(x)\). An element \(T\in\mathcal{Q}\) is said to be an _\(\omega\)-conjugate of degree \(d\)_ if \(T^{d}=\omega^{d}\). We call such a \(T\)_trivial_ if it is of the form \(\xi\omega\) where \(\xi\in k\) is a \(d^{\text{th}}\) root of unity. If \(T\) is not trivial, we say that it is _non-trivial_.
For \(\omega\neq 0\), Theorem 4.1 (2) implies that every \(\omega\)-conjugate is conjugate, in \(\mathcal{Q}\), to \(\omega\). Hence the above definition is consistent.
The following proposition summarizes the previous idea on finding units in \(k[\Gamma]\).
**Proposition 4.3**.: _Let \(\omega\in k[\Gamma]\) be a non-zero root of \(P_{\kappa,d}(x)\), and let \(T\in\mathcal{Q}\) be an \(\omega\)-conjugate of degree \(d\)._
1. _The element_ \(U_{T}:=\kappa-T\) _is a unit in_ \(\mathcal{Q}\)_, with inverse_ \(U_{T}^{-1}=T^{d-1}+\kappa T^{d-2}+\cdots+\kappa^{d-2}T+\kappa^{d-1}\)_._
2. \(T\in k[\Gamma]\) _if and only if_ \(U_{T}\in k[\Gamma]\)_, in which case_ \(U_{T}^{-1}\) _also belongs in_ \(k[\Gamma]\)_._
_Conversely, let \(U\in\mathcal{Q}\backslash\{0\}\), and define \(T_{U}:=\omega^{U}=U\omega U^{-1}\)._
1. _The element_ \(T_{U}\) _is an_ \(\omega\)_-conjugate of degree_ \(d\)_._
2. _If moreover_ \(U,U^{-1}\in k[\Gamma]\)_, then_ \(T_{U}\in k[\Gamma]\)_._
Proof.: The first part is a simple computation:
\[U_{T}U_{T}^{-1}=(\kappa-T)(T^{d-1}+\kappa T^{d-2}+\cdots+\kappa^{d-2}T+\kappa ^{d-1})=\kappa^{d}-T^{d}=\kappa^{d}-\omega^{d}=1.\]
Similarly \(U_{T}^{-1}U_{T}=1\). Moreover, from the definition \(U_{T}=\kappa-T\) it is clear that \(T\in k[\Gamma]\) if and only if \(U_{T}\in k[\Gamma]\), in which case
\[U_{T}^{-1}=T^{d-1}+\kappa T^{d-2}+\cdots+\kappa^{d-2}T+\kappa^{d-1}\in k[\Gamma]\]
as well.
Conversely, let \(U\in\mathcal{Q}\backslash\{0\}\) and define \(T_{U}:=\omega^{U}\). We compute
\[T_{U}^{d}=U\omega^{d}U^{-1}=\omega^{d}\]
as \(\omega^{d}=\kappa^{d}-1\in\mathcal{Z}\). Hence \(T_{U}\) is an \(\omega\)-conjugate of degree \(d\). Since \(\omega\in k[\Gamma]\), it is clear that if \(U,U^{-1}\in k[\Gamma]\) then \(T_{U}\in k[\Gamma]\) as well.
Proposition 4.3 does not give conditions about the non-triviality of the units \(U_{T},U_{T}^{-1}\) in case they belong to \(k[\Gamma]\). Nevertheless, we will show in the next section that, in case \(\Gamma\) is the Promislow's group, the non-triviality of \(U_{T},U_{T}^{-1}\in k[\Gamma]\) is equivalent to the non-triviality of the \(\omega\)-conjugate \(T\), and analogously for the non-triviality of \(T_{U}\) in terms of the non-triviality of \(U,U^{-1}\in k[\Gamma]\).
## 5. The case of the Promislow's group
For a group \(\Gamma\), it will be convenient for us to denote by \(\overline{g}\) the inverse of \(g\in\Gamma\).
In this section we will apply the general procedure displayed in Section 4 to the case of the Promislow's group. This group, which will be denoted by \(G\), is given by the presentation
\[G=\langle s,t\ |\ ts^{2}\overline{t}=\overline{s}^{2},st^{2}\overline{s}= \overline{t}^{2}\rangle.\]
We define \(u:=st\) together with \(x:=s^{2},y:=t^{2}\) and \(z:=u^{2}\). We can then write the above presentation of the group as
\[G=\langle s,t,x,y\ |\ x^{t}=\overline{x},y^{s}=\overline{y},x=s^{2},y=t^{2}\rangle.\]
Using this, it is clear that \(G\) is given by the amalgam \(G_{1}*_{H}G_{2}\), with \(G_{1}=\langle t,x\mid x^{t}=\overline{x}\rangle\), \(G_{2}=\langle s,y\mid y^{s}=\overline{y}\rangle\) and \(H=\langle x,t^{2}\rangle=\langle s^{2},y\rangle\). We note that \(H\cong\mathbb{Z}^{2}\) and \(G_{1}\cong G_{2}\cong\mathbb{Z}\rtimes\mathbb{Z}\), where the automorphism implementing the crossed product is the inversion automorphism. This shows that \(G\) is a torsion-free group.
Another important property of \(G\) is that it fits in a non-split short exact sequence
\[1\to\mathbb{Z}^{3}\to G\to\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z} \to 1,\]
where \(\mathbb{Z}^{3}\) is identified with the maximal abelian normal subgroup \(N=\langle x,y,z\rangle\), and \(F=G/N=\{N,Ns,Nt,Nu\}\cong\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\). In particular, \(G\) is a virtually abelian group. The natural action of \(F\) on \(N\) by conjugation is explicitly described in Table 1 below.
We will fix the right transversal of \(N\) in \(G\) to be \(\tilde{F}=\{1,s,t,u\}\). The associated \(2\)-cocycle is explicitly written down in Table 2.
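For instance, reading off Table 2 together with the crossed product multiplication rule of Section 3.1, we get
\[\tilde{s}\cdot\tilde{s}=\tau(s,s)\,\tilde{1}=x,\qquad\tilde{t}\cdot\tilde{s}=\tau(t,s)\,\widetilde{ts}=\bar{x}y\bar{z}\,u,\]
the first identity recovering the relation \(s^{2}=x\).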
For the following discussion, we take \(k\) to be any field with involution \(*\) with characteristic different from \(2\). For \(g\in\{x,y,z\}\), we let
\[\kappa_{g}:=\frac{1}{2}(g+\overline{g}),\ \omega_{g}:=\frac{1}{2}(g-\overline{g}) \in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}].\]
We note that
\[\kappa_{g}^{2}-\omega_{g}^{2}=1. \tag{5.1}\]
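Spelling this out (a short expansion that is not part of the original argument, but convenient to keep in mind since (5.1) is used repeatedly below):
\[\kappa_{g}^{2}-\omega_{g}^{2}=\tfrac{1}{4}\big((g+\overline{g})^{2}-(g-\overline{g})^{2}\big)=g\overline{g}=1.\]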
Following Sections 3.1 and 4, we denote by \(\mathcal{Q}\) the Ore field of fractions of \(k[G]\), by \(\mathcal{L}\) the Ore field of fractions of \(k[N]\), namely \(\mathcal{L}=k(x,y,z)\), and the group \(F\) is identified, as a subgroup of \(\operatorname{Aut}(\mathcal{L}/k)\), with \(H=\{\operatorname{id},\alpha_{s},\alpha_{t},\alpha_{u}\}\). Apart from the automorphisms in \(H\), we have three natural automorphisms of \(\mathcal{L}\), namely
\[\iota_{x}:\mathcal{L}\to\mathcal{L},\quad\iota_{x}(x)=\overline{x},\iota_{x}(y)=y,\iota_{x}(z)=z;\] \[\iota_{y}:\mathcal{L}\to\mathcal{L},\quad\iota_{y}(x)=x,\iota_{y} (y)=\overline{y},\iota_{y}(z)=z;\] \[\iota_{z}:\mathcal{L}\to\mathcal{L},\quad\iota_{z}(x)=x,\iota_{z} (y)=y,\iota_{z}(z)=\overline{z}.\]
That is, for \(g\in\{x,y,z\}\), \(\iota_{g}\) is the automorphism induced by the inversion map with respect to \(g\). The next proposition is an extension of Proposition 3.4 in the particular case of Promislow's group.
**Proposition 5.1**.: _The following statements hold true._
1. _The set_ \(\{1,s,t,u\}\) _is a basis of_ \(\mathcal{Q}\) _over_ \(\mathcal{L}\)_, where_ \(\mathcal{L}\)_-multiplication is given from the left. Therefore_ \([\mathcal{Q}:\mathcal{L}]=4\)_._
2. _When writing an element_ \(\alpha\in\mathcal{Q}\) _as_ \(\alpha=a+bs+ct+du\) _with_ \(a,b,c,d\in\mathcal{L}\)_, we get the formulas_ \[a=\frac{1}{4}(\alpha+\alpha^{\omega_{x}}+\alpha^{\omega_{y}}+ \alpha^{\omega_{z}}),\qquad bs=\frac{1}{4}(\alpha+\alpha^{\omega_{x}}-\alpha^{ \omega_{y}}-\alpha^{\omega_{z}}),\] \[ct=\frac{1}{4}(\alpha-\alpha^{\omega_{x}}+\alpha^{\omega_{y}}- \alpha^{\omega_{z}}),\qquad du=\frac{1}{4}(\alpha-\alpha^{\omega_{x}}-\alpha^{ \omega_{y}}+\alpha^{\omega_{z}}).\]
3. _The fixed field of the subgroup_ \(H\) _is exactly_ \(k(\kappa_{x},\kappa_{y},\kappa_{z},\xi)\)_, where_ \(\xi:=\omega_{x}\omega_{y}\omega_{z}\)_, which is an extension of_ \(\mathcal{K}:=k(\kappa_{x},\kappa_{y},\kappa_{z})\) _of order_ \(2\)_. In particular,_ \(\mathcal{Z}=\mathcal{K}(\xi)\)_._
4. _We have_ \([\mathcal{Q}:\mathcal{Z}]=16\)_, and_ \(\mathcal{L}\) _is a maximal subfield of_ \(\mathcal{Q}\) _which contains_ \(\mathcal{Z}\)_._
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline conj & \(x\) & \(y\) & \(z\) \\ \hline \hline \(N\) & \(x\) & \(y\) & \(z\) \\ \hline \(Ns\) & \(x\) & \(\overline{y}\) & \(\overline{z}\) \\ \hline \(Nt\) & \(\overline{x}\) & \(y\) & \(\overline{z}\) \\ \hline \(Nu\) & \(\overline{x}\) & \(\overline{y}\) & \(z\) \\ \hline \end{tabular}
\end{table}
Table 1. The first column elements act on the first row elements by conjugation.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline \(\tau(a,b)\) & \(1\) & \(s\) & \(t\) & \(u\) \\ \hline \hline \(1\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \hline \(s\) & \(1\) & \(x\) & \(1\) & \(x\) \\ \hline \(t\) & \(1\) & \(\bar{x}y\bar{z}\) & \(y\) & \(\bar{x}\bar{z}\) \\ \hline \(u\) & \(1\) & \(\bar{y}z\) & \(\bar{y}\) & \(z\) \\ \hline \end{tabular}
\end{table}
Table 2. The first column corresponds to the different values of \(a\), and the first row corresponds to the different values of \(b\).
Proof.:
1. This is part (i) of Proposition 3.4.
2. We conjugate \(\alpha=a+bs+ct+du\) with the elements \(\omega_{x},\omega_{y},\omega_{z}\) to get \[\alpha^{\omega_{x}} =a+bs-ct-du,\] \[\alpha^{\omega_{y}} =a-bs+ct-du,\] \[\alpha^{\omega_{z}} =a-bs-ct+du.\] From here we obtain the displayed formulas.
3. Clearly the elements \(\kappa_{x},\kappa_{y},\kappa_{z}\) and \(\xi=\omega_{x}\omega_{y}\omega_{z}\) are fixed by \(H\), so \(\mathcal{K}(\xi)=k(\kappa_{x},\kappa_{y},\kappa_{z},\xi)\subseteq\mathcal{L} ^{H}\). Consider now the field extensions \[\mathcal{K}\subseteq\mathcal{K}(\omega_{x})\subseteq\mathcal{K}(\omega_{x}, \omega_{y})\subseteq\mathcal{K}(\omega_{x},\omega_{y},\omega_{z})=\mathcal{L}.\] Since \(\omega_{g}^{2}=\kappa_{g}^{2}-1\in\mathcal{K}\) for any \(g\in\{x,y,z\}\), all three inclusions are of degree at most \(2\). We prove that the first one is exactly \(2\) by showing that \(\omega_{x}\notin\mathcal{K}\). But this is immediate, as \(\iota_{x}(\omega_{x})=-\omega_{x}\), and \(\iota_{x}\) restricts to the identity over \(\mathcal{K}\). The other inclusions are analogously proven to be strict by using the other automorphisms \(\{\iota_{y},\iota_{z}\}\). Thus \([\mathcal{L}:\mathcal{K}]=8\). On the other hand, we observe that \(\mathcal{K}(\xi)\) is an extension of \(\mathcal{K}\) of order \(2\), since \(\xi^{2}=(\kappa_{x}^{2}-1)(\kappa_{y}^{2}-1)(\kappa_{z}^{2}-1)\in\mathcal{K}\) but, for example, \(\iota_{x}(\xi)=-\xi\), so \(\xi\notin\mathcal{K}\). We now deduce that \([\mathcal{L}:\mathcal{K}(\xi)]=4\). But \([\mathcal{L}:\mathcal{L}^{H}]=|H|=4\), hence \(\mathcal{L}^{H}=\mathcal{K}(\xi)\). The statement now follows from part (ii) of Proposition 3.4, which tells us that \(\mathcal{Z}=\mathcal{L}^{H}\).
4. This is part (ii) of Proposition 3.4, together with Theorem 4.1 (1).
For \(g,h,j\in\{x,y,z\}\) we will denote by \(\iota_{gh}\) the composition \(\iota_{g}\iota_{h}\), and by \(\iota_{ghj}\) the composition \(\iota_{g}\iota_{h}\iota_{j}\). We note that conjugation by \(s\) is equal to \(\iota_{yz}\) on \(\mathcal{L}\), i.e. \(\alpha_{s}=\iota_{yz}\). In fact, \(\alpha_{t}=\iota_{xz}\) and \(\alpha_{u}=\iota_{xy}\), so \(H=\{\mathrm{id},\iota_{xy},\iota_{xz},\iota_{yz}\}\) (see Table 1).
**Corollary 5.2**.: _The set \(\{1,\omega_{x},\omega_{y},\omega_{z}\}\) is a \(\mathcal{Z}\)-basis for \(\mathcal{L}\)._
Proof.: Since \(\mathcal{Z}=\mathcal{L}^{H}\) by Proposition 3.4, we have \([\mathcal{L}:\mathcal{Z}]=[\mathcal{L}:\mathcal{L}^{H}]=|H|=4\) and \(\mathrm{Gal}(\mathcal{L}/\mathcal{Z})=H=\{\mathrm{id},\iota_{xy},\iota_{xz}, \iota_{yz}\}\). Now, linear independence of the set \(\{1,\omega_{x},\omega_{y},\omega_{z}\}\) follows by applying the Galois automorphisms to a \(\mathcal{Z}\)-linear combination \(a+b\omega_{x}+c\omega_{y}+d\omega_{z}=0\), hence deducing \(a=b=c=d=0\).
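For completeness, here is the signed-combination step of the last sentence written out (our expansion; it is implicit in the proof above). Applying \(\operatorname{id},\iota_{xy},\iota_{xz},\iota_{yz}\) to the relation \(a+b\omega_{x}+c\omega_{y}+d\omega_{z}=0\) gives
\[a+b\omega_{x}+c\omega_{y}+d\omega_{z}=0,\qquad a-b\omega_{x}-c\omega_{y}+d\omega_{z}=0,\] \[a-b\omega_{x}+c\omega_{y}-d\omega_{z}=0,\qquad a+b\omega_{x}-c\omega_{y}-d\omega_{z}=0,\]
and taking the obvious signed sums yields \(4a=4b\omega_{x}=4c\omega_{y}=4d\omega_{z}=0\). Since \(\operatorname{char}(k)\neq 2\) and \(\omega_{x},\omega_{y},\omega_{z}\neq 0\), we conclude \(a=b=c=d=0\).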
Thus given an element \(\alpha\in\mathcal{L}\), it can be uniquely written as
\[\alpha=a+b\omega_{x}+c\omega_{y}+d\omega_{z} \tag{5.2}\]
with the coefficients \(a,b,c,d\in\mathcal{Z}\) explicitly given by
\[\begin{split}& a=\frac{1}{4}(\alpha+\iota_{xy}(\alpha)+\iota_{xz}( \alpha)+\iota_{yz}(\alpha)),\quad b=\frac{1}{4\omega_{x}}(\alpha-\iota_{xy}( \alpha)-\iota_{xz}(\alpha)+\iota_{yz}(\alpha)),\\ & c=\frac{1}{4\omega_{y}}(\alpha-\iota_{xy}(\alpha)+\iota_{xz}( \alpha)-\iota_{yz}(\alpha)),\quad d=\frac{1}{4\omega_{z}}(\alpha+\iota_{xy}( \alpha)-\iota_{xz}(\alpha)-\iota_{yz}(\alpha)).\end{split} \tag{5.3}\]
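The following snippet (our sketch, with made-up variable and function names; it is not part of the paper) checks the formulas (5.3) on a sample rational function, taking \(k=\mathbb{Q}\) and modelling the Galois automorphisms by inversion of variables.

```python
# A small symbolic sanity check of (5.2)-(5.3) -- our sketch, not from the text.
# We model L by the rational function field Q(x, y, z) and let the Galois
# automorphisms act by inverting the indicated variables.
from sympy import symbols, simplify

x, y, z = symbols('x y z')
wx, wy, wz = (x - 1/x)/2, (y - 1/y)/2, (z - 1/z)/2   # the elements omega_g

def iota(f, *invert):
    # inversion automorphism on the chosen variables
    return f.subs({v: 1/v for v in invert}, simultaneous=True)

ixy = lambda f: iota(f, x, y)
ixz = lambda f: iota(f, x, z)
iyz = lambda f: iota(f, y, z)

# a sample element alpha of L
alpha = (x**2*y - 3/z + x*y*z**2) / (1 + x + y**2)

a = (alpha + ixy(alpha) + ixz(alpha) + iyz(alpha)) / 4
b = (alpha - ixy(alpha) - ixz(alpha) + iyz(alpha)) / (4*wx)
c = (alpha - ixy(alpha) + ixz(alpha) - iyz(alpha)) / (4*wy)
d = (alpha + ixy(alpha) - ixz(alpha) - iyz(alpha)) / (4*wz)

# alpha is recovered from its components ...
assert simplify(alpha - (a + b*wx + c*wy + d*wz)) == 0
# ... and each coefficient is fixed by Gal(L/Z) = {id, i_xy, i_xz, i_yz}
for coeff in (a, b, c, d):
    assert all(simplify(coeff - g(coeff)) == 0 for g in (ixy, ixz, iyz))
print("decomposition (5.2)-(5.3) verified on the sample element")
```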
In the next lemma we compute the center of \(k[G]\).
**Lemma 5.3**.: _We have \(\mathrm{Z}(k[G])=k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\)._
Proof.: We remark that by Proposition 5.1 (iv) we have \(\mathrm{Z}(k[G])=\mathcal{K}(\xi)\cap k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\), which looks like "almost" what we claim, but nevertheless we do not use Proposition 5.1 in this proof.
We note that, given \(p\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\), the combination
\[p+p^{s}+p^{t}+p^{u}=p+\iota_{yz}(p)+\iota_{xz}(p)+\iota_{xy}(p)=\sum_{\gamma \in\mathrm{Gal}(\mathcal{L}/\mathcal{Z})}\gamma(p)\]
belongs to \(\mathrm{Z}(k[G])\). We thus consider the surjective linear map
\[P\colon k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\to\mathrm{Z}(k[G]),\quad P(p)=\frac{1}{4} \sum_{\gamma\in\mathrm{Gal}(\mathcal{L}/\mathcal{Z})}\gamma(p).\]
It then suffices to show that the image of \(P\) equals \(k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\). We start by checking that \(P(m)\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) when \(m\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) is a monomial. Furthermore, since \(P\) commutes with \(\iota_{x}\), \(\iota_{y}\), \(\iota_{z}\), we may assume that \(m\in k[x,y,z]\).
Let \(\Gamma_{+}\) be the subgroup of \(\operatorname{Aut}(\mathcal{L}/k)\) generated by \(\iota_{x},\iota_{y},\iota_{z}\), and \(\Gamma_{-}\) be the group generated by the linear maps \(-\iota_{x},-\iota_{y},-\iota_{z}\). Let
\[Q(m):=\frac{1}{8}\sum_{\gamma\in\Gamma_{+}}\gamma(m),\quad R(m):=\frac{1}{8} \sum_{\gamma\in\Gamma_{-}}\gamma(m),\]
and note that \(P(m)=Q(m)+R(m)\).
We first consider the case when \(m\) depends on a single variable, for example \(x\), thus \(m=x^{d}\) with \(d\geq 1\). We proceed by strong induction on the exponent. For \(d=1\) we have \(P(m)=\kappa_{x}\) and the claim is true. Now assume that the result holds for all degrees up to \(d\geq 1\), and we compute
\[2^{d+1}\kappa_{x}^{d+1}=\sum_{k=0}^{d+1}\binom{d+1}{k}x^{2k-(d+1)}=\sum_{k=0}^{ d+1}\binom{d+1}{k}x^{d+1-2k}.\]
By adding these two expressions and dividing by \(2\), we get
\[2^{d+1}\kappa_{x}^{d+1} =\sum_{k=0}^{d+1}\binom{d+1}{k}P(x^{d+1-2k})\] \[=2P(x^{d+1})+\sum_{k=1}^{d}\binom{d+1}{k}P(x^{d+1-2k})\] \[=2P(x^{d+1})+\sum_{k=1}^{\lfloor\frac{d+1}{2}\rfloor}\binom{d+1}{k}P(x^{d+1-2k})+\sum_{k=\lfloor\frac{d+1}{2}\rfloor+1}^{d}\binom{d+1}{k}\iota_{x}P(x^{2k-d-1}).\]
We now use strong induction to deduce that \(P(x^{d+1})\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\). The same argument works _mutatis mutandis_ for the variables \(y\), \(z\).
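As a concrete illustration of the single-variable case (our addition; the identification with Chebyshev polynomials is not needed in the proof, which only uses the membership \(P(x^{d})\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\)), one can check symbolically that \(P(x^{d})=\frac{1}{2}(x^{d}+\overline{x}^{d})\) equals the Chebyshev polynomial \(T_{d}\) evaluated at \(\kappa_{x}\):

```python
# Our sketch: P(x**d) = (x**d + x**-d)/2 is a polynomial in kappa_x = (x+1/x)/2,
# namely the Chebyshev polynomial of the first kind T_d(kappa_x).
from sympy import symbols, simplify, chebyshevt

x, k = symbols('x k')
kappa = (x + 1/x)/2
for deg in range(1, 7):
    P = (x**deg + x**-deg)/2
    assert simplify(P - chebyshevt(deg, k).subs(k, kappa)) == 0
print("P(x^d) = T_d(kappa_x) for d = 1, ..., 6")
```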
Let us now consider the case when \(m\) depends on two variables, for example \(x\) and \(y\). Thus take \(m=x^{d}y^{e}\) with \(d,e\geq 1\). Then
\[P(m)=\frac{1}{4}(x^{d}y^{e}+x^{d}\bar{y}^{e}+\bar{x}^{d}y^{e}+\bar{x}^{d}\bar{y }^{e})=\frac{1}{4}(x^{d}+\bar{x}^{d})(y^{e}+\bar{y}^{e})=P(x^{d})P(y^{e}),\]
and thus we are done by the case of a single variable. The cases for the variables \(x,z\) and \(y,z\) are treated analogously.
Finally let us consider the case of three variables \(m=x^{d}y^{e}z^{f}\), with \(d,e,f\geq 1\). We note that
\[Q(m)=\frac{1}{8}(x^{d}+\bar{x}^{d})(y^{e}+\bar{y}^{e})(z^{f}+\bar{z}^{f})=P(x^ {d})P(y^{e})P(z^{f}),\quad R(m)=\frac{1}{8}(x^{d}-\bar{x}^{d})(y^{e}-\bar{y}^ {e})(z^{f}-\bar{z}^{f}).\]
Since \(P(m)=Q(m)+R(m)\), we only need to deal with \(R(m)\). We note that the quotient
\[\alpha:=\frac{x^{d}-\bar{x}^{d}}{x-\bar{x}}\frac{y^{e}-\bar{y}^{e}}{y-\bar{y} }\frac{z^{f}-\bar{z}^{f}}{z-\bar{z}}\]
is invariant under the whole group \(\operatorname{Gal}(\mathcal{L}/\mathcal{K})=\Gamma_{+}\), and so belongs to \(\mathcal{K}=k(\kappa_{x},\kappa_{y},\kappa_{z})\). In fact, this quotient can be explicitly computed to be
\[\alpha=\Big{(}\sum_{i=0}^{d-1}x^{d-1-2i}\Big{)}\Big{(}\sum_{j=0}^{e-1}y^{e-1-2j}\Big{)}\Big{(}\sum_{k=0}^{f-1}z^{f-1-2k}\Big{)},\] and each of the three factors is invariant under the corresponding inversion automorphism, hence is a \(\mathbb{Z}\)-linear combination of \(1\) and the elements \(2P(x^{m})\) (respectively \(2P(y^{m})\), \(2P(z^{m})\)); by the single-variable case we conclude that \(\alpha\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\).
Therefore \(R(m)=\xi\alpha\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\).
This analysis proves the inclusion \(\operatorname{im}(P)\subseteq k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\). The other inclusion is clear, since \(\kappa_{x},\kappa_{y},\kappa_{z},\xi\in\mathcal{Z}\cap k[G]\).
**Remark 5.4**.: An alternative proof of Lemma 5.3 can be obtained by noting that \(\kappa_{x},\kappa_{y},\kappa_{z},\xi\) are invariant functions and then using Molien's formula to check that they generate the whole algebra of invariant functions.
We note the following simple corollary of Lemma 5.3.
**Lemma 5.5**.: _Let \(f,g\in k[\kappa_{x},\kappa_{y},\kappa_{z}]\) be such that \(\frac{f}{g}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\). Then \(\frac{f}{g}\in k[\kappa_{x},\kappa_{y},\kappa_{z}]\). Informally, we may say that if \(g|f\) in \(k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) then \(g|f\) in \(k[\kappa_{x},\kappa_{y},\kappa_{z}]\)._
Proof.: Since \(\frac{f}{g}\in\mathcal{K}\subseteq\mathcal{Z}\) we deduce that \(\frac{f}{g}\in\mathrm{Z}(k[G])\), so by Lemma 5.3 we have \(\frac{f}{g}=p+q\xi\) with \(p,q\in k[\kappa_{x},\kappa_{y},\kappa_{z}]\). Note that \(\frac{f}{g}=\iota_{x}(\frac{f}{g})=p-q\xi\) which shows that \(q=0\). Thus \(\frac{f}{g}=p\in k[\kappa_{x},\kappa_{y},\kappa_{z}]\) and the proof is complete.
Let us gather together some information from the previous lemmas.
**Proposition 5.6**.: _We have the equalities_
\[k[x^{\pm 1},y^{\pm 1},z^{\pm 1}] =k(x,y,z)\cap k[G];\] \[k[\kappa_{x},\kappa_{y},\kappa_{z},\xi] =\mathrm{Z}(k[G])=k(\kappa_{x},\kappa_{y},\kappa_{z})(\xi)\cap k[ G];\] \[k[\kappa_{x},\kappa_{y},\kappa_{z}] =k(\kappa_{x},\kappa_{y},\kappa_{z})\cap k[x^{\pm 1},y^{\pm 1},z^{ \pm 1}].\]
Proof.: The first displayed equality is clear, the second one is the content of Lemma 5.3 and the third one follows from Lemma 5.5.
### The \(\omega\)-degree of an element
In the setting of Section 4, we will concentrate on the study of \(\omega_{x}\)-conjugates. For brevity, we will denote by \(\omega\) the element \(\omega_{x}=\frac{1}{2}(x-\overline{x})\), and by \(\kappa\) the element \(\kappa_{x}=\frac{1}{2}(x+\overline{x})\). Note that the key relation here is Formula (5.1), namely
\[\kappa^{2}-\omega^{2}=1.\]
We specialize Definition 4.2 in this context.
**Definition 5.7**.: An element \(T\in\mathcal{Q}\) is called an _\(\omega\)-conjugate_ if \(T^{2}=\omega^{2}\). We say that \(T\) is _non-trivial_ if \(T\neq\pm\omega\).
By Theorem 4.1 (2), we have that \(T\) is an \(\omega\)-conjugate if and only if \(T=\omega^{U}\) for some \(U\in\mathcal{Q}\backslash\{0\}\); thus the name _\(\omega\)-conjugate_ is appropriate.
The following proposition is an extension of Proposition 4.3 in the particular case of Promislow's group.
**Proposition 5.8**.: _Let \(T\in\mathcal{Q}\) be an \(\omega\)-conjugate, and define \(U_{T}:=\kappa-T\)._
1. _The element_ \(U_{T}\) _is a unit in_ \(\mathcal{Q}\)_, with inverse_ \(U_{-T}=\kappa+T\)_._
2. \(T\in k[G]\) _if and only if_ \(U_{T}\in k[G]\)_, in which case_ \(U_{-T}\) _also belongs to_ \(k[G]\)_._
3. \(T\) _is a non-trivial_ \(\omega\)_-conjugate if and only if_ \(U_{T}\) _is a non-trivial unit._
4. _If the involution of_ \(k\) _restricts to the identity on its prime subfield, then_ \(T^{*}=-T\) _if and only if_ \(U_{T}\) _is a unitary._
_Conversely, let \(U\in\mathcal{Q}\backslash\{0\}\), and define \(T_{U}:=\omega^{U}=U\omega U^{-1}\)._
1. _The element_ \(T_{U}\) _is an_ \(\omega\)_-conjugate._
2. _If both_ \(U,U^{-1}\in k[G]\)_, then_ \(T_{U}\in k[G]\)_._
3. _If_ \(T_{U}\) _is a non-trivial_ \(\omega\)_-conjugate, then_ \(U\) _is a non-trivial unit. The converse is also true if we assume_ \(U\in k[G]\)_._
4. _If_ \(U\) _is a unitary, then_ \(T_{U}^{*}=-T_{U}\)_._
This is mostly Proposition 4.3. Nevertheless, since Proposition 5.8 gives necessary and sufficient conditions for the non-triviality of the units, we prefer to give a full detailed proof of it.
Proof of Proposition 5.8.: Since \(\kappa\in\mathcal{Z}\) we have
\[U_{T}U_{-T}=(\kappa-T)(\kappa+T)=\kappa^{2}-T^{2}=\kappa^{2}-\omega^{2}=1,\]
which shows that \(U_{T}\) is a unit in \(\mathcal{Q}\) with inverse \(U_{-T}\). Moreover, since \(\kappa\in k[G]\), it is clear that \(T\in k[G]\) if and only if \(U_{T}\in k[G]\), in which case \(U_{-T}\in k[G]\) as well. Furthermore, if the involution \(*\) is the identity over the prime subfield of \(k\), then a computation shows that \(U_{T}^{*}=\kappa-T^{*}\), and so \(U_{T}^{*}=U_{T}^{-1}\) if and only if \(T^{*}=-T\).
Let us show now that \(U_{T}\) is a non-trivial unit in case \(T\) is a non-trivial \(\omega\)-conjugate. Suppose, by way of contradiction, that \(U_{T}=\lambda g\) for some \(\lambda\in k\backslash\{0\}\) and \(g\in G\). After isolating \(T\) in the equation \(U_{T}=\lambda g\) and squaring, we get the equation
\[\lambda^{2}g^{2}-\lambda xg-\lambda\overline{x}g+1=0.\]
This equation is inconsistent unless \(g\) equals either \(x\) or \(\overline{x}\). But in such cases we get \(\lambda=1\), so \(U_{T}=x^{\pm 1}\). This leads to \(T=\pm\omega\), which are the trivial \(\omega\)-conjugates, a contradiction. Conversely, if \(T=\pm\omega\), then
\[U_{T}=\kappa-T=\kappa\mp\omega=x^{\mp 1},\]
so \(U_{T}\) is a trivial unit.
For the second part, we define \(T_{U}:=U\omega U^{-1}\) and compute
\[T_{U}^{2}=U\omega^{2}U^{-1}=\omega^{2}\]
as \(\omega^{2}\in\mathcal{Z}\). Hence \(T_{U}\) is an \(\omega\)-conjugate. It is clear that if both \(U,U^{-1}\in k[G]\) then \(T_{U}\in k[G]\). Furthermore, if \(U\) is a unitary, then \(T_{U}=U\omega U^{*}\), and since \(\omega^{*}=-\omega\) we get
\[T_{U}^{*}=U\omega^{*}U^{*}=-U\omega U^{*}=-T_{U}.\]
Let us show now that \(U\) is a non-trivial unit in case \(T_{U}\) is a non-trivial \(\omega\)-conjugate. Suppose, by way of contradiction, that \(U=\lambda g\) for some \(\lambda\in k\backslash\{0\}\) and \(g\in G\). Write \(g=n\tilde{f}\) for \(n\in N\) and \(\tilde{f}\in\tilde{F}=\{1,s,t,u\}\). Then
\[T_{U}=U\omega U^{-1}=n\omega^{\tilde{f}}\overline{n}\in\{\omega,-\omega\}\]
as \(\omega^{\tilde{f}}\in\{\omega,-\omega\}\). Thus \(T_{U}\) is a trivial \(\omega\)-conjugate, a contradiction. Conversely, suppose that \(U\in k[G]\) and that \(T_{U}=\pm\omega\). Write \(U\) as
\[U=a+bs+ct+du\]
with \(a,b,c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\). The case \(T_{U}=\omega\) implies \(U^{\omega}=U\), and so \(U=a+bs\). The same holds for \(U^{-1}\), and so \(U\) turns out to be a unit in \(k[x^{\pm 1},y^{\pm 1},z^{\pm 1}][s]\). This latter ring is a subring of \(k[\mathbb{Z}^{2}\rtimes\mathbb{Z}]\), and such groups are known not to admit non-trivial units. Hence \(U\) is a trivial unit, a contradiction.
The case \(T_{U}=-\omega\) implies \(U^{\omega}=-U\), and so \(U=ct+du=(c+ds)t\). This implies that \(c+ds\) is again a unit in \(k[x^{\pm 1},y^{\pm 1},z^{\pm 1}][s]\), and by the same argument as above \(U\) must be a trivial unit, again a contradiction. This finishes the proof.
As a corollary, we get:
**Corollary 5.9**.: _The existence of non-trivial units in the group ring \(k[G]\) is equivalent to the existence of non-trivial \(\omega\)-conjugates in \(k[G]\)._
_If moreover the involution of \(k\) restricts to the identity on its prime subfield, then the existence of non-trivial unitaries in \(k[G]\) is equivalent to the existence of non-trivial \(\omega\)-conjugates in \(k[G]\) which are anti-selfadjoint, i.e. an \(\omega\)-conjugate \(T\in k[G]\) satisfying \(T^{*}=-T\)._
Corollary 5.9 gives a condition equivalent to the existence of units, which might conceivably be considered simpler than the determinant condition proved in [2], in the following sense: the determinant condition written as an equation over \(\mathcal{Z}\) is a single equation of degree \(4\) with \(16\) variables, whereas the condition for being an \(\omega\)-conjugate is an equation of degree \(2\) with \(16\) variables.
**Remark 5.10**.: Suppose that \(U\) is a non-trivial unit in \(k[G]\). Then by the above proposition we get that \(T_{U}\) is a non-trivial \(\omega\)-conjugate in \(k[G]\). Therefore \(V:=\kappa-T_{U}\) is again a non-trivial unit in \(k[G]\). Moreover, its inverse is given by \(V^{-1}=\kappa+T_{U}=2\kappa-V\). Thus \(V\) satisfies
\[2\kappa V-V^{2}=1.\]
From this we obtain that there are non-trivial units in \(k[G]\) if and only if there are non-trivial units in \(k[G]\) satisfying the equation
\[X^{2}-2\kappa X+1=0.\]
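Let us record a small observation complementing this remark (ours, consistent with the argument in the proof of Proposition 5.8): inside the commutative subfield \(\mathcal{Z}(\omega)\) the quadratic factors as
\[X^{2}-2\kappa X+1=(X-x)(X-\overline{x}),\]
since \(x+\overline{x}=2\kappa\) and \(x\overline{x}=1\). Hence the only solutions lying in \(\mathcal{Z}(\omega)\) are the trivial units \(x^{\pm 1}\), and any solution in \(k[G]\) other than \(x^{\pm 1}\) is automatically a non-trivial unit, with inverse \(2\kappa-X\).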
We will now study \(\omega\)-conjugates more closely. Take \(\phi:\mathcal{Q}\to\mathcal{Q}\) to be any \(k\)-automorphism of \(\mathcal{Q}\) of order \(2\) fixing \(\omega^{2}\), that is \(\phi(\omega^{2})=\omega^{2}\). Then any element \(\alpha\in\mathcal{Q}\) can be uniquely written as
\[\alpha=\alpha_{1}+\alpha_{-1}\]
where \(\alpha_{1},\alpha_{-1}\in\mathcal{Q}\) are eigenvectors of \(\phi\) with eigenvalues \(1\) and \(-1\), respectively. In fact, they are explicitly given by
\[\alpha_{1}=\frac{1}{2}(\alpha+\phi(\alpha)),\quad\alpha_{-1}=\frac{1}{2}( \alpha-\phi(\alpha)).\]
Moreover \(\phi\) preserves \(\omega\)-conjugacy, in the sense that if \(T\in\mathcal{Q}\) is an \(\omega\)-conjugate, then \(\phi(T)\) is also an \(\omega\)-conjugate, simply because
\[\phi(T)^{2}=\phi(T^{2})=\phi(\omega^{2})=\omega^{2}.\]
**Definition 5.11**.: The \(\phi\)_-degree_ of an element \(\alpha\in\mathcal{Q}\) is the pair of natural numbers
\[\deg_{\phi}(\alpha):=([\mathcal{Z}(\alpha_{1}):\mathcal{Z}],[\mathcal{Z}( \alpha_{-1}):\mathcal{Z}])\in\mathbb{N}\times\mathbb{N},\]
where \(\alpha_{1},\alpha_{-1}\) are the elements determining the \(\phi\)-decomposition of \(\alpha\).
We note that, for any \(D\in\mathcal{Q}\), the division algebra \(\mathcal{Z}(D)\) is in fact a subfield of \(\mathcal{Q}\) containing \(\mathcal{Z}\). Thus we may use Theorem 4.1 (1) to conclude that \([\mathcal{Z}(D):\mathcal{Z}]\leq 4\), where we recall that \([\mathcal{Q}:\mathcal{Z}]=16\). Moreover, since
\[\mathcal{Z}\subseteq\mathcal{Z}(D)\subseteq\mathcal{Q}\]
necessarily \([\mathcal{Z}(D):\mathcal{Z}]\) must divide \([\mathcal{Q}:\mathcal{Z}]=16\). We thus get a finite number of options for the \(\phi\)-degree of an element \(\alpha\in\mathcal{Q}\), namely
\[\deg_{\phi}(\alpha)\in\{(1,1),(1,2),(1,4),(2,1),(2,2),(2,4),(4,1),(4,2),(4,4)\}.\]
In Proposition 5.14 we further restrict the \(\phi\)-degree of an \(\omega\)-conjugate \(T\in\mathcal{Q}\) in case \(T_{1},T_{-1}\neq 0\). First, we establish a technical lemma.
**Lemma 5.12**.: _Let \(T\in\mathcal{Q}\) be an \(\omega\)-conjugate, \(\phi:\mathcal{Q}\to\mathcal{Q}\) a \(k\)-automorphism fixing \(\omega^{2}\), and let \(T=T_{1}+T_{-1}\) be its \(\phi\)-decomposition. Then \(T_{1}T_{-1}=-T_{-1}T_{1}\), and moreover if both \(T_{1},T_{-1}\neq 0\) we also have the equalities_
\[\mathrm{Z}(\mathcal{Z}[T_{1},T_{-1}])=\mathcal{Z}(T_{1}^{2})=\mathcal{Z}(T_{-1}^{2})\]
_and_
\[[\mathcal{Z}(T_{1}):\mathcal{Z}]=[\mathcal{Z}(T_{-1}):\mathcal{Z}].\]
_Here \(\mathcal{Z}[T_{1},T_{-1}]\) is the subalgebra of \(\mathcal{Q}\) generated by \(T_{1}\) and \(T_{-1}\) over \(\mathcal{Z}\). As a consequence, if both \(T_{1},T_{-1}\neq 0\), then_
\[\deg_{\phi}(T)\in\{(1,1),(2,2),(4,4)\}.\]
Proof.: Let us write \(\mathcal{R}:=\mathcal{Z}[T_{1},T_{-1}]\). We compute
\[T_{1}T_{-1}+T_{-1}T_{1} =\frac{1}{4}\big{[}(T+\phi(T))(T-\phi(T))+(T-\phi(T))(T+\phi(T)) \big{]}\] \[=\frac{1}{2}(T^{2}-\phi(T)^{2})\] \[=\frac{1}{2}(\omega^{2}-\omega^{2})=0.\]
In particular, \(T^{2}=T_{1}^{2}+T_{-1}^{2}=\omega^{2}\in\mathcal{Z}\), hence \(\mathcal{Z}(T_{1}^{2})=\mathcal{Z}(T_{-1}^{2})\).
Now we assume that \(T_{1},T_{-1}\neq 0\), so in particular \(\mathcal{R}\) is not commutative. A direct computation shows that \(T_{1}^{2}\) commutes with \(T_{-1}\), and \(T_{-1}^{2}\) commutes with \(T_{1}\), hence \(T_{1}^{2},T_{-1}^{2}\in\mathrm{Z}(\mathcal{R})\). This shows the inclusion
\[\mathcal{Z}(T_{1}^{2})\subseteq\mathrm{Z}(\mathcal{R}). \tag{5.4}\]
**Claim 5.13**.: If either \(T_{1}^{2}\in\mathcal{Z}\) or \(T_{-1}^{2}\in\mathcal{Z}\), then \([\mathcal{R}:\mathcal{Z}]\leq 4\).
Proof of Claim.: Due to the relation \(T_{1}^{2}+T_{-1}^{2}=\omega^{2}\in\mathcal{Z}\), we see that both \(T_{1}^{2},T_{-1}^{2}\in\mathcal{Z}\). It follows that every element of \(\mathcal{R}\) can be written as a combination
\[z_{1}+z_{2}T_{1}+z_{3}T_{-1}+z_{4}T_{1}T_{-1},\quad z_{1},z_{2},z_{3},z_{4}\in \mathcal{Z}.\]
The claim follows.
Now \(\mathcal{Z}\subseteq\mathcal{R}\subseteq\mathcal{Q}\), so the only possibilities for \([\mathcal{R}:\mathcal{Z}]\) are in fact \(1,2,4,8\) and \(16\). Let us check the statement of the lemma case by case.
1. If \([\mathcal{R}:\mathcal{Z}]=16\), then \(\mathcal{R}=\mathcal{Q}\), and so \(\mathrm{Z}(\mathcal{R})=\mathcal{Z}\subseteq\mathcal{Z}(T_{1}^{2})\). We are done by (5.4).
2. If \([\mathcal{R}:\mathcal{Z}]=8\), then by Theorem 4.1 (1) the quantity \([\mathcal{R}:\mathrm{Z}(\mathcal{R})]\) must be a square dividing \(8\), thus it is either \(1\) or \(4\). But \(\mathcal{R}\) is not commutative, so necessarily \([\mathcal{R}:\mathrm{Z}(\mathcal{R})]=4\) and so \([\mathrm{Z}(\mathcal{R}):\mathcal{Z}]=2\). Thus by (5.4) it is enough to argue that \(T_{1}^{2}\notin\mathcal{Z}\). But this follows from Claim 5.13.
3. If \([\mathcal{R}:\mathcal{Z}]=4\), then again Theorem 4.1 (1) implies that \(\mathrm{Z}(\mathcal{R})=\mathcal{Z}\subseteq\mathcal{Z}(T_{1}^{2})\). We are again done by (5.4).
4. The cases \([\mathcal{R}:\mathcal{Z}]=1\) and \(2\) lead to the conclusion that \(\mathcal{R}\) is commutative, which is a contradiction.
This analysis finishes the first equality. For the second one, simply note that
\[[\mathcal{Z}(T_{1}):\mathcal{Z}]=[\mathcal{Z}(T_{1}):\mathcal{Z}(T_{1}^{2})][ \mathcal{Z}(T_{1}^{2}):\mathcal{Z}]=[\mathcal{Z}(T_{1}):\mathcal{Z}(T_{1}^{2}) ][\mathrm{Z}(\mathcal{R}):\mathcal{Z}],\]
\[[\mathcal{Z}(T_{-1}):\mathcal{Z}]=[\mathcal{Z}(T_{-1}):\mathcal{Z}(T_{-1}^{2})][ \mathcal{Z}(T_{-1}^{2}):\mathcal{Z}]=[\mathcal{Z}(T_{-1}):\mathcal{Z}(T_{-1}^{2}) ][\mathrm{Z}(\mathcal{R}):\mathcal{Z}],\]
and moreover \([\mathcal{Z}(T_{1}):\mathcal{Z}(T_{1}^{2})]=[\mathcal{Z}(T_{-1}):\mathcal{Z}(T_ {-1}^{2})]=2\), since both \(\mathcal{Z}(T_{1}^{2})=\mathcal{Z}(T_{-1}^{2})=\mathrm{Z}(\mathcal{R})\) and \(T_{1}\) does not commute with \(T_{-1}\). Therefore we conclude that
\[[\mathcal{Z}(T_{1}):\mathcal{Z}]=[\mathcal{Z}(T_{-1}):\mathcal{Z}].\]
The statement of the \(\phi\)-degrees now follows.
The first part of the following proposition shows that non-trivial \(\omega\)-conjugates fall naturally into two classes, assuming that both \(T_{1},T_{-1}\neq 0\).
**Proposition 5.14**.: _Let \(T\in\mathcal{Q}\) be an \(\omega\)-conjugate, \(\phi:\mathcal{Q}\to\mathcal{Q}\) a \(k\)-automorphism fixing \(\omega^{2}\), and let \(T=T_{1}+T_{-1}\) be its \(\phi\)-decomposition. Assume that \(T_{1},T_{-1}\neq 0\)._
_Then either \(T\) is a trivial \(\omega\)-conjugate or \(\deg_{\phi}(T)\in\{(2,2),(4,4)\}\). Moreover, in the particular case that \(T\) is non-trivial, the following are equivalent._
1. \(\deg_{\phi}(T)=(2,2)\)_._
2. \(T_{1}^{2},T_{-1}^{2}\in\mathcal{Z}\)_._
3. \([\mathcal{Z}[T_{1},T_{-1}]:\mathcal{Z}]=4\)_._
Proof.: By Lemma 5.12, \(\deg_{\phi}(T)\in\{(1,1),(2,2),(4,4)\}\). Therefore to establish the first part of the proposition it is enough to argue that \(\deg_{\phi}(T)\neq(1,1)\) in case \(T\) is a non-trivial \(\omega\)-conjugate. However, if \(\deg_{\phi}(T)=(1,1)\), then \(T\in\mathcal{Z}\) and the equation \(X^{2}=\omega^{2}\) would have more than two solutions in the field \(\mathcal{Z}(\omega)\), namely \(\{\pm\omega,\pm T\}\), which cannot happen. This concludes the first part of the proposition.
For the second part of the proposition, suppose that \(T\) is a non-trivial \(\omega\)-conjugate. We first prove (i) \(\Rightarrow\) (ii). We note that \(\deg_{\phi}(T)=(2,2)\) implies that we can find \(\alpha,\beta\in\mathcal{Z}\) such that \(T_{1}^{2}=\alpha T_{1}+\beta\). Since \(T_{1}^{2}\) commutes with \(T_{-1}\), we deduce that
\[\alpha T_{1}+\beta=(\alpha T_{1}+\beta)^{T_{-1}}=-\alpha T_{1}+\beta,\]
i.e. \(T_{1}^{2}=\beta\in\mathcal{Z}\). Therefore \(T_{-1}^{2}=\omega^{2}-T_{1}^{2}\in\mathcal{Z}\) also, and this proves (ii).
To establish (ii) \(\Rightarrow\) (iii), let us assume that both \(T_{1}^{2},T_{-1}^{2}\in\mathcal{Z}\). By Claim 5.13 we get \([\mathcal{Z}[T_{1},T_{-1}]:\mathcal{Z}]\in\{1,2,4\}\). Since \(T_{1},T_{-1}\neq 0\) by assumption, \(\mathcal{Z}[T_{1},T_{-1}]\) is not commutative. Hence necessarily \([\mathcal{Z}[T_{1},T_{-1}]:\mathcal{Z}]=4\).
Finally we establish (iii) \(\Rightarrow\) (i), so assume \([\mathcal{Z}[T_{1},T_{-1}]:\mathcal{Z}]=4\). Then by Lemma 5.12 we have
\[4=[\mathcal{Z}[T_{1},T_{-1}]:\mathcal{Z}]=[\mathcal{Z}[T_{1},T_{-1}]:\mathrm{Z}(\mathcal{Z}[T_{1},T_{-1}])][\mathcal{Z}(T_{1}^{2}):\mathcal{Z}].\]
Since \(\mathcal{Z}[T_{1},T_{-1}]\) is not commutative, using Theorem 4.1 (1) we deduce that \([\mathcal{Z}(T_{1}^{2}):\mathcal{Z}]=1\), i.e. \(T_{1}^{2}\in\mathcal{Z}\). Since \(T_{1}\notin\mathcal{Z}\), necessarily \([\mathcal{Z}(T_{1}):\mathcal{Z}]=2\) and thus, by the first part of the proposition, we must have \(\deg_{\phi}(T)=(2,2)\). This finishes the proof.
Let us note an important improvement of Proposition 5.14 in a particular case. Recall that conjugation by \(\omega\) is an automorphism of \(\mathcal{Q}\) of order 2 which preserves \(k[G]\). Explicitly, we have
\[(a+bs+ct+du)^{\omega}=a+bs-ct-du\]
for \(a,b,c,d\in\mathcal{L}=k(x,y,z)\). We will denote by \(\omega\) itself the automorphism of \(\mathcal{Q}\) given by conjugation by \(\omega\). The \(\omega\)-decomposition of \(T=a+bs+ct+du\) is given explicitly by \(T_{1}=a+bs\) and \(T_{-1}=ct+du\).
**Proposition 5.15**.: _Let \(T\in\mathcal{Q}\) be an \(\omega\)-conjugate and \(T=T_{1}+T_{-1}\) be its \(\omega\)-decomposition. If \(T\) is non-trivial, then necessarily \(T_{1},T_{-1}\neq 0\)._
Combining Propositions 5.15 and 5.14, we conclude that an \(\omega\)-conjugate \(T\in\mathcal{Q}\) is either trivial or \(\deg_{\omega}(T)\in\{(2,2),(4,4)\}\), in which case the following equivalences apply:
1. \(\deg_{\omega}(T)=(2,2)\).
2. \(T_{1}^{2},T_{-1}^{2}\in\mathcal{Z}\).
3. \([\mathcal{Z}[T_{1},T_{-1}]:\mathcal{Z}]=4\).
Proof of Proposition 5.15.: If we denote by \(\overline{k}\) the algebraic closure of \(k\), then the inclusion \(k\subseteq\overline{k}\) gives an inclusion \(k[G]\hookrightarrow\overline{k}[G]\), which in turn extends to an inclusion of the corresponding Ore field of fractions. Therefore we may assume, without loss of generality, that \(k\) is algebraically closed.
We first show that we can never have \(T_{1}=0\). By way of contradiction, let us assume that \(T_{1}=0\). We have \(\omega T_{-1}=-T_{-1}\omega\) and \(T_{-1}^{2}=\omega^{2}\), thus \(k(\omega^{2})[\omega,T_{-1}]\) is a division algebra with \([k(\omega^{2})[\omega,T_{-1}]:k(\omega^{2})]=4\), and whose center is \(k(\omega^{2})\). However, Tsen's Theorem (see [3]) shows that the field of rational functions in one variable over an algebraically closed field cannot be the center of a finite-dimensional division algebra. This contradiction shows that \(T_{1}\neq 0\).
Now, in case \(T_{-1}=0\) and \(T=T_{1}\neq\pm\omega\), then \(\mathcal{Z}[\omega,T_{1}]\) is a field in which the equation \(X^{2}=\omega^{2}\) has at least 4 different solutions, namely \(\{\pm\omega,\pm T_{1}\}\), which is a contradiction. This concludes the proof of the proposition.
**Definition 5.16**.: A non-trivial unit \(U\in k[G]\) is said to be of _type 2_ if \(\deg_{\omega}(\omega^{U})=(2,2)\). Otherwise it is said to be of type 4, i.e. in case \(\deg_{\omega}(\omega^{U})=(4,4)\).
**Remark 5.17**.:
1. Let us see that there are \(\omega\)-conjugates in \(\mathcal{Q}\) whose \(\omega\)-degree is \((4,4)\). Indeed, let \(k=\mathbb{Q}\) and let \(U:=(1+x)+u\). Then a direct check shows that \[U^{-1}=\frac{1+\bar{x}}{2+x+\bar{x}-z}-\frac{1}{2+x+\bar{x}-z}u,\] and so \[T:=\omega^{U}=\frac{2+x+\bar{x}+z}{2+x+\bar{x}-z}\omega-\frac{2(1+x)\omega}{2+ x+\bar{x}-z}u.\] Note that here, if \(T=T_{1}+T_{-1}\) denotes the \(\omega\)-decomposition of \(T\), we obtain \[T_{1}=\frac{2+x+\bar{x}+z}{2+x+\bar{x}-z}\omega\] and another computation shows that \(T_{1}^{2}\) is not invariant under \(\iota_{z}\), so that \(T_{1}^{2}\notin\mathcal{Z}\).
2. On the other hand, there are no non-trivial \(\omega\)-conjugates in \(k[G]\) of the form \(T=a+bt\), where \(a,b\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\), by the familiar argument using the unique product property. Indeed, we can replace every occurrence of \(y\) with \(t^{2}\) to express \(T\) as a polynomial in \(t\) with coefficients in \(k[x^{\pm 1},z^{\pm 1}]\), and it is easy to argue that \(T^{2}\in k[x^{\pm 1}]\) implies that in fact \(T\in k[x^{\pm 1}]\), which then implies that \(T=\pm\omega\).
3. In Proposition 5.21 below we use the results from [8] to find non-trivial \(\omega\)-conjugates of \(\omega\)-degree \((2,2)\) in \(k[G]\), whenever \(\operatorname{char}(k)>2\). However, we were not able to find \(\omega\)-conjugates in \(k[G]\) with \(\omega\)-degree \((4,4)\).
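The explicit example in part (1) of Remark 5.17 can be verified with a short symbolic computation. The sketch below is ours: it models the subalgebra spanned by \(1\) and \(u\) over \(k(x,z)\) by pairs \((f,g)\) standing for \(f+gu\), with the multiplication rule \(u\,h(x,z)=h(\overline{x},z)\,u\) and \(u^{2}=z\) (as dictated by Table 1 and \(z=u^{2}\)); it checks that \(U=(1+x)+u\) is a unit with the stated inverse and that \(T=\omega^{U}\) squares to \(\omega^{2}\).

```python
# Our sketch: elements f + g*u with f, g in Q(x, z), where u x u^{-1} = 1/x,
# u commutes with z, and u**2 = z.  Multiplication rule:
#   (f + g u)(h + l u) = (f h + g sigma(l) z) + (f l + g sigma(h)) u,
# where sigma inverts x and fixes z.
from sympy import symbols, simplify, sympify

x, z = symbols('x z')
sigma = lambda f: sympify(f).subs(x, 1/x)

def mul(A, B):
    f, g = A
    h, l = B
    return (simplify(f*h + g*sigma(l)*z), simplify(f*l + g*sigma(h)))

omega = ((x - 1/x)/2, 0)
D = 2 + x + 1/x - z
U = (1 + x, 1)                        # U = (1 + x) + u
Uinv = ((1 + 1/x)/D, -1/D)            # the claimed inverse of U

UUinv = mul(U, Uinv)
UinvU = mul(Uinv, U)
assert simplify(UUinv[0] - 1) == 0 and simplify(UUinv[1]) == 0
assert simplify(UinvU[0] - 1) == 0 and simplify(UinvU[1]) == 0

T = mul(mul(U, omega), Uinv)          # T = omega^U = U omega U^{-1}
T2 = mul(T, T)
assert simplify(T2[0] - ((x - 1/x)/2)**2) == 0 and simplify(T2[1]) == 0
print("U U^{-1} = U^{-1} U = 1 and (omega^U)^2 = omega^2")
```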
For the purpose of searching for \(\omega\)-conjugates it is convenient to assume some extra hypotheses about the coefficients. The following proposition shows that, if there are non-trivial \(\omega\)-conjugates in \(k[G]\) with \(\omega\)-degree \((2,2)\), then there are also non-trivial \(\omega\)-conjugates with the same \(\omega\)-degree, but whose \(\omega\)-decomposition \(T_{1}+T_{-1}\) satisfies \(T_{1}\omega\in\mathcal{Z}\).
**Proposition 5.18**.: _Suppose that \(T\in\mathcal{Q}\) is an \(\omega\)-conjugate with \(\deg_{\omega}(T)=(2,2)\), and consider the unit \(U_{T}=\kappa-T\) given in Proposition 5.8. Then the element_
\[S:=\omega^{U_{T}}\]
_has the following properties:_
1. \(S\) _is an_ \(\omega\)_-conjugate with_ \(\deg_{\omega}(S)=(2,2)\)_._
2. _If_ \(S=S_{1}+S_{-1}\) _is the_ \(\omega\)_-decomposition of_ \(S\)_, then_ \(S_{1}=\alpha\omega\) _for some_ \(\alpha\in\mathcal{Z}\)_. If moreover_ \(T\in k[G]\) _then we can take_ \(\alpha\in k[G]\)_._
3. _If the involution of_ \(k\) _restricts to the identity on its prime subfield and_ \(T^{*}=-T\)_, then also_ \(S^{*}=-S\)_._
Proof.: Let \(T=T_{1}+T_{-1}\) be the \(\omega\)-decomposition of \(T\). We have
\[S_{1} =\frac{1}{2}(S+S^{\omega})\] \[=\frac{1}{2}(\omega^{U_{T}}+\omega^{\omega U_{T}})\] \[=\frac{1}{2}\Big{(}(\kappa-T_{1}-T_{-1})(\kappa+T_{1}-T_{-1})+( \kappa-T_{1}+T_{-1})(\kappa+T_{1}+T_{-1})\Big{)}\omega\] \[=(\kappa^{2}-T_{1}^{2}+T_{-1}^{2})\omega\] \[=(\kappa^{2}-\omega^{2}+2T_{-1}^{2})\omega=(1+2T_{-1}^{2})\omega\]
and
\[S_{-1} =\frac{1}{2}(S-S^{\omega})\] \[=\frac{1}{2}(\omega^{U_{T}}-\omega^{\omega U_{T}})\] \[=\frac{1}{2}\Big{(}(\kappa-T_{1}-T_{-1})(\kappa+T_{1}-T_{-1})-( \kappa-T_{1}+T_{-1})(\kappa+T_{1}+T_{-1})\Big{)}\omega\] \[=2(T_{1}T_{-1}-\kappa T_{-1})\omega=2(T_{1}-\kappa)T_{-1}\omega.\]
Since \(\deg_{\omega}(T)=(2,2)\), we see that \(T_{1}-\kappa\neq 0\), hence \(S_{-1}\neq 0\) and therefore \(\deg_{\omega}(S)\) is either \((2,2)\) or \((4,4)\). By Propositions 5.14 and 5.15, \(1+2T_{-1}^{2}\in\mathcal{Z}\), and hence \([\mathcal{Z}(S_{1}):\mathcal{Z}]\leq 2\). This establishes both (i) and (ii) with \(\alpha=1+2T_{-1}^{2}\). Observe that if \(T\in k[G]\) then also \(T^{\omega}\in k[G]\), so \(T_{-1}\in k[G]\) and \(\alpha\in k[G]\).
As for (iii), we recall (see the proof of Corollary 5.9) that if \(T\) is anti-selfadjoint then \(U_{T}\) is a unitary. In this case, we compute
\[S^{*}=(\omega^{U_{T}})^{*}=(U_{T}\omega U_{T}^{*})^{*}=U_{T}\omega^{*}U_{T}^{*}.\]
Since \(\omega^{*}=-\omega\), this concludes the proof that \(S^{*}=-S\), and the proof of the proposition.
As corollaries, we get the following results.
**Corollary 5.19**.: _The following hold true:_
1. _There exists a non-trivial unit_ \(U\in k[G]\) _of type_ \(2\) _if and only if there exists a non-trivial_ \(\omega\)_-conjugate_ \(S\in k[G]\) _with_ \(\deg_{\omega}(S)=(2,2)\) _of the form_ \[S=\alpha\omega+ct+du,\] _with_ \(\alpha\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) _and_ \(c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\)_._
2. _Assuming that the involution of_ \(k\) _restricts to the identity on its prime subfield, there exists a non-trivial unitary_ \(U\in k[G]\) _of type_ \(2\) _if and only if there exists a non-trivial anti-selfadjoint_ \(\omega\)_-conjugate_ \(S\in k[G]\) _with_ \(\deg_{\omega}(S)=(2,2)\) _of the form_ \[S=\alpha\omega+ct+du,\] _with_ \(\alpha\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) _and_ \(c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\)_._
Proof.: Given such a unit \(U\) of type \(2\), we let \(T:=\omega^{U}\), which has \(\omega\)-degree \((2,2)\) by assumption. In particular \(T\in k[G]\) is non-trivial, and so the unit \(U_{T}:=\kappa-T\) is a non-trivial unit in \(k[G]\) by Proposition 5.8. Thus \(S:=\omega^{U_{T}}\) is also a non-trivial \(\omega\)-conjugate in \(k[G]\) by the same proposition, with \(\omega\)-degree \((2,2)\) and of the required form by Propositions 5.18 and 5.6.
Conversely, let \(S\in k[G]\) be a non-trivial \(\omega\)-conjugate of the stated form, with \(\omega\)-degree \((2,2)\). Then the associated unit \(U_{S}:=\kappa-S\) is a non-trivial unit in \(k[G]\) of type \(2\) by Propositions 5.8 and 5.18. This proves part (a).
Part (b) is completely analogous, taking into account the corresponding statements relating unitarity of \(U_{T}\) and anti-selfadjointness of \(T\).
**Corollary 5.20**.: _There exists a non-trivial unit \(U\in k[G]\) of type \(2\) if and only if there exists a non-trivial unit \(V\in k[G]\) of the form_
\[V=(\kappa-\alpha\omega)-ct-du\]
_where \(\alpha\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) and \(c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\), whose inverse is \(V^{-1}=(\kappa+\alpha\omega)+ct+du\)._
Proof.: If \(U\in k[G]\) is a non-trivial unit of type \(2\), then by Corollary 5.19 there is a non-trivial \(\omega\)-conjugate \(S\in k[G]\) of the form \(S=\alpha\omega+ct+du\) with \(\alpha\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) and \(c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\). By Proposition 5.8, the unit \(V:=\kappa-S\) is non-trivial in \(k[G]\) with inverse \(V^{-1}=\kappa+S\), which is of the desired form.
Conversely, suppose that \(V=(\kappa-\alpha\omega)-ct-du\) is a non-trivial unit in \(k[G]\) with inverse \(V^{-1}=(\kappa+\alpha\omega)+ct+du\). We compute
\[\omega^{V} =V\omega V^{-1}\] \[=((\kappa^{2}-\alpha^{2}\omega^{2})+cc^{t}y+dd^{u}z)\omega+(cd^{t}\overline{x}\overline{z}+c^{u}d\overline{y})\omega s+2(\kappa-\alpha\omega)\omega ct+2(\kappa-\alpha\omega)\omega du\] \[=(1+2\omega^{2}(1-\alpha^{2}))\omega+2(\kappa-\alpha\omega)\omega ct+2(\kappa-\alpha\omega)\omega du,\]
where we have used that \(VV^{-1}=1\). From here we conclude that \((\omega^{V})_{1}^{2}\in\mathcal{Z}\), and so \(\deg_{\omega}(\omega^{V})=(2,2)\) by Propositions 5.14 and 5.15.
The following proposition, which uses the existence of non-trivial units in \(k[G]\) in the case \(\mathrm{char}(k)\neq 0\) recently proved in [5, 8], establishes the existence of non-trivial \(\omega\)-conjugates of the first class, namely those of \(\omega\)-degree \((2,2)\).
**Proposition 5.21**.: _Let \(\Gamma=G\) be Promislow's group. If \(\mathrm{char}(k)>2\) then there exists a non-trivial unit in \(k[G]\) of type \(2\)._
Proof.: Write \(p:=\mathrm{char}(k)>2\). In [8, Theorem 3] the author constructs non-trivial units in \(k[G]\). Explicitly, these units are given as follows.
For any choice of parameters \(m,n\in\mathbb{Z}\), define \(h:=(1-z^{1-2n})^{p-2}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) and elements \(a,b,c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) by
\[a :=(1+x)(1+y)(z^{n}+z^{1-n})h,\] \[b :=z^{m}[(1+x)(\overline{x}+\overline{y})+(1+\overline{y})(1+z^{2n- 1})]h,\] \[c :=z^{m}[(1+\overline{y})(x+y)z^{n}+(1+x)(z^{n}+z^{1-n})]h,\] \[d :=z^{2n-1}+(4+x+\overline{x}+y+\overline{y})h.\]
Then \(U:=a+bs+ct+du\) is a non-trivial unit in \(k[G]\), with inverse given by
\[U^{-1}=a^{\prime}+b^{\prime}s+c^{\prime}t+d^{\prime}u,\]
where
\[a^{\prime}=\overline{x}a^{s},\quad b^{\prime}=-\overline{x}b,\quad c^{\prime}=- \overline{y}c,\quad d^{\prime}=\overline{z}d^{s}.\]
By Proposition 5.8, the element \(T:=\omega^{U}\) is a non-trivial \(\omega\)-conjugate in \(k[G]\). We will show that \(\deg_{\omega}(T)=(2,2)\), which will prove the proposition. We first obtain the \(\omega\)-decomposition of \(T\):
\[T=\omega^{U}=U\omega U^{-1}=(a+bs+ct+du)(a^{\prime}+b^{\prime}s-c^{\prime}t-d ^{\prime}u)\omega,\]
and so
\[T_{1}=\frac{1}{2}(T+T^{\omega})= \Big{[}(aa^{\prime}+xb(b^{\prime})^{s}-yc(c^{\prime})^{t}-zd(d^{ \prime})^{u})\] \[+(ab^{\prime}+b(a^{\prime})^{s}-\overline{x}\ \overline{z}c(d^{ \prime})^{t}-\overline{y}d(c^{\prime})^{u})s\Big{]}\omega,\] \[T_{-1}=\frac{1}{2}(T-T^{\omega})= \Big{[}(-ac^{\prime}-xb(d^{\prime})^{s}+c(a^{\prime})^{t}+ \overline{y}zd(b^{\prime})^{u})t\] \[+(-ad^{\prime}-b(c^{\prime})^{s}+\overline{x}y\overline{z}c(b^{ \prime})^{t}+d(a^{\prime})^{u})u\Big{]}\omega.\]
Since \((a^{\prime})^{s}=\overline{x}a\), \((c^{\prime})^{u}=-\bar{x}yc\) and \((d^{\prime})^{t}=zd\), we see that
\[ab^{\prime}+b(a^{\prime})^{s}=-\overline{x}ab+\overline{x}ba=0\]
and also
\[\overline{x}\ \overline{z}c(d^{\prime})^{t}+\overline{y}d(c^{\prime})^{u}= \overline{x}cd-\overline{x}dc=0.\]
Hence the expression accompanying the element \(s\) in the expression of \(T_{1}\) is exactly \(0\), which leads to
\[T_{1}=(aa^{\prime}+xb(b^{\prime})^{s}-yc(c^{\prime})^{t}-zd(d^{\prime})^{u}) \omega=:\alpha\omega.\]
We compute the expression (5.2) of \(\alpha\in\mathcal{L}\) as a \(\mathcal{Z}\)-linear combination using the formulas (5.3). In fact, a direct computation shows that
\[\alpha-\iota_{xy}(\alpha)-\iota_{xz}(\alpha)+\iota_{yz}(\alpha) =0,\] \[\alpha-\iota_{xy}(\alpha)+\iota_{xz}(\alpha)-\iota_{yz}(\alpha) =0,\] \[\alpha+\iota_{xy}(\alpha)-\iota_{xz}(\alpha)-\iota_{yz}(\alpha) =0.\]
We thus conclude that \(\alpha\) is a central element in \(\mathcal{Q}\). Therefore \(T_{1}^{2}=\alpha^{2}\omega^{2}\in\mathcal{Z}\), and this already implies that \(\deg_{\omega}(T)=(2,2)\) by Proposition 5.15. This finishes the proof.
It is an interesting question whether we can find non-trivial units in \(k[G]\) of type \(2\) in the case \(\operatorname{char}(k)=0\). Moreover, when the involution of \(k\) restricts to the identity on its prime subfield, it is also interesting to ask about non-trivial unitaries in \(k[G]\) of type \(2\). These questions are handled in the next section.
### Criteria for existence of \(\omega\)-conjugates with \(\omega\)-degree \((2,2)\)
In this section we provide criteria for existence of non-trivial units in \(k[G]\) of type \(2\). They will be stated for an arbitrary field \(k\) with \(\operatorname{char}(k)\neq 2\), although in view of Proposition 5.21 they are interesting only when \(\operatorname{char}(k)=0\).
In view of Corollary 5.19, we will study non-trivial \(\omega\)-conjugates \(T\in\mathcal{Q}\) of the form
\[T=a\omega+ct+du, \tag{5.5}\]
where \(a\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\subseteq\mathcal{Z}\) and \(c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\subseteq\mathcal{L}\). Non-triviality of \(T\) implies that both \(c,d\neq 0\). Indeed, if \(c=0\) then \(T=a\omega+du\) and \(U_{T}:=\kappa-T=(\kappa-a\omega)-du\) would be a non-trivial unit in \(k[x^{\pm 1},y^{\pm 1},z^{\pm 1}][u]\). An argument similar to the one given in the proof of Proposition 5.8 shows that \(U_{T}\) cannot be non-trivial, giving a contradiction. Similarly, we can prove that \(d\neq 0\). The criteria are obtained by considering the equation
\[T^{2}=\omega^{2}\]
and writing it using explicit bases of \(\mathcal{Q}\) over \(\mathcal{Z}\) which are convenient for computations. Let us start by introducing such bases.
The first column of the following table introduces a basis of \(\mathcal{L}\) over \(\mathcal{Z}\) convenient for studying the coefficient \(c\).
Columns \(2\)-\(4\) of Table 3 show how conjugation acts on the elements of the basis, e.g. \(\gamma_{1}^{s}=-y\cdot\gamma_{1}\). Note that these conjugation relations actually prove that the \(\gamma_{i}\) form a basis. The final two columns are for future reference. We briefly note that \(y,\bar{x}\) and \(\bar{x}y\) are _not_ central elements.
The second basis, for studying the coefficient \(d\), is presented in the following table.
Let us also note the following relations for future reference.
\[\begin{split}\frac{\gamma_{2}\delta_{2}}{\gamma_{3}\delta_{3}}& =\frac{\gamma_{4}\delta_{4}}{\gamma_{1}\delta_{1}}=\frac{2\omega_{x}(1-x)}{(1 +x)}=\frac{2\omega_{x}(1-x^{2})}{1+2x+x^{2}}=\frac{-4\omega_{x}^{2}}{2\kappa_{ x}+2}=-2(\kappa_{x}-1),\\ \frac{\gamma_{1}\delta_{2}}{\gamma_{4}\delta_{3}}&= \frac{\gamma_{3}\delta_{4}}{\gamma_{2}\delta_{1}}=\frac{2\omega_{x}(1+x)}{1-x }=-2(\kappa_{x}+1).\end{split} \tag{5.6}\]
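The entries of the last-but-one columns of Tables 3 and 4, as well as the relations (5.6), are straightforward commutative identities; the following snippet (ours) checks them symbolically.

```python
# Our sketch: check the columns gamma_i^2*xbar*y and delta_i^2*z of Tables 3-4
# and the relations (5.6) inside the commutative Laurent-polynomial model.
# Index i-1 of each list holds gamma_i, respectively delta_i.
from sympy import symbols, simplify

x, y, z = symbols('x y z')
kx, ky, kz = (x + 1/x)/2, (y + 1/y)/2, (z + 1/z)/2
wx = (x - 1/x)/2

gam = [(1 + x)*(1 - 1/y), (1 - x)*(1 + 1/y), (1 + x)*(1 + 1/y), (1 - x)*(1 - 1/y)]
del_ = [1 - 1/z, (x - 1/x)*(1 + 1/z), 1 + 1/z, (x - 1/x)*(1 - 1/z)]

gam_norm = [4*(kx + 1)*(ky - 1), 4*(kx - 1)*(ky + 1),
            4*(kx + 1)*(ky + 1), 4*(kx - 1)*(ky - 1)]
del_norm = [2*(kz - 1), 8*wx**2*(kz + 1), 2*(kz + 1), 8*wx**2*(kz - 1)]

assert all(simplify(gam[i]**2*y/x - gam_norm[i]) == 0 for i in range(4))
assert all(simplify(del_[i]**2*z - del_norm[i]) == 0 for i in range(4))

# the relations (5.6)
assert simplify(gam[1]*del_[1]/(gam[2]*del_[2]) + 2*(kx - 1)) == 0
assert simplify(gam[3]*del_[3]/(gam[0]*del_[0]) + 2*(kx - 1)) == 0
assert simplify(gam[0]*del_[1]/(gam[3]*del_[2]) + 2*(kx + 1)) == 0
assert simplify(gam[2]*del_[3]/(gam[1]*del_[0]) + 2*(kx + 1)) == 0
print("Tables 3-4 norm columns and relations (5.6) verified")
```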
From now on, we write
\[c=c_{1}+c_{2}+c_{3}+c_{4},\quad d=d_{1}+d_{2}+d_{3}+d_{4},\]
where, for all \(1\leq i\leq 4\), we have \(c_{i}=C_{i}\gamma_{i}\), \(d_{i}=D_{i}\delta_{i}\) for some \(C_{i},D_{i}\in\mathcal{Z}\). We start with a technical lemma.
**Lemma 5.22**.: _We have \(c\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) if and only if each \(c_{i}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\). Similarly, \(d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) if and only if each \(d_{i}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\)._
Proof.: We will deal with the coefficient \(c\); the case of \(d\) is completely analogous. After conjugation by \(s,t,u\), we get
\[\begin{cases}c=c_{1}+c_{2}+c_{3}+c_{4},\\ c^{s}=y(-c_{1}+c_{2}+c_{3}-c_{4}),\\ c^{t}=\bar{x}(c_{1}-c_{2}+c_{3}-c_{4}),\\ c^{u}=\bar{x}y(-c_{1}-c_{2}+c_{3}+c_{4}).\end{cases}\]
Therefore we get the formulas
\[\begin{split} c_{1}&=\frac{c-\bar{y}c^{s}+xc^{t}-x\bar{y}c^ {u}}{4},\quad c_{2}=\frac{c+\bar{y}c^{s}-xc^{t}-x\bar{y}c^{u}}{4},\\ c_{3}&=\frac{c+\bar{y}c^{s}+xc^{t}+x\bar{y}c^{u}}{4}, \quad c_{4}=\frac{c-\bar{y}c^{s}-xc^{t}+x\bar{y}c^{u}}{4}.\end{split} \tag{5.7}\]
From here the lemma easily follows, since \(c\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) if and only if \(c^{s},c^{t},c^{u}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\).
The next proposition gives necessary and sufficient conditions for \(T\) as in (5.5) so that \(T^{2}\in\mathcal{Z}\).
**Proposition 5.23**.: _Let \(T\) be as in (5.5). Then the following conditions are equivalent._
1. \(T^{2}\in\mathcal{Z}\)_._
2. _We have_ (5.8) \[cd^{t}\bar{x}\bar{z}+c^{u}d\bar{y}=0.\]
3. _The equation_ \[(\kappa-1)C_{2}C_{4}-(\kappa+1)C_{1}C_{3}=0\] _is satisfied and (at least) one of the following is true:_
\begin{table}
\begin{tabular}{|l||c|c|c|c|c|} \cline{2-6} \multicolumn{1}{c|}{} & \(s\) & \(t\) & \(u\) & \(\delta_{i}^{2}z\) & \((\delta_{i}u)^{*}\) \\ \hline \hline \(\delta_{1}:=(1-\bar{z})\) & \(-z\) & \(-z\) & \(1\) & \(2(\kappa_{z}-1)\) & \(-\delta_{1}u\) \\ \hline \(\delta_{2}:=(x-\bar{x})(1+\bar{z})\) & \(z\) & \(-z\) & \(-1\) & \(8\omega_{x}^{2}(\kappa_{z}+1)\) & \(\delta_{2}u\) \\ \hline \(\delta_{3}:=(1+\bar{z})\) & \(z\) & \(z\) & \(1\) & \(2(\kappa_{z}+1)\) & \(\delta_{3}u\) \\ \hline \(\delta_{4}:=(x-\bar{x})(1-\bar{z})\) & \(-z\) & \(z\) & \(-1\) & \(8\omega_{x}^{2}(\kappa_{z}-1)\) & \(-\delta_{4}u\) \\ \hline \end{tabular}
\end{table}
Table 4. Second basis: \(\delta\)-elements.
\begin{table}
\begin{tabular}{|l||c|c|c|c|c|} \cline{2-6} \multicolumn{1}{c|}{} & \(s\) & \(t\) & \(u\) & \(\gamma_{i}^{2}\bar{x}y\) & \((\gamma_{i}t)^{*}\) \\ \hline \hline \(\gamma_{1}:=(1+x)(1-\bar{y})\) & \(-y\) & \(\bar{x}\) & \(-\bar{x}y\) & \(4(\kappa_{x}+1)(\kappa_{y}-1)\) & \(-\gamma_{1}t\) \\ \hline \(\gamma_{2}:=(1-x)(1+\bar{y})\) & \(y\) & \(-\bar{x}\) & \(-\bar{x}y\) & \(4(\kappa_{x}-1)(\kappa_{y}+1)\) & \(\gamma_{2}t\) \\ \hline \(\gamma_{3}:=(1+x)(1+\bar{y})\) & \(y\) & \(\bar{x}\) & \(\bar{x}y\) & \(4(\kappa_{x}+1)(\kappa_{y}+1)\) & \(\gamma_{3}t\) \\ \hline \(\gamma_{4}:=(1-x)(1-\bar{y})\) & \(-y\) & \(-\bar{x}\) & \(\bar{x}y\) & \(4(\kappa_{x}-1)(\kappa_{y}-1)\) & \(-\gamma_{4}t\) \\ \hline \end{tabular}
\end{table}
Table 3. First basis: \(\gamma\)-elements.
1. _In case either_ \(C_{1}\neq 0\) _or_ \(C_{4}\neq 0\)_, there exist_ \(\alpha,\beta\in\mathcal{Z}\) _such that_ \[D_{1}=-2\beta(\kappa-1)C_{4},\qquad\qquad D_{4}=\beta C_{1},\qquad\qquad D_{2}= \alpha C_{4},\qquad\qquad D_{3}=-2\alpha(\kappa+1)C_{1};\]
2. _In case either_ \(C_{2}\neq 0\) _or_ \(C_{3}\neq 0\)_, there exist_ \(\alpha,\beta\in\mathcal{Z}\) _such that_ \[D_{1}=-2\beta(\kappa+1)C_{3},\qquad\qquad D_{4}=\beta C_{2},\qquad\qquad D_{2} =\alpha C_{3},\qquad\qquad D_{3}=-2\alpha(\kappa-1)C_{2}.\]
_If any of these conditions hold, then_
\[T^{2}=a^{2}\omega^{2}+(c_{1}^{2}-c_{2}^{2}+c_{3}^{2}-c_{4}^{2})\bar{x}y+(d_{1} ^{2}-d_{2}^{2}+d_{3}^{2}-d_{4}^{2})z\]
_and furthermore all nine summands are in \(\mathcal{Z}\)._
Proof.: In this proof we will freely use the conjugation relations from Tables 3 and 4.
For the implication \((i)\Rightarrow(ii)\), we simply compute
\[T^{2}=(a\omega+ct+du)^{2}=(a^{2}\omega^{2}+cc^{t}y+dd^{u}z)+(cd^{t}\bar{x}\bar{ z}+c^{u}d\bar{y})s.\]
If we require \(T^{2}\in\mathcal{Z}\), we necessarily need
\[cd^{t}\bar{x}\bar{z}+c^{u}d\bar{y}=0,\]
which is the statement in \((ii)\).
Now we show the equivalence \((ii)\Leftrightarrow(iii)\). We have
\[c^{u} =\bar{x}y(-c_{1}-c_{2}+c_{3}+c_{4}),\] \[d^{t} =z(-d_{1}-d_{2}+d_{3}+d_{4}),\]
thus equation (5.8) becomes
\[0 =cd^{t}\bar{x}y\bar{z}+c^{u}d\] \[=(c_{1}+c_{2}+c_{3}+c_{4})(-d_{1}-d_{2}+d_{3}+d_{4})\bar{x}y+(-c_ {1}-c_{2}+c_{3}+c_{4})(d_{1}+d_{2}+d_{3}+d_{4})\bar{x}y\] \[=2\bar{x}y[(c_{3}+c_{4})(d_{3}+d_{4})-(c_{1}+c_{2})(d_{1}+d_{2})].\]
This is equivalent to the equation
\[(c_{1}+c_{2})(d_{1}+d_{2})=(c_{3}+c_{4})(d_{3}+d_{4}).\]
By conjugating this last equation with \(s\), \(t\) and \(u\), we get the following system of non-linear equations:
\[\begin{cases}(c_{1}+c_{2})(d_{1}+d_{2})=(c_{3}+c_{4})(d_{3}+d_{4}),\\ (c_{1}-c_{2})(d_{1}-d_{2})=(c_{3}-c_{4})(d_{3}-d_{4}),\\ (-c_{1}+c_{2})(d_{1}+d_{2})=(c_{3}-c_{4})(d_{3}+d_{4}),\\ (c_{1}+c_{2})(-d_{1}+d_{2})=(c_{3}+c_{4})(d_{3}-d_{4}).\end{cases}\]
This system is equivalent to the following system of equations:
\[\begin{cases}c_{1}d_{1}-c_{4}d_{4}=0,\\ c_{1}d_{2}-c_{4}d_{3}=0,\\ c_{2}d_{1}-c_{3}d_{4}=0,\\ c_{2}d_{2}-c_{3}d_{3}=0.\end{cases}\]
Using the definition of the \(c_{i}\)'s and the \(d_{i}\)'s, and the relations (5.6), those equations are equivalent to the following equations:
\[2(\kappa-1)C_{2}D_{2}+C_{3}D_{3} =0 C_{1}D_{1}+2(\kappa-1)C_{4}D_{4} =0\] \[2(\kappa+1)C_{1}D_{2}+C_{4}D_{3} =0 C_{2}D_{1}+2(\kappa+1)C_{3}D_{4} =0\]
Note that, given \(C_{1},C_{2},C_{3},C_{4}\), these are in fact two systems of equations, the first system to determine \(D_{2},D_{3}\) and the other system to determine \(D_{1},D_{4}\). Thus the conclusion is that equation (5.8) is equivalent to the condition that the determinants of the above systems vanish, i.e.
\[(\kappa-1)C_{2}C_{4}-(\kappa+1)C_{1}C_{3}=0,\]
together with the statements (I) and (II) of the proposition, depending on the non-vanishing of the \(C_{i}\)'s.
Finally we show \((iii)\Rightarrow(i)\). We have already shown that \((iii)\) implies the vanishing of the second term in
\[T^{2}=(a\omega+ct+du)^{2}=(a^{2}\omega^{2}+cc^{t}y+dd^{u}z)+(cd^{t}\bar{x} \bar{z}+c^{u}d\bar{y})s.\]
Thus it is enough to show that \((iii)\) also implies that the term \(cc^{t}y+dd^{u}z\) is central. We have
\[c^{t} =\bar{x}(c_{1}-c_{2}+c_{3}-c_{4}),\] \[d^{u} =d_{1}-d_{2}+d_{3}-d_{4},\]
therefore
\[cc^{t}y+dd^{u}z =(c_{1}+c_{2}+c_{3}+c_{4})(c_{1}-c_{2}+c_{3}-c_{4})\bar{x}y+(d_{1}+d_{2}+d_{3}+d_{4})(d_{1}-d_{2}+d_{3}-d_{4})z\] \[=\bar{x}y(c_{1}^{2}-c_{2}^{2}+c_{3}^{2}-c_{4}^{2})+z(d_{1}^{2}-d_{2}^{2}+d_{3}^{2}-d_{4}^{2})+2\bar{x}y(c_{1}c_{3}-c_{2}c_{4})+2z(d_{1}d_{3}-d_{2}d_{4}).\]
Let us now argue that \(c_{1}c_{3}-c_{2}c_{4}=d_{1}d_{3}-d_{2}d_{4}=0\). We have, on one hand,
\[c_{1}c_{3}-c_{2}c_{4} =C_{1}C_{3}\gamma_{1}\gamma_{3}-C_{2}C_{4}\gamma_{2}\gamma_{4}\] \[=(1-\bar{y}^{2})[(1+x^{2}+2x)C_{1}C_{3}-(1+x^{2}-2x)C_{2}C_{4}]\] \[=2x(1-\bar{y}^{2})[(\kappa+1)C_{1}C_{3}-(\kappa-1)C_{2}C_{4}]=0.\]
On the other hand,
\[d_{1}d_{3}-d_{2}d_{4} =D_{1}D_{3}\delta_{1}\delta_{3}-D_{2}D_{4}\delta_{2}\delta_{4}\] \[=(1-\bar{z}^{2})(D_{1}D_{3}-4\omega^{2}D_{2}D_{4}).\]
In case (I), this equals
\[4(1-\bar{z}^{2})\alpha\beta C_{1}C_{4}[(\kappa+1)(\kappa-1)-\omega^{2}]=0,\]
and in case (II), we get
\[4(1-\bar{z}^{2})\alpha\beta C_{2}C_{3}[(\kappa+1)(\kappa-1)-\omega^{2}]=0.\]
We thus obtain the expression
\[T^{2}=a^{2}\omega^{2}+(c_{1}^{2}-c_{2}^{2}+c_{3}^{2}-c_{4}^{2})\bar{x}y+(d_{1 }^{2}-d_{2}^{2}+d_{3}^{2}-d_{4}^{2})z.\]
The fact that all nine summands are in \(\mathcal{Z}\), i.e. that each \(\gamma_{i}^{2}\bar{x}y,\delta_{i}^{2}z\in\mathcal{Z}\), follows from looking at the second-to-last columns of Tables 3 and 4. This proves the implication \((iii)\Rightarrow(i)\), and the proposition.
We are now ready to present our main theorems.
**Theorem 5.24**.: _There exists a non-trivial unit \(U\in k[G]\) of type \(2\) if and only if the system of equations presented below have a solution_
\[(A,C_{1},C_{2},C_{3},C_{4},D_{1},D_{2},D_{3},D_{4},\alpha,\beta)\in\mathcal{Z }^{11}\]
_different from \((\pm 1,0,0,0,0,0,0,0,0,0,0)\), such that \(A\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) and \(C_{i}\gamma_{i},D_{i}\delta_{i}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\)._
\[\diamond\ (A^{2}-1)\omega_{x}^{2}+C_{1}^{2}(\kappa_{x}+1)(\kappa_{y}-1)-C_{2}^{2}(\kappa_{x}-1)(\kappa_{y}+1)+C_{3}^{2}(\kappa_{x}+1)(\kappa_{y}+1)-C_{4}^{2}(\kappa_{x}-1)(\kappa_{y}-1)\] \[\qquad+2D_{1}^{2}(\kappa_{z}-1)-2D_{2}^{2}\omega_{x}^{2}(\kappa_{z}+1)+2D_{3}^{2}(\kappa_{z}+1)-2D_{4}^{2}\omega_{x}^{2}(\kappa_{z}-1)=0,\]
\[\diamond\ (\kappa_{x}-1)C_{2}C_{4}-(\kappa_{x}+1)C_{1}C_{3}=0,\]
\[\diamond\ \text{In case $C_{1}\neq 0$ or $C_{4}\neq 0$},\begin{cases}D_{1}=\beta(1- \kappa_{x})C_{4},\\ D_{2}=\alpha C_{4},\\ D_{3}=-\alpha(1+\kappa_{x})C_{1},\\ D_{4}=\beta C_{1}.\end{cases}\quad\text{In case $C_{2}\neq 0$ or $C_{3}\neq 0$}, \begin{cases}D_{1}=-\beta(1+\kappa_{x})C_{3},\\ D_{2}=\alpha C_{3},\\ D_{3}=\alpha(1-\kappa_{x})C_{2},\\ D_{4}=\beta C_{2}.\end{cases}\]
**Remarks 5.25**.:
1. We have preferred to write the equations in terms of the old notation \(\omega_{x},\kappa_{x}\) as it seems to be the most convenient form to reflect the explicit appearance of those terms in comparison with the other elements \(\kappa_{y},\kappa_{z}\). Nevertheless, we will continue to use the notation \(\omega,\kappa\) for the proof of the theorem.
2. As stated, the criterion in Theorem 5.24 includes \(6\) equations and \(11\) variables. However, the substitutions given by the last \(4\) equations reduce it to a system of \(2\) equations with \(7\) variables \((A,C_{1},C_{2},C_{3},C_{4},\alpha,\beta)\). The second equation can be used to reduce further to a single equation with \(6\) variables. We chose to state it in an "extended" form as it seems that it is the form most amenable to further analysis.
3. One can also use the conclusion of Corollary 5.20 to prove Theorem 5.24.
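To make the reduction mentioned in Remark 5.25 (2) concrete, here is a small symbolic sketch (ours, not part of the paper). It treats \(\kappa_{x},\kappa_{y},\kappa_{z}\) and the unknowns as independent commuting indeterminates and writes \(\omega_{x}^{2}=\kappa_{x}^{2}-1\); this only models the subfield \(\mathcal{K}\), not the integrality constraints, and it merely performs the case-(I) substitution in the first equation of Theorem 5.24.

```python
# Our sketch of how the last four substitutions of Theorem 5.24 collapse the
# system in the case C_1 != 0 or C_4 != 0, leaving 2 equations in 7 unknowns.
from sympy import symbols, expand

Kx, Ky, Kz, A, C1, C2, C3, C4, al, be = symbols('kx ky kz A C1 C2 C3 C4 alpha beta')
wx2 = Kx**2 - 1                      # omega_x**2 = kappa_x**2 - 1

# case (I) substitutions from the last bullet of Theorem 5.24
D1, D2, D3, D4 = be*(1 - Kx)*C4, al*C4, -al*(1 + Kx)*C1, be*C1

eq1 = ((A**2 - 1)*wx2
       + C1**2*(Kx + 1)*(Ky - 1) - C2**2*(Kx - 1)*(Ky + 1)
       + C3**2*(Kx + 1)*(Ky + 1) - C4**2*(Kx - 1)*(Ky - 1)
       + 2*D1**2*(Kz - 1) - 2*D2**2*wx2*(Kz + 1)
       + 2*D3**2*(Kz + 1) - 2*D4**2*wx2*(Kz - 1))
eq2 = (Kx - 1)*C2*C4 - (Kx + 1)*C1*C3

print(expand(eq1))                   # remaining equation in A, C_i, alpha, beta
print(expand(eq2))
```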
Proof of Theorem 5.24.: By Corollary 5.19, the existence of a non-trivial unit \(U\in k[G]\) of type \(2\) is equivalent to the existence of a non-trivial \(\omega\)-conjugate \(T\in k[G]\) with \(\omega\)-degree \((2,2)\) of the form
\[T=A\omega+ct+du, \tag{5.9}\]
where \(A\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) and \(c,d\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\). We have already noted that non-triviality of \(T\) implies that both \(c,d\neq 0\).
So first suppose that such \(T\) exists. In particular \(T^{2}=\omega^{2}\in\mathcal{Z}\), and so Proposition 5.23 tells us that all of the equations of the statement of the theorem, excluding the first one, follow. But the first one also follows since
\[\omega^{2}=T^{2} =A^{2}\omega^{2}+(c_{1}^{2}-c_{2}^{2}+c_{3}^{2}-c_{4}^{2})\bar{x} y+(d_{1}^{2}-d_{2}^{2}+d_{3}^{2}-d_{4}^{2})z\] \[=A^{2}\omega^{2}+C_{1}^{2}\gamma_{1}^{2}\bar{x}y-C_{2}^{2}\gamma_ {2}^{2}\bar{x}y+C_{3}^{2}\gamma_{3}^{2}\bar{x}y-C_{4}^{2}\gamma_{4}^{2}\bar{x}y\] \[\qquad+D_{1}^{2}\delta_{1}^{2}z-D_{2}^{2}\delta_{2}^{2}z+D_{3}^{2 }\delta_{3}^{2}z-D_{4}^{2}\delta_{4}^{2}z\] \[=A^{2}\omega^{2}+4C_{1}^{2}(\kappa+1)(\kappa_{y}-1)-4C_{2}^{2}( \kappa-1)(\kappa_{y}+1)+4C_{3}^{2}(\kappa+1)(\kappa_{y}+1)-4C_{4}^{2}(\kappa- 1)(\kappa_{y}-1)\] \[\qquad+2D_{1}^{2}(\kappa_{z}-1)-8D_{2}^{2}\omega^{2}(\kappa_{z}+ 1)+2D_{3}^{2}(\kappa_{z}+1)-8D_{4}^{2}\omega^{2}(\kappa_{z}-1),\]
which is the first equation after a change of variables \(2C_{i}\to C_{i}\) and \(2D_{2}\to D_{2},2D_{4}\to D_{4}\). Note that the remaining equations of the theorem have been also changed accordingly under these changes of variables. Lastly, all of the variables \(A,C_{i},D_{i},\alpha,\beta\) belong to \(\mathcal{Z}\), and moreover \(A\in k[\kappa_{x},\kappa_{y},\kappa_{z},\xi]\) and each \(c_{i}=C_{i}\gamma_{i},d_{i}=D_{i}\delta_{i}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\) by Lemma 5.22. Finally, since both \(c,d\neq 0\), at least some \(c_{i},d_{j}\neq 0\), and hence the solution obtained is different from the solution \((\pm 1,0,0,0,0,0,0,0,0,0,0)\).
Conversely, if such a solution of the stated system of equations exists, then the element
\[T=A\omega+\Big{(}\frac{C_{1}}{2}\gamma_{1}+\frac{C_{2}}{2}\gamma_{2}+\frac{C_ {3}}{2}\gamma_{3}+\frac{C_{4}}{2}\gamma_{4}\Big{)}t+\Big{(}D_{1}\delta_{1}+ \frac{D_{2}}{2}\delta_{2}+D_{3}\delta_{3}+\frac{D_{4}}{2}\delta_{4}\Big{)}u\]
belongs to \(k[G]\), and a straightforward computation using the equations of the statement of the theorem shows that \(T^{2}=\omega^{2}\), hence central. Clearly the \(\omega\)-degree of \(T\) is \((2,2)\), and non-triviality of \(T\) follows from the non-triviality of the solution obtained, i.e. different from \((\pm 1,0,0,0,0,0,0,0,0,0,0)\).
We now present a variant of the previous theorem for anti-selfadjoint \(\omega\)-conjugates, in case the involution of \(k\) restricts to the identity on its prime subfield.
**Theorem 5.26**.: _Suppose that the involution of \(k\) restricts to the identity on its prime subfield. Then there exists a non-trivial unitary \(U\in k[G]\) of type \(2\) if and only if the system of equations from Theorem 5.24 has a solution_
\[(A,C_{1},C_{2}\xi,C_{3}\xi,C_{4},D_{1},D_{2}\xi,D_{3}\xi,D_{4},\alpha,\beta) \in\mathcal{K}^{9}\times\mathcal{Z}^{2}\]
_different from \((\pm 1,0,0,0,0,0,0,0,0,0,0)\), such that \(A\in k[\kappa_{x},\kappa_{y},\kappa_{z}]\) and \(C_{i}\gamma_{i},D_{i}\delta_{i}\in k[x^{\pm 1},y^{\pm 1},z^{\pm 1}]\)._
For its proof, we first need some observations, which we will gather in the following lemma. Recall that \(\mathcal{Z}=\mathcal{K}(\xi)\), which is an extension of degree \(2\) over \(\mathcal{K}=k(\kappa_{x},\kappa_{y},\kappa_{z})\).
**Lemma 5.27**.: _Suppose that the involution of \(k\) is the identity. Then \(\mathcal{Z}^{*}=\mathcal{Z}\), and moreover \(z^{*}=z\) for any \(z\in\mathcal{K}\). Also, the following hold true._
1. _The element_ \(z\) _belongs to_ \(\mathcal{K}\) _if and only if_ \(z^{*}=z\)_._
2. _The element_ \(z\xi\) _belongs to_ \(\mathcal{K}\) _if and only if_ \(z^{*}=-z\)_._
Proof.: The statement \(\mathcal{Z}^{*}=\mathcal{Z}\) is clear. Note that \(z^{*}=z\) for any \(z\in\mathcal{K}=k(\kappa_{x},\kappa_{y},\kappa_{z})\) since \(\kappa_{x}^{*}=\kappa_{x},\kappa_{y}^{*}=\kappa_{y}\) and \(\kappa_{z}^{*}=\kappa_{z}\).
Now given \(z\in\mathcal{Z}\), write it as \(z=z_{1}+\xi z_{2}\) for some \(z_{1},z_{2}\in\mathcal{K}\). Then \(z^{*}=z_{1}-\xi z_{2}\), and so the rest of the lemma follows straightforwardly.
Finally we are in a position to prove Theorem 5.26.
Proof of Theorem 5.26.: This is a particular case of Theorem 5.24, and the only thing which requires further study is the condition \(T^{*}=-T\). We write
\[T=A\omega+C_{1}\gamma_{1}t+C_{2}\gamma_{2}t+C_{3}\gamma_{3}t+C_{4}\gamma_{4}t+D_ {1}\delta_{1}u+D_{2}\delta_{2}u+D_{3}\delta_{3}u+D_{4}\delta_{4}u,\]
and we use the last columns of Tables (3) and (4):
\[T^{*} =A^{*}\omega^{*}+C_{1}^{*}(\gamma_{1}t)^{*}+C_{2}^{*}(\gamma_{2}t )^{*}+C_{3}^{*}(\gamma_{3}t)^{*}+C_{4}^{*}(\gamma_{4}t)^{*}+D_{1}^{*}(\delta_{1 }u)^{*}+D_{2}^{*}(\delta_{2}u)^{*}+D_{3}^{*}(\delta_{3}u)^{*}+D_{4}^{*}(\delta_{ 4}u)^{*}\] \[=-A^{*}\omega-C_{1}^{*}\gamma_{1}t+C_{2}^{*}\gamma_{2}t+C_{3}^{*} \gamma_{3}t-C_{4}^{*}\gamma_{4}t-D_{1}^{*}\delta_{1}u+D_{2}^{*}\delta_{2}u+D_{3}^ {*}\delta_{3}u-D_{4}^{*}\delta_{4}u\]
Thus the condition \(T^{*}=-T\) is equivalent, by Lemma 5.27, to the conditions
\[A,C_{1},C_{2}\xi,C_{3}\xi,C_{4},D_{1},D_{2}\xi,D_{3}\xi,D_{4}\in\mathcal{K}.\]
These are precisely the conditions in the statement of Theorem 5.26 that differ from the statement of Theorem 5.24. The proof is complete.
## Acknowledgments
The authors would like to thank Dawid Kielak for his helpful comments on the first part of the paper.
|
2306.16050 | Evaluating Similitude and Robustness of Deep Image Denoising Models via
Adversarial Attack | Deep neural networks (DNNs) have shown superior performance comparing to
traditional image denoising algorithms. However, DNNs are inevitably vulnerable
while facing adversarial attacks. In this paper, we propose an adversarial
attack method named denoising-PGD which can successfully attack all the current
deep denoising models while keep the noise distribution almost unchanged. We
surprisingly find that the current mainstream non-blind denoising models
(DnCNN, FFDNet, ECNDNet, BRDNet), blind denoising models (DnCNN-B, Noise2Noise,
RDDCNN-B, FAN), plug-and-play (DPIR, CurvPnP) and unfolding denoising models
(DeamNet) almost share the same adversarial sample set on both grayscale and
color images, respectively. Shared adversarial sample set indicates that all
these models are similar in term of local behaviors at the neighborhood of all
the test samples. Thus, we further propose an indicator to measure the local
similarity of models, called robustness similitude. Non-blind denoising models
are found to have high robustness similitude across each other, while
hybrid-driven models are also found to have high robustness similitude with
pure data-driven non-blind denoising models. According to our robustness
assessment, data-driven non-blind denoising models are the most robust. We use
adversarial training to complement the vulnerability to adversarial attacks.
Moreover, the model-driven image denoising BM3D shows resistance on adversarial
attacks. | Jie Ning, Jiebao Sun, Yao Li, Zhichang Guo, Wangmeng Zuo | 2023-06-28T09:30:59Z | http://arxiv.org/abs/2306.16050v2 | # Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack
###### Abstract
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms. However, DNNs are inevitably vulnerable when facing adversarial attacks. In this paper, we propose an adversarial attack method named denoising-PGD which can successfully attack all the current deep denoising models while keeping the noise distribution almost unchanged. We surprisingly find that the current mainstream non-blind denoising models (DnCNN, FFDNet, ECNDNet, BRDNet), blind denoising models (DnCNN-B, Noise2Noise, RDDCNN-B, FAN), plug-and-play (DPIR, CurvPnP) and unfolding denoising models (DeamNet) almost share the same adversarial sample set on both grayscale and color images, respectively. A shared adversarial sample set indicates that all these models are similar in terms of local behaviors in the neighborhood of all the test samples. Thus, we further propose an indicator to measure the local similarity of models, called robustness similitude. Non-blind denoising models are found to have high robustness similitude with each other, while hybrid-driven models are also found to have high robustness similitude with pure data-driven non-blind denoising models. According to our robustness assessment, data-driven non-blind denoising models are the most robust. We use adversarial training to remedy the vulnerability to adversarial attacks. Moreover, the model-driven image denoising method BM3D shows resistance to adversarial attacks.
image denoising, adversarial attack, denoising-PGD, robustness similitude.
## I Introduction
Images are prone to contamination by various noises during formation and transmission, which degrades the visual quality of the image and complicates downstream tasks. Image noise degradation in optical imaging is usually described by:
\[f=u+n,\quad u\in\Omega, \tag{1}\]
where \(\Omega\) is the image domain, \(f\) is the observed image, \(u\) is the clean image, and \(n\) is zero-mean noise usually assumed to follow a Gaussian distribution. The key task of image denoising is to recover \(u\) from \(f\). In particular, when noise information such as the noise level and distribution is unknown, the task is called blind denoising. When the noise is known, it is called non-blind denoising.
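For concreteness, the degradation model of Eq. (1) can be simulated with a few lines of NumPy. This is only an illustrative sketch: the function name and the noise level are our own choices, not part of the original formulation. In the non-blind setting the value of `sigma` is assumed to be known by the denoiser; in the blind setting it is not.

```python
import numpy as np

def add_gaussian_noise(u, sigma=25.0, seed=0):
    """Synthesize a noisy observation f = u + n following Eq. (1).

    u     : clean image as a float array, e.g. with values in [0, 255]
    sigma : noise standard deviation (25 is a common non-blind setting)
    """
    rng = np.random.default_rng(seed)
    n = rng.normal(loc=0.0, scale=sigma, size=u.shape)  # zero-mean Gaussian noise
    return u + n, n
```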
Image denoising methods can be broadly classified into three categories: model-driven methods, data-driven methods, and model-data hybrid-driven methods. Model-driven methods are constrained by physical meaning and interpretable, such as TV [1], PM [2], BM3D [3], so features that disobey physical properties will not be generated. These model-driven methods work well in low-level noise scenarios. However, they generally suffer from manual parameter tuning and time-consuming optimization. Furthermore, incomplete manual priors are insufficient to express complex image structures, which limits their potential. In the past decade, data-driven methods based on deep neural networks have shown excellent performance, especially in the field of image denoising. According to the noise information demand, data-driven methods are divided into non-blind denoising methods (e.g. DnCNN [4], FFDNet [5], ECNDNet [6], BRDNet [7]) and blind denoising methods (e.g. DnCNN-B [4], Noise2Noise [8], RDDCNN-B [9], FAN [10]). Data-driven methods rely on a large amount of input data to train the model in order to obtain optimal weight parameters that minimize the loss function. Powerful abilities to smooth out the noise and recover features are promised by data-driven methods, but the un-interpretability brought by their black-box structure is worrying. Model-data hybrid-driven methods are proposed to eliminate the shortcomings of artificially designed priors and the limitation of lacking interpretability. There are two known types of hybrid-driven methods: plug-and-play and unfolding methods [11, 12, 13], such as DPIR [14], CurvPnP [15], and DeamNet [16]. However, the performance of current hybrid-driven methods is still not as good as that of data-driven methods.
Model-driven image denoising methods are robust and flexible in terms of denoising range, while data-driven image denoising methods are less robust and more sensitive to noise variations. From the perspective of adversarial attacks, deep neural networks are susceptible to adversarial samples [17, 18]. By adding a carefully selected tiny perturbation to the input image, an attacker can make the system produce a wrong judgment, which can lead to serious consequences. This carefully designed method of generating small and imperceptible adversarial perturbations is known as an adversarial attack [19, 20, 21]. From a manifold learning perspective [22], high-dimensional data in nature are concentrated near nonlinear low-dimensional manifolds. Adversarial attacks can be considered as a malicious process that drags benign examples out of the manifolds where they are concentrated.
An interesting property of adversarial samples is their transferability, i.e., adversarial samples designed for one model
can successfully fool another model. As to the reason for the existence of adversarial sample transferability, there are many opinions, but no conclusion has been reached yet. The prevailing claim is that transferability arises because the decision boundaries around the samples are correlated between different models [23, 24, 25]. This also suggests that adversarial samples are not randomly scattered, but occupy traceable subspaces in a high-dimensional space. Some literature [26, 27] attributes transferability to architectural similarity, where the transferability of adversarial samples between two models increases with architectural similarity. Other studies [18, 28] believe that transferability is caused by the high linear correlation between features extracted by different deep neural networks: for the same set of samples, features extracted using one model can be related to features extracted using another model by a simple affine transformation.
Existing adversarial attacks [29, 30] and defense techniques [31, 32, 33] mostly target classification tasks, while the few involving image denoising aim to pave the way for subsequent interference with other classification tasks [34]. Genzel et al. [35] argue that regression tasks, which map inputs to high-dimensional signal manifolds, are resilient to adversarial perturbations even under vanilla end-to-end network architectures. Furthermore, a deep image denoiser is an uninterpretable black-box model, so its stability, robustness, and the effective range of noise it can handle are unknown and rarely discussed. The same is true for model-data hybrid-driven approaches, which retain a portion of the network structure that brings uninterpretability, and whose stability, robustness, and removable noise range remain to be verified.
Based on the above reasons, we propose a PGD-based image denoising adversarial attack method, namely denoising-PGD, to disturb the regression task of deep image denoising. By adding adversarial perturbation to the input image, attacks are covertly embedded into the noise, i.e., the change in the noise distribution is imperceptible. The robustness of denoising models and the denoising mechanism of neural networks are explored through the resulting performance variation. Moreover, since PGD-based adversarial attacks have low transferability among different model architectures, if adversarial samples generated by denoising-PGD exhibit strong transferability, a high similitude between the models is indicated. Our contribution consists of four aspects:
(1) We propose image denoising adversarial attack, a task for image denoising, with the aim of verifying the robustness of deep image denoising methods. The proposed denoising-PGD adversarial attack method has a significant adversarial effect and exhibits good generalization.
(2) Adversarial samples generated by denoising-PGD show excellent transferability among various image denoising models, including non-blind denoising, blind denoising, plug-and-play, and unfolding models. The shared adversarial sample space is further explored experimentally, indicating that current deep image denoising methods have high similitude.
(3) To improve the robustness of deep image denoising methods, adversarial training is conducted. It is found that adversarial training can significantly improve the visual quality in some patterns. To explore this problem, a conjecture that adversarial noise constitutes a continuous space is proposed.
(4) The robustness of the classical model-driven method and the deep image denoising method on adversarial attacks is compared and analyzed. Classical model-driven methods BM3D shows some resistance to adversarial attack.
This paper is organized as follows. Section II reviews typical methods of image denoising and the adversarial attack method PGD. Section III describes the denoising-PGD method, its transferability among various denoising models, the exploration of the adversarial space, and the definition of robustness similitude. The experimental results are presented in Section IV. Section V analyzes and discusses the experimental results and presents some possible conjectures. Finally, Section VI concludes the paper.
## II Related work
### _DnCNN and non-blind image denoising_
A milestone in data-driven models is DnCNN [4], which implicitly recovers clean images through a residual learning strategy that predicts the noise. The residual mapping is easier to optimize when the original mapping is close to an identity mapping. Owing to this characteristic, residual learning with batch normalization improves noise removal performance while stabilizing convergence. In terms of architecture design, DnCNN modifies the VGG convolutional filters and removes all pooling layers to increase the receptive field, utilizing more contextual information and making it suitable for image denoising. DnCNN greatly improves the image denoising metrics, far surpassing the classical model-driven method BM3D.
FFDNet [5] follows a similar framework but takes an adjustable noise level map as an additional input. The noise level map as an input allows the model parameters not to change with the noise level. Thus, FFDNet can be viewed as consisting of multiple denoisers for different noise levels, enabling control of the trade-off between denoising and detail retention. FFDNet performs better than DnCNN in high-noise cases and on spatially varying noise. ECNDNet [6] continues the idea of DnCNN to solve the internal covariate shift problem and accelerate network convergence. ECNDNet uses dilated convolution, i.e., a dilated filter with a dilation factor, to enlarge the receptive field and improve performance. ECNDNet is more robust and more effective than commonly used denoising methods. BRDNet [7] combines two networks and dilated convolution to increase the width of the network and the receptive field to obtain more features. Instead of BN operations, which are ineffective in small mini-batch settings, BRDNet uses BRN operations that approximate the distribution of the training data by a single sample rather than the entire mini-batch to prevent gradient problems and improve performance. BRDNet outperforms DnCNN and FFDNet in terms of image denoising metrics and is highly competitive.
### _Blind denoising_
Zhang et al. [4] introduced a DnCNN-based blind Gaussian denoising model, namely DnCNN-B. DnCNN-B gathers
noise images from a wide range of noise levels to train a single model, so it can perform denoising without knowing the noise level. DnCNN-B is no less impressive as a blind denoising method, with metric levels close to DnCNN, and it even outperforms DnCNN in the high-noise case. Noise2Noise [8] cleverly utilizes a statistical inference property of the l2-norm, namely that as long as the mean value of the input is constant, the estimate of the output remains constant, to achieve unsupervised blind denoising. Excellent denoising performance is obtained by applying independent and identically distributed Gaussian noise images as both input and target. It verifies that using noise images yields the same performance as using clean target images as long as the noise mean is zero. RDDCNN-B [9] introduces deformable learnable kernels and a stacked convolutional structure to extract more representative noise features based on the relationship of surrounding pixels. RDDCNN-B has a smaller number of parameters and outperforms DnCNN-B in both qualitative and quantitative analysis. As a blind model-data hybrid-driven method, FAN [10] combines frequency-domain analysis with an attention mechanism. Its wavelet transform creates sparser features in the frequency domain, making full use of spectral and spatial information. The goal of the neural network in FAN is to estimate the optimal wavelet coefficients of clean images using nonlinear features. In the case of different noise distributions, FAN has a significant advantage over DnCNN and FFDNet in blind image denoising.
### _Plug-and-play and unfolding_
These data-driven methods perform well, but often have undesired artificial effects and uninterpretability. As a result, model-data hybrid-driven methods emerged, such as plug-and-play and unfolding. The plug-and-play method is known for its flexibility and efficiency in handling various inverse problems. The main idea of this method is to expand the energy function into subproblems by using a variable splitting algorithm and then solving the subproblems associated with the prior using a pre-trained data-driven prior. In this approach, network training and model solving are separated. In contrast, unfolding integrates model-driven and data-driven approaches into an end-to-end network that performs network parameter optimization and model solving simultaneously. These model-data hybrid-driven methods have a complex mathematical foundation, which brings partial interpretability.
DPIR [14] trains a highly flexible and effective CNN denoiser and then inserts it as a module into a half-quadratic splitting-based iterative algorithm, providing a prior that more closely fits the image structure. DPIR significantly improves the effectiveness of model-driven methods due to the powerful implicit prior modeling of deep image denoising. In the absence of task-specific training, it is more flexible than data-driven methods such as DnCNN, while offering comparable performance. Li et al. [15] developed a plug-and-play blind denoising model, CurvPnP, which includes a noise estimation sub-network to provide the noise level to the denoising sub-network. CurvPnP uses a deep curvature denoiser in the denoising module, which introduces Gaussian curvature maps that contain rich fine structures and image details into the encoder-decoder architecture and a supervised attention module as a prior. CurvPnP adapts more flexibly to different image recovery tasks and outperforms state-of-the-art plug-and-play methods. DeamNet [16] is a novel end-to-end deep unfolding denoising network designed around an adaptive consistency prior (ACP) and DEAM modules. By introducing nonlinear filtering operators, reliability matrices and high-dimensional feature transformation functions into the consistency prior, ACP can better portray image information. Incorporating ACP into the maximum a posteriori framework and unfolding it yields the dual element-wise attention mechanism (DEAM) module, which extracts more features. DeamNet improves the interpretability of the network to a certain extent and outperforms BRDNet in terms of denoising performance.
_Summary and our finding:_ In the last decade, deep learning has excelled in the field of image denoising, and the metrics have reached a very high level. Some scholars even believe that the image denoising problem has been completely solved and no longer deserves much research effort. However, our study finds that all these excellent deep image denoising methods can be fooled, and even that all of them can be easily fooled by the same image.
### _PGD adversarial attack_
Goodfellow et al. [18] proposed an efficient one-step attack method based on the gradient of a deep neural network, called the Fast Gradient Sign Method (FGSM). FGSM uses the relationship between the noise map and the image pixels as the loss function. Each pixel is then perturbed by adding or subtracting a fixed value \(\varepsilon\) according to the sign of the gradient of the loss function. The basic iterative method (BIM) [36] replaces the one-step algorithm with multiple small steps, extending FGSM to an iterative algorithm. Another iterative version of FGSM is the projected gradient descent (PGD) method [37], which uses projected gradient descent on a negative loss function to generate adversarial samples and can be described as a saddle point optimization problem. Unlike BIM, PGD perturbs the original image randomly before starting the iteration.
This saddle point optimization problem consists of an internal maximization problem and an external minimization problem. The internal maximization tries to find an adversarial sample of a given data point \(x\) that maximizes the loss, and the external minimization aims to find the model parameters that minimize the adversarial loss given by the internal maximization part. When using a loss function based on noise maps and image pixels, it is difficult to find local maxima that are significantly better than PGD. The authors of PGD verified in [37] that if a network is trained to be robust to PGD adversarial samples, then it becomes robust to a wide range of other attacks as well. In the transferability study of adversarial samples, Lin et al. [38] mentioned that typical gradient-based iterative attacks greedily perturb the image in the direction of the gradient sign at each iteration, usually falling into poor local maxima, and therefore have weak transferability. For this reason, PGD methods, as an
iterative version of FGSM, are rarely mentioned in studies of adversarial attack transferability. In summary, PGD is a concise, effective, and representative adversarial attack method, so our study is based on it. In addition, since PGD is an adversarial attack method with low transferability in classification tasks, a PGD-style attack that exhibits strong transferability in the more complex and higher-dimensional regression setting indicates high similarity between models.
Recently, Yan et al. [39] proposed OBSATK, a two-step adversarial attack method for evaluating the robustness of deep denoising models, aiming to obtain a bounded perturbation that can decrease the quality of the denoised image. In the experiments, DnCNN-B was demonstrated to be vulnerable to adversarial attacks. Different from this work, we aim to explore the characteristics of deep denoising models and the correlation between models from the adversarial attack perspective, rather than finding stronger attack methods.
In this paper, we propose denoising-PGD, which uses an additive operation to superimpose adversarial noise on a noise image, making it difficult for a trained deep image denoising model to perform the denoising task properly. Since our adversarial noise distribution conforms to a Gaussian distribution, the composed noise should also approximately follow a Gaussian distribution. Theoretically, noise images with added adversarial noise still fall into the denoising range of Gaussian denoising methods and should not be difficult to denoise. We try to find the reason for the excellent adversarial sample transferability among various image denoising methods by providing a conjecture on the similitude of deep image denoising models.
## III Methodology
### _Motivation_
Deep image denoising is a regression task, which is quite different from the classification tasks where adversarial attacks are usually studied. Existing denoising networks vary widely, but they share the same training paradigm, which uses clean images as labels and clean images with added zero-mean Gaussian noise as input. Thus, we propose denoising-PGD, which follows the standard PGD attack but uses the negative l2-norm between the model output and the clean image as the loss function. The adversarial perturbation is applied to the noisy input image to maintain the noise level and distribution.
As seen in Fig. 1, the deep image denoising method shows an obvious adversarial effect on the adversarial sample. The adversarial noise is visually random and independent, without obvious structural information. The magnitude of the adversarial noise is small and imperceptible. However, the PSNR of the attacked image after model denoising drops by more than 4 dB and obvious artifacts appear. Thus, the deep image denoising method is not robust in the tiny neighborhood around the Gaussian noise. In addition, the PGD-based adversarial attack is supposed to have poor transferability among different model structures. If the adversarial samples can easily transfer across different deep image denoising models, it will indicate that current deep image denoising models are essentially similar. To this end, we explore the robustness of different models and the adversarial attack transferability among different models.
### _Image denoising adversarial attack_
Given a noise image \(x\), our goal is to generate an adversarial noise image \(x^{\prime}\) that fools the deep image denoising model into producing much lower denoising quality than it achieves on a Gaussian noise image with the same noise level and a similar distribution. We name this task image denoising adversarial attack, whose flowchart is shown in Fig. 2.
For image denoising adversarial attacks, the following mathematical definition is given: let \(D:R^{d}\to R^{d}\) be a denoiser, where \(d\) is the image dimension. A noise image \(x\in R^{d}\) is denoised as \(D(x)\), and its denoising quality is evaluated as \(E(D(x),y)\), where \(y\in R^{d}\) is the clean image. The minimal adversarial sample \(x^{\prime}\in R^{d}\) of \(x\) w.r.t. a distance function \(\gamma:R^{d}\to R_{+}\) is given as the solution \(x^{\prime}\in R^{d}\) of the optimization problem:
\[\begin{array}{l}\min\gamma\left(x^{\prime}-x\right)\\ \text{s.t. }E(D(x),y)-E\left(D\left(x^{\prime}\right),\text{y}\right)>M \end{array} \tag{2}\]
where \(E\) is an arbitrary monotonically increasing function that evaluates the similarity between the original image and the predicted image, such as PSNR, SSIM or MAE. \(M\) is a threshold value that measures a successful adversarial attack. In other words, \(x^{\prime}\) is the closest point to \(x\) with the distance function \(\gamma\) whose denoising performance is lower than \(x\).
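A minimal PyTorch-style sketch of the attack is given below. It assumes images scaled to \([0,1]\), ascends the l2-distance between the denoiser output and the clean image, and keeps the perturbation inside an \(\varepsilon\)-ball around the noisy input, with the hyperparameter values reported later (\(\varepsilon=0.012\), 5 steps). The function and variable names are ours, chosen only for illustration.

```python
import torch

def denoising_pgd(model, x_noisy, y_clean, eps=0.012, steps=5):
    """Sketch of denoising-PGD: maximize ||D(x') - y||_2 within an eps-ball of x."""
    alpha = eps / steps
    # Random start inside the eps-ball, as in the standard PGD attack.
    x_adv = (x_noisy + torch.empty_like(x_noisy).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = torch.norm(model(x_adv) - y_clean, p=2)   # negative denoising quality
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # gradient ascent step
            x_adv = x_noisy + (x_adv - x_noisy).clamp(-eps, eps)     # project onto eps-ball
            x_adv = x_adv.clamp(0, 1)                                # keep a valid image
    return x_adv.detach()
```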
Fig. 1: The performance of DnCNN decreases under adversarial attack.
Fig. 2: The flowchart of image denoising adversarial attack method. By adding small imperceptible adversarial perturbations to the noise image, the adversarial samples will fool the trained deep image denoising model and produce unsuccessful denoising results.
### _Image denoising adversarial attack transferability_
For the transferability of image denoising adversarial attacks, our study focuses on evaluating how the adversarial perturbations generated for the source denoising model \(D_{1}\) affect the denoising results of the target denoising model \(D_{2}\). We choose the denoising effect of the target model on Gaussian noise as the baseline, compare the denoising effect of the target model on the adversarial samples against it, and consider the attack successful if the effect decreases significantly. Therefore, we evaluate transferability only for adversarial samples that fool \(D_{1}\) and whose underlying Gaussian noise images are successfully denoised by both deep image denoising models \(D_{1}\) and \(D_{2}\):
\[x^{\prime}\sim O_{D_{1},D_{2}}=\left\{x\sim O\;\middle|\;\begin{array}{l}E\left(D_{1}(x),y\right)>E(x,y)\\ E\left(D_{2}(x),y\right)>E(x,y)\\ E\left(D_{1}(x),y\right)-E\left(D_{1}\left(x^{\prime}\right),y\right)>M\end{array}\right\} \tag{3}\]
where the adversarial sample \(x^{\prime}=adv(x,D_{1})\) is generated by an adversarial attack \(adv(\cdot)\) on a Gaussian noise image \(x\) in the original set \(O\). For these adversarial samples, we say that the adversarial sample is transferable when \(E\left(D_{2}(x),y\right)-E\left(D_{2}\left(x^{\prime}\right),y\right)>M\), and vice versa.
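The criterion of Eq. (3) and the transfer condition can be phrased as a small helper for a single test pair. This is only a sketch: \(E\) is any quality metric such as PSNR, and the threshold value assigned to \(M\) here is an assumption for illustration.

```python
def is_transferable(D1, D2, x, x_adv, y, E, M=0.5):
    """Check the transferability criterion for one sample (sketch of Eq. (3)).

    D1, D2 : source and target denoisers
    x      : Gaussian noise image; x_adv = adv(x, D1) is its adversarial version
    y      : clean image; E(pred, y) is a quality metric such as PSNR
    M      : threshold defining a "successful" attack (assumed value)
    """
    # x must be a sample both models denoise successfully ...
    valid = E(D1(x), y) > E(x, y) and E(D2(x), y) > E(x, y)
    # ... and x_adv must actually fool the source model D1.
    fools_source = E(D1(x), y) - E(D1(x_adv), y) > M
    if not (valid and fools_source):
        return None  # sample not eligible for the transferability test
    # Transferable if the quality drop also exceeds M on the target model D2.
    return E(D2(x), y) - E(D2(x_adv), y) > M
```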
As shown in Section IV.B, the adversarial samples generated by denoising-PGD have strong transferability across all the tested deep image denoising models. To further explore the latent reason, we take noise samples surrounding the Gaussian noise image. Specifically, we use denoising-PGD to generate an adversarial perturbation and denote the perturbation as \(v\). The Gaussian noise added to the clean image is denoted as \(n\) and the clean image is denoted as \(u\). The noise samples
\[s=\left(\frac{n}{\|n\|}\sin\theta+\frac{v-<v,n>n}{\|v-<v,n>n\|}\cos\theta \right)\|v\|+n+u \tag{4}\]
are collected on the circle centered at the Gaussian noise image and located in the plane spanned by \(n\) and \(v\), where \(\theta=\{k/N\cdot 2\pi,\ k=1,\ldots,N\}\), and \(<v,n>\) is the inner product of \(v\) and \(n\). Intuitively, we hope to observe the robustness of denoising models in a tiny sphere neighborhood of the Gaussian noise image. But as images are high-dimensional vectors, randomly collected samples may fail to capture the pivotal behavior. Thus, we focus on samples in a specific cross section of the sphere neighborhood, i.e. the circle in the \(n,v\) plane. Note that the direction of \(n\) is set as the 0-angle of the circle.
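The sampling of Eq. (4) amounts to rotating a vector of length \(\|v\|\) in the plane spanned by \(n\) and \(v\) around the Gaussian noise image. A NumPy sketch follows; we orthogonalize \(v\) against the unit vector \(n/\|n\|\), which we take to be the intended reading of the inner-product term, and the function name is ours.

```python
import numpy as np

def circle_samples(u, n, v, num=100):
    """Noise samples on the circle of Eq. (4) around the Gaussian noise image u + n.

    u : clean image, n : Gaussian noise, v : adversarial perturbation (same shape).
    Returns `num` samples and the corresponding angles; the n direction is the 0-angle.
    """
    shape = u.shape
    u_f, n_f, v_f = (a.astype(np.float64).ravel() for a in (u, n, v))
    n_hat = n_f / np.linalg.norm(n_f)
    v_perp = v_f - np.dot(v_f, n_hat) * n_hat      # component of v orthogonal to n
    v_hat = v_perp / np.linalg.norm(v_perp)
    radius = np.linalg.norm(v_f)

    thetas = 2.0 * np.pi * np.arange(1, num + 1) / num
    samples = [
        (u_f + n_f + radius * (np.sin(t) * n_hat + np.cos(t) * v_hat)).reshape(shape)
        for t in thetas
    ]
    return samples, thetas
```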
We generate noise samples on the classical non-blind deep denoising model DnCNN for the entire Set12 dataset with \(N=100\). As shown in Fig. 3 and Fig. 4, the adversarial effect only happens on noise samples collected from 30 degrees to 150 degrees on the circle centered at the Gaussian noise "cameraman". DnCNN presents a performance drop and artifacts only on samples collected in the corresponding region. Surprisingly, we find that for all the images in Set12, the regions of adversarial samples are almost the same. In other words, for all the test images in Set12, the adversarial effect happens on noise samples collected from 30 degrees to 150 degrees. This interesting phenomenon indicates that the robustness of DnCNN is actually image-content invariant, as the adversarial space remains the same for the entire test dataset.
In addition, we tested these generated noise samples on other deep image denoising models, such as the non-blind denoising model ECNDNet, the blind denoising models DnCNN-B and RDDCNN-B, the unfolding model DeamNet, and the plug-and-play model DPIR. The adversarial behaviors of these models are observed and recorded in Fig. 5. For all the tested models, the adversarial regions are concentrated in the angular range from 30 degrees to 150 degrees. This means that all the tested models regressed similar denoising functions, which leads to similar adversarial regions. It also explains why denoising-PGD has unexpectedly excellent transferability among different types of denoising models, as their adversarial spaces almost overlap with each other. On deeper observation, the non-blind denoising models (a) and (b) perform similarly, and the blind denoising models (c) and (d) perform similarly.
Fig. 4: Adversarial regions of the DnCNN model under denoising-PGD attack on the dataset Set12.
Fig. 3: Radar diagram of the distribution of the adversarial space and normal denoising space. The closer the green area is to the edge, the stronger the adversarial performance.
This demonstrates that the adversarial ranges are similar for the same type of deep image denoising network. Model (e) is the unfolding method and (f) is the plug-and-play method. Although both of them are model-data hybrid-driven methods, their adversarial regions are similar to those of pure data-driven methods. On closer inspection, DPIR has larger adversarial regions.
### _Denoising model robustness and robustness similitude_
We evaluate the denoising model robustness (DMR) as
\[DMR=\frac{1}{M}\sum_{i=1}^{M}\left[E(D(x_{i}^{\prime}))-E(D(x_{i}))\right] \tag{5}\]
where \(X=\{x_{1},\ldots,x_{M}\}\) is the evaluation dataset and \(x_{i}^{\prime}=adv(x_{i},D)\) is the adversarial sample. DMR calculates the average metric change caused by applying adversarial attack to the target model on a specified dataset.
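Computed over an evaluation set, the DMR of Eq. (5) is simply an average metric change under attack. A minimal sketch follows, with \(E\) evaluated against the clean reference as in Eq. (2); the function name and dataset interface are assumptions for illustration.

```python
def denoising_model_robustness(model, attack, dataset, E):
    """DMR of Eq. (5): average metric change under attack over a dataset.

    dataset : iterable of (noisy image x_i, clean image y_i) pairs
    attack  : routine producing the adversarial sample x_i' = adv(x_i, model)
    E       : quality metric, e.g. PSNR against the clean image
    """
    diffs = []
    for x, y in dataset:
        x_adv = attack(model, x, y)
        diffs.append(E(model(x_adv), y) - E(model(x), y))
    return sum(diffs) / len(diffs)   # negative values mean the attack degrades quality
```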
As demonstrated in the previous subsection, all the tested models regress a similar denoising function or at least similar in the local function regression around all the test data. To evaluate the similarity across models, or precisely the local similarity around all the test data, we propose an indicator called denoising model robustness similitude (DMRS):
\[DMRS=\frac{1}{M}\sum_{i=1}^{M}\text{IoU}(\text{Radar}(D_{1}(s_{i})),\text{Radar }(D_{2}(s_{i}))) \tag{6}\]
where IoU is the Intersection over Union, Radar is the radar diagram calculated on the \(n,v\)-plane cross section of the sample neighborhood, i.e.
\[\text{Radar}(D(s_{i}))=\text{enclosed area of }\left\{\frac{E(D(s_{ij}))}{ \max_{j}E(D(s_{ij}))}\right\} \tag{7}\]
where \(s_{ij}\) is the noise sample of \(x_{i}\) at \(\theta_{j}=j/N*2\pi\). Intuitively, DMRS is the averaged IoU between two different models \(D_{1}\) and \(D_{2}\) over the evaluation dataset. High DMRS indicates two models are essentially similar observed from the robustness perspective.
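A sketch of the DMRS computation of Eqs. (6)-(7) is given below. The enclosed area of a radar polygon with equally spaced angles is computed from consecutive radii, and the intersection and union polygons are approximated by the per-angle minimum and maximum radii; this approximation, as well as the function names, are assumptions of the sketch rather than the exact polygon intersection.

```python
import numpy as np

def radar_area(radii):
    """Area enclosed by a radar polygon whose vertices are equally spaced in angle."""
    r = np.asarray(radii, dtype=np.float64)
    dtheta = 2.0 * np.pi / len(r)
    return 0.5 * np.sum(r * np.roll(r, -1) * np.sin(dtheta))

def radar_iou(metrics_1, metrics_2):
    """Approximate IoU of two radar diagrams, each normalized by its own maximum."""
    r1 = np.asarray(metrics_1, dtype=np.float64) / np.max(metrics_1)
    r2 = np.asarray(metrics_2, dtype=np.float64) / np.max(metrics_2)
    inter = radar_area(np.minimum(r1, r2))   # per-angle minimum radius
    union = radar_area(np.maximum(r1, r2))   # per-angle maximum radius
    return inter / union

def dmrs(radars_model_1, radars_model_2):
    """DMRS of Eq. (6): average radar IoU over the evaluation dataset."""
    ious = [radar_iou(a, b) for a, b in zip(radars_model_1, radars_model_2)]
    return float(np.mean(ious))
```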
### _Image denoising adversarial space_
According to above observations, adversarial regions are similar across models and across input images. To verify that the adversarial effect is almost everywhere in the region instead of isolated points, we analyzed the linear combination of any two adversarial samples obtained in the noise sample circle. The linear combination is
\[\text{s}_{l}=\lambda\text{s}_{1}+(1-\lambda)\text{s}_{2} \tag{8}\]
where \(\lambda\in[0,1]\) is the hyperparameter. \(s_{1},s_{2}\) are the adversarial instances and \(s_{l}\) is the new example formed by the combination.
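The interpolation of Eq. (8) is a one-liner; the sketch below enumerates the combinations used in Fig. 6, traversing \(\lambda\) in steps of 0.1.

```python
import numpy as np

def interpolate_adversarial(s1, s2, num=11):
    """Linear combinations of two adversarial samples (Eq. (8)), lambda from 0 to 1."""
    lambdas = np.linspace(0.0, 1.0, num)   # 0.0, 0.1, ..., 1.0
    return [(lam, lam * s1 + (1.0 - lam) * s2) for lam in lambdas]
```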
In Fig. 6, it can be seen that the linear combination of two adversarial samples can generate new adversarial samples. In other words, the image denoising adversarial region constitutes a continuous space in which an infinite number of adversarial samples lurk.
## IV Experiments
_Dataset_: We follow the setup of [4] using the ClImageNet400 dataset for training, which has 400 grayscale images of size 180 x 180. We embed adversarial attacks in the testing phase and use the Set12 and Set3c datasets for evaluation. These images are widely used for the evaluation of Gaussian denoising methods and are not included in the training dataset.
_Metrics_: There are two widely used metrics for full-reference image quality assessment: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [40]:
\[\mathrm{PSNR}(\mathbf{X},\mathbf{Y})=20\cdot\log_{10}\left(\frac{\mathrm{MAX_{I}}}{\sqrt{\mathrm{MSE}(\mathbf{X},\mathbf{Y})}}\right) \tag{9}\] \[\mathrm{SSIM}(\mathbf{X},\mathbf{Y})=\frac{\left(2\mu_{x}\mu_{y}+c_{1}\right)\left(2\sigma_{xy}+c_{2}\right)}{\left(\mu_{x}^{2}+\mu_{y}^{2}+c_{1}\right)\left(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2}\right)} \tag{10}\]
where \(X\) and \(Y\) are the two images being compared. \(\mu_{x}\), \(\mu_{y}\), \(\sigma_{x}^{2}\), \(\sigma_{y}^{2}\) are the corresponding means and variances, and \(\sigma_{xy}\) is the covariance. \(MAX_{I}\) is the maximum intensity, which is usually 255 in 8-bit representation.
To evaluate the effectiveness of adversarial denoising attacks, in addition to these two metrics, we chose the mean
Fig. 5: Adversarial regions of various models under adversarial samples generated for DnCNN.
Fig. 6: A linear combination of two arbitrary adversarial samples, where the coefficient is traversed from 0 to 1 in steps of 0.1.
absolute error (MAE) to measure the performance of image denoising adversarial attack methods.
\[\text{MAE}\ =\ \frac{\|X-Y\|_{1}}{N_{x}N_{y}} \tag{11}\]
where \(N_{x}\) and \(N_{y}\) are the image dimensions. MAE provides a good point-by-point measure of the error.
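For reference, the PSNR and MAE of Eqs. (9) and (11) can be computed directly in NumPy; SSIM (Eq. (10)) is usually taken from an existing implementation such as scikit-image. The helper names below are ours.

```python
import numpy as np

def psnr(x, y, max_i=255.0):
    """PSNR of Eq. (9); note the square root of the MSE in the denominator."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 20.0 * np.log10(max_i / np.sqrt(mse))

def mae(x, y):
    """MAE of Eq. (11): mean absolute pixel difference."""
    return np.mean(np.abs(x.astype(np.float64) - y.astype(np.float64)))

# SSIM (Eq. (10)) can be computed with, e.g.,
# skimage.metrics.structural_similarity(x, y, data_range=255).
```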
### _The denoising-PGD for image denoising adversarial attack_
We apply denoising-PGD on the DnCNN model to generate adversarial samples for Set12. The noise level is set to 25, the maximum perturbation \(\varepsilon\) to 0.012, and the number of iteration steps to 5.
The adversarial noise we generate is a perturbation of very small noise level superimposed on a Gaussian noise image. However, the tiny perturbation can nevertheless produce a great disturbance in the denoising process. As shown in Fig. 7, the adversarial noise image is visually the same as the Gaussian noise image, but the denoising effect shows a great difference, with a PSNR difference of 1.510 dB. The magnified area in the image shows that the artifacts that are slight in the Gaussian denoised image are very serious in the adversarial denoised image, and the texture details of the image are also changed.
Table I lists the adversarial effects caused by denoising-PGD on DnCNN for Set12. The adversarial sample of each image behaves significantly worse than the Gaussian noise image during the denoising process in every evaluation metric. On average, PSNR drops by up to 0.8 dB, SSIM drops by 0.04, and MAE increases by 0.8.
Considering that the adversarial noise is superimposed on the original Gaussian noise image in this paper, this operation may shift the noise level of the noise image. To ensure experimental rigor, we generated a Gaussian noise map whose noise level matches that of the adversarial noise image and tested the denoising ability of the deep image denoising method on it. We found that the deep image denoising method can normally denoise such Gaussian noise images with slightly fluctuating noise levels. Therefore, it is verified that our adversarial samples are not adversarial merely due to perturbed noise levels.
Further, we use denoising-PGD to generate adversarial samples for the blind denoising model DnCNN-B with the same hyperparameter setting. Table II shows that on DnCNN-B the PSNR drop is up to 4.0 dB, the SSIM drop is 0.22, and the MAE difference is 4.1. The adversarial effect is much stronger on DnCNN-B and the visual difference is much more obvious, with many noise traces remaining on the image. DnCNN-B is less robust compared to DnCNN.
### _Denoising model robustness evaluation_
We use denoising-PGD to generate adversarial samples against the target model on Set12. PSNR is used as evaluation metric, as shown in Table III. We can see that data-driven non-blind denoising models are the most robust, followed by hybrid-driven models, while blind denoising models are the least robust.
### _The transferability of denoising-PGD_
To measure the transferability of the generated adversarial samples, several denoising methods are selected for comparison and exploration, including the non-blind denoising methods FFDNet and ECNDNet, the blind denoising methods DnCNN-B and RDDCNN-B, the plug-and-play method DPIR and the unfolding method DeamNet. We generate adversarial samples
Fig. 7: The denoising-PGD adversarial attack effect (house).
on DnCNN and then test these adversarial samples on the above models.
We surprisingly find that all the tested models show an adversarial effect on adversarial samples generated for DnCNN. In the zoomed areas of Fig. 8, the denoised Gaussian noise image exhibits some unrealistic smoothness, while for the adversarial noise image the noise is not fully removed and blurred noise patches appear. From the measured metrics, there is a significant decrease in PSNR: down by 1.7504 dB on the FFDNet model, a smaller decrease of 0.9018 dB on the DnCNN-B model, and the largest decrease of 2.2879 dB on the DPIR model. In addition, the PSNR decreases by 1.6065 dB on ECNDNet, 0.7405 dB on RDDCNN-B, and 2.1565 dB on DeamNet.
Table IV shows the denoising performance on the non-blind denoising method FFDNet, the blind denoising method DnCNN-B, and the plug-and-play method DPIR for adversarial samples generated on the DnCNN model using denoising-PGD on the Set12 dataset at noise level 25. For the averaged PSNR, FFDNet decreases by 0.7 dB, DnCNN-B decreases by 0.7 dB, and DPIR decreases by 0.9 dB. The SSIM and MAE metrics show similar trends. It can be seen that the blind denoising model is relatively resistant to the adversarial perturbation created for the non-blind denoising model. However, the plug-and-play model, as a model-data hybrid-driven method, is unexpectedly sensitive to the adversarial perturbation created for the non-blind denoising model. Note that denoising-PGD, which is supposed to have poor transferability, exhibits excellent transferability among multiple deep image denoising methods designed with various motivations and mathematical backgrounds, across many years. It indicates that current deep denoising models are essentially similar, at least in the local function regression around all the test data.
In the experiments to test the transferability from DnCNN-B, we selected the non-blind denoising methods FFDNet and ECNDNet, the blind denoising methods DnCNN-B and RDDCNN-B, the plug-and-play method DPIR and the deep unfolding method DeamNet to test the adversarial samples generated by DnCNN-B under denoising-PGD. The results are compiled in Table V.
Compared to the non-blind denoising and plug-and-play methods, whose PSNR decreases by less than 1 dB, the blind denoising method DnCNN-B decreases by 4.01 dB and RDDCNN-B by 2.61 dB. The SSIM and MAE metrics show the same trend, i.e., the adversarial samples generated with the blind denoising model DnCNN-B are highly transferable to blind denoising models and are also transferable to non-blind denoising methods as well as plug-and-play methods, although the attack effect is largely weakened. A possible reason is that blind denoising methods have a larger adversarial space compared to non-blind denoising and plug-and-play methods. Thus, adversarial samples generated for DnCNN-B can fall outside the adversarial regions of non-blind denoising and plug-and-play methods.
### _Denoising model robustness similitude evaluation_
We use denoising-PGD to generate adversarial samples on the Set12 dataset for the DnCNN model and then calculate corresponding noise samples at different angles as the evaluation noise sample set for the similitude evaluation of all the models. As shown in Table VI, deep image denoising models of the same type have up to 90% similarity even though the comparison models were proposed across many years. And surprisingly, the model-data hybrid-driven models with partial interpretability also has around 90% similarity with the non-blind denoising models.
### _Color image denoising adversarial attack_
Under the constraint of denoising-PGD, 3-channel color RGB adversarial samples are generated for non-blind denoising model FFDNet with noise level set to 25, maximum perturbation \(\varepsilon\) set to 0.055, and 5 iterative steps.
As can be seen in Fig. 9, the addition of adversarial noise does not make a perceptible difference to the noise image, but makes the model's denoising substantially less effective. We can see that PSNR decreases by 0.5374 dB, MAE increases by 0.3679, and SSIM decreases by about 0.0139. Visually,
Fig. 8: Transferability of adversarial samples generated by DnCNN to various models (house). The test values are PSNR/MAE/SSIM from left to right.
the enlarged areas in the denoised image produce the wrong texture, and many image details are erased.
In the experiments to test the transferability, we selected multiple types of deep image denoising models for color RGB image denoising: non-blind denoising methods FFDNet and BRDNet, blind denoising methods Noise2Noise and FAN, plug-and-play methods CurvPnP and DPIR. The adversarial samples generated for FFDNet under denoising-PGD are tested on the above models.
As can be seen from Table VII, the transferability of denoising-PGD samples generated on RGB images for FFDNet is lower than that of samples generated on grayscale images for DnCNN. The adversarial examples show a high adversarial effect on the same type of denoising model, with PSNR decreasing by 0.59 dB, MAE increasing by 0.44, and SSIM decreasing by 0.01 for FFDNet. The adversarial samples show a higher adversarial effect on the plug-and-play models, with PSNR decreasing by about 1.1 dB, while for the blind denoising models the adversarial effect decreases but still amounts to about 0.3 dB in PSNR. Based on this, we can say that the best
Fig. 9: Effect of denoising-PGD on color RGB images (butterfly).
transferable adversarial samples can be obtained by applying the image denoising adversarial attack method to a non-blind deep image denoising model. The observation also coincides with the result in section III.C, i.e., the non-blind denoising model has the smallest adversarial space.
### _Deep image denoising adversarial training_
In terms of defense, there are two main categories: adversarial defense and adversarial robustness. Adversarial defense methods usually obfuscate the loss gradient, making effective attacks harder to find, and they often succeed against specific, weaker attacks and fail against stronger ones. In this sense, networks that use adversarial defense methods are secure against weak attacks, but are not truly robust because the adversarial samples remain. Currently, the only method that has demonstrated true adversarial robustness is adversarial training, in which the network adds adversarial samples as input to the training process.
We used denoising-PGD with 5 iterations and \(\varepsilon=0.012\) to generate the adversarial noise images. The adversarial noise images and Gaussian noise images are mixed in a 1:1 ratio and fed into the model training phase. The adversarial samples re-generated by denoising-PGD on DnCNN and DnCNN-B are integrated as a test set for testing the resistance of the adversarially trained model against adversarial samples. As can be seen in Fig. 10, the PSNR decrease is reduced from the original 0.8862 dB to 0.1609 dB, and the denoised adversarial samples become visually closer to the original images. The evaluation of the effect of adversarial training is shown in Table VIII. The difference in the average denoising effect before and after adversarial training is reduced from 0.8 dB to an impressive 0.1 dB. Surprisingly, the adversarially trained denoising model does not show any performance decrease on the original Gaussian noise images, even though the main drawback of adversarial training is usually a performance drop on the original task. Fig. 11 shows that the adversarially trained model has an even better denoising effect on pure Gaussian noise images than the vanilla deep image denoising model. The adversarially trained model can reduce artifacts that commonly appear in deep image denoising models.
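A hedged sketch of the 1:1 data mixing used for adversarial training is shown below. It assumes a PyTorch training loop, images in \([0,1]\), and an attack routine with the signature of the earlier denoising-PGD sketch; the function name and the half-and-half batch split are our own illustrative choices.

```python
import torch

def adversarial_training_batch(model, attack, clean_batch, sigma=25.0 / 255.0):
    """Build one training batch with the 1:1 Gaussian/adversarial mixing (sketch).

    clean_batch : tensor of clean images in [0, 1]
    attack      : denoising-PGD routine, e.g. the sketch given earlier
    """
    noisy = (clean_batch + sigma * torch.randn_like(clean_batch)).clamp(0.0, 1.0)
    half = clean_batch.shape[0] // 2
    # Half of the inputs stay Gaussian, the other half become adversarial samples.
    adv = attack(model, noisy[half:], clean_batch[half:], eps=0.012, steps=5)
    inputs = torch.cat([noisy[:half], adv], dim=0)
    return inputs, clean_batch  # targets are the clean images, as usual
```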
## V Discussion
In this section, we explore in Section V-A whether generating adversarial perturbation directly on the clean image can lead to a better adversarial effect. In Section V-B, we demonstrate that a classical model-driven denoising method is resistant to adversarial attacks. This explains why many defensive methods that try to eliminate adversarial noise in the preprocessing stage tend to use classical model-driven image denoising methods instead of the more effective deep image denoising methods of recent years. We introduce L2-denoising-PGD in Section V-C and Section V-D to further mimic the original Gaussian noise distribution.
### _Image denoising adversarial perturbation on clean images_
Adversarial perturbation is itself a noise which can be applied directly to the clean image instead of to Gaussian noise images as done in this work. However, adversarial perturbation applied to the clean image follows a non-Gaussian distribution and results in a weaker adversarial effect.
To maintain a similar noise standard deviation, the parameter setting of 10 iterations and \(\varepsilon=0.157\) was used for generating the adversarial samples on the clean image. The parameter setting of 5 iterations and \(\varepsilon=0.012\) was used for generating the adversarial samples on the Gaussian noise image. As can be seen from Fig. 12, generating adversarial attack on the clean image shows little adversarial effect with tiny PSNR
Fig. 11: Denoising effect of the adversarial training model and the original model when facing the same Gaussian noise image.
Fig. 12: Demonstration of the effect of generating adversarial samples based on the clean and noise images (a) based on the clean image (b) based on the noise image.
Fig. 10: Comparison of the results of the adversarial training test. (a) is the clean image. (b)(c) shows the denoising effect of the original model. (d)(e) shows the denoising effect of the model after adversarial training.
drop in the denoising result, although there are more artifacts. Generating adversarial samples on the noise image has an obvious adversarial effect: the denoised image shows texture changes and new artifacts, while the PSNR of the denoising result decreases significantly. Thus, in this work, the adversarial perturbation is applied to the Gaussian noise images. From another aspect, the adversarial perturbation is an attack on the Gaussian noise rather than on the image content, which is consistent with the phenomenon that robustness is image-content invariant.
In summary, the image denoising network does not exhibit the adversarial phenomenon when adversarial instances are generated on the clean image. We conjecture that, similar to the classification boundary in image classification tasks, image denoising also has a denoising space, and the region outside this space is where the network finds it difficult to denoise. Therefore, in this paper, we generate the adversarial samples from the noise map, i.e., the input with Gaussian noise added.
### _Image denoising adversarial attack on the classic method_
Among the adversarial defense methods based on denoising, most of them use classical model-driven methods as purifiers. We test the adversarial samples generated by DnCNN in the classical model-driven image denoising method BM3D.
As shown in Fig. 13, the adversarial attack of image (c) has almost no effect on BM3D, and its PSNR after denoising only differs from that of the denoised Gaussian noise image by 0.0636 dB, which is within the range of normal denoising fluctuations. The denoised adversarial sample of image (d) also only differs from the denoised Gaussian noise image by 0.2219 dB in PSNR. Thus, the classical model-driven image denoising method BM3D shows strong resistance to the adversarial samples generated by DnCNN.
### _L2-denoising-PGD image denoising adversarial attack_
Denoising-PGD applies a uniformly distributed random perturbation to the image before starting the iteration when generating adversarial samples, which inevitably causes changes in the noise distribution. Zhu et al. [33] proposed that moving an image out of its original distribution can enhance adversarial transferability. To ensure experimental rigor regarding the distribution, we impose an L2 constraint on denoising-PGD and change the random perturbation to a Gaussian one, yielding L2-denoising-PGD. Algorithm 1 describes the detailed steps of L2-denoising-PGD.
INPUT: Gaussian noise sample \(x\); its clean image \(y\); denoiser \(D\); loss function \(J\); perturbation size \(\varepsilon\); maximum iterations \(T\).
OUTPUT: adversarial sample \(x^{\prime}\).
\(\alpha\leftarrow\varepsilon/T\)
\(x_{0}^{\prime}\leftarrow x\)
for \(t=0:T-1\) do
  apply a random Gaussian perturbation to \(x\)
  get the denoised image \(D(x)\)
  calculate the gradient \(\bigtriangledown_{x}J(D(x),y)\)
  \(x^{t+1}=x^{t}+\alpha\,\mathrm{sign}(\bigtriangledown_{x}J(D(x),y))\)
  \(x^{t+1}=\mathrm{clip}_{L_{2}}(x^{t+1}-y)+y\)
  \(x^{t+1}=\mathrm{clip}_{image\_range}(x^{t+1})\)
  \(t=t+1\)
end for

**Algorithm 1** L2-denoising-PGD image denoising adversarial attack.
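The step of Algorithm 1 that differs from plain denoising-PGD is the L2 clipping of the total noise \(x^{t+1}-y\). A PyTorch-style sketch of that projection is given below; it assumes batched tensors, and the radius `max_l2` is an assumed parameter (e.g. the norm of the original Gaussian noise), not a value specified in Algorithm 1.

```python
import torch

def l2_clip_noise(x_adv, y_clean, max_l2):
    """L2 projection step of Algorithm 1: keep the total noise x' - y inside an L2 ball
    so that the perturbed noise stays close to the Gaussian assumption."""
    noise = x_adv - y_clean
    norm = noise.flatten(1).norm(p=2, dim=1).clamp(min=1e-12)        # per-image noise norm
    factor = (max_l2 / norm).clamp(max=1.0)                          # shrink only if too large
    factor = factor.view(-1, *([1] * (noise.dim() - 1)))
    return y_clean + noise * factor
```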
### _The effect of L2-denoising-PGD adversarial attack_
To eliminate the pitfall that the random perturbation of the PGD method may affect the noise distribution, the L2-denoising-PGD image denoising adversarial attack method is proposed. As can be seen from the distribution comparison plot in Fig. 14, the adversarial noise of denoising-PGD still has some influence on the distribution, whereas it overlaps more with the Gaussian distribution curve after the L2 constraint is applied. In other words, the
Fig. 13: Effectiveness of BM3D under image denoising adversarial attacks, where the adversarial samples are generated by DnCNN under denoising-PGD. The test values are PSNR/MAE/SSIM from left to right, respectively.
noise distribution of denoising-PGD is close to the Gaussian distribution assumption, while the noise distribution of the L2-denoising-PGD is more consistent with the Gaussian distribution assumption after the constraint is applied.
The L2-denoising-PGD image denoising adversarial attack method also attaches an L2 constraint to the perturbation range of the noise, which further enhances the imperceptibility of the adversarial perturbation and further reduces the impact on the original image. As can be seen from Fig. 15, the adversarial effect under such a constraint, which conforms to the Gaussian noise assumption and limits the noise perturbation range, is only slightly affected: the PSNR drop is 0.8051 dB, which is 0.2584 dB lower than in the unconstrained case, and the image still exhibits a significant adversarial effect.
In addition, by using L2-denoising-PGD on different kinds of deep denoising networks to generate grayscale and color adversarial samples and testing their adversarial effect and transferability on the remaining classes of deep denoising networks, we observe that the method remains effective while maintaining the noise distribution, eliminating the concern that the adversarial effect is significantly related to a change of the noise distribution.
## VI Conclusion
In this work, we investigate a new task called image denoising adversarial attack, aiming to explore whether deep image denoising methods are robust to small adversarial perturbations. Based on this idea, we propose the denoising-PGD image denoising adversarial attack and the L2-denoising-PGD attack with a constraint on the noise distribution. Through extensive experiments, we verify that the proposed method has a consistently excellent adversarial effect as well as transferability among different types of deep image denoising models. Deep image denoising methods are not robust, especially blind denoising models. The pure data-driven non-blind denoising models have shown better robustness compared to hybrid-driven models and blind denoising models. Unlike adversarial attack tasks targeting image classification, denoising-PGD shows surprising transferability in image denoising. We conjecture and verify that this is due to the high similitude of deep image denoising methods and that the adversarial region forms a continuous space. In addition, we find that the classical model-driven image denoising approach shows resistance to the adversarial attack.
|
2306.09170 | Can ChatGPT pass the Vietnamese National High School Graduation
Examination? | This research article highlights the potential of AI-powered chatbots in
education and presents the results of using ChatGPT, a large language model, to
complete the Vietnamese National High School Graduation Examination (VNHSGE).
The study dataset included 30 essays in the literature test case and 1,700
multiple-choice questions designed for other subjects. The results showed that
ChatGPT was able to pass the examination with an average score of 6-7,
demonstrating the technology's potential to revolutionize the educational
landscape. The analysis of ChatGPT performance revealed its proficiency in a
range of subjects, including mathematics, English, physics, chemistry, biology,
history, geography, civic education, and literature, which suggests its
potential to provide effective support for learners. However, further research
is needed to assess ChatGPT performance on more complex exam questions and its
potential to support learners in different contexts. As technology continues to
evolve and improve, we can expect to see the use of AI tools like ChatGPT
become increasingly common in educational settings, ultimately enhancing the
educational experience for both students and educators. | Xuan-Quy Dao, Ngoc-Bich Le, Xuan-Dung Phan, Bac-Bien Ngo | 2023-06-15T14:47:03Z | http://arxiv.org/abs/2306.09170v3 | # Can ChatGPT pass the Vietnamese National High School Graduation Examination?
###### Abstract
This research article highlights the potential of AI-powered chatbots in education and presents the results of using ChatGPT, a large language model, to complete the Vietnamese National High School Graduation Examination (VNHSGE). The study dataset included 30 essays in the literature test case and 1,700 multiple-choice questions designed for other subjects. The results showed that ChatGPT was able to pass the examination with an average score of 6-7, demonstrating the technology's potential to revolutionize the educational landscape. The analysis of ChatGPT performance revealed its proficiency in a range of subjects, including mathematics, English, physics, chemistry, biology, history, geography, civic education, and literature, which suggests its potential to provide effective support for learners. However, further research is needed to assess ChatGPT performance on more complex exam questions and its potential to support learners in different contexts. As technology continues to evolve and improve, we can expect to see the use of AI tools like ChatGPT become increasingly common in educational settings, ultimately enhancing the educational experience for both students and educators.
ChatGPT, chatbot, performance analysis, large language model, Vietnamese National High School Graduation Examination
## I Introduction
Artificial intelligence (AI) has the potential to revolutionize education. According to Chassignol et al. [1], AI can transform four areas of education: customized content, innovative teaching strategies, technology-enhanced evaluation, and communication between students and teachers. Zawacki-Richter et al. [2] provided an overview of AI applications in higher education, including profiling and prediction, evaluation and assessment, adaptive systems and personalization, and intelligent tutoring systems. Hwang et al. [3] suggested potential research topics in AI applications for education. Chen et al. [4] focused on using AI for administration, instruction, and learning to improve administrative operations, content modification, and learning quality. Dao et al. [5] highlighted the potential of generative AI in education to reduce workload and increase learner engagement in online learning. Finally, Nguyen et al. [6] proposed an online learning platform that incorporates a Vietnamese virtual assistant to assist teachers in presenting lectures to students and to simplify editing without the need for video recording.
Recent advancements in large language models (LLMs) have enabled AI to understand and communicate with humans, creating opportunities for its use in education. LLMs have shown great potential in education, content development, and language translation. The two primary architectures of LLMs are BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer). In 2018, Google introduced BERT [7], which excelled in various natural language processing (NLP) tasks. OpenAI developed the GPT algorithm [8], which was trained on extensive unlabeled text datasets. Facebook's RoBERTa [9] built on Google's research, and Google released T5 [10] in 2019. In 2020, OpenAI created GPT-3 [11], which demonstrated exceptional performance in various NLP tasks. Recently, OpenAI developed GPT-4 [12], a text-to-text machine learning system capable of processing both text and image inputs. GPT-4 has shown human-level performance in many professional and academic criteria, although it may not perform as well as humans in other contexts.
In recent years, there has been an increasing interest in the potential use of chatbots in education. Several studies have explored the potential benefits, concerns, and challenges of integrating chatbots, specifically ChatGPT1, into educational settings. Halaweh et al. [13] examined the concerns expressed by educators regarding the integration of ChatGPT into educational settings. In another study, Zhai et al. [14] conducted a study to explore the potential impacts of ChatGPT on education by using it to write an academic paper. Furthermore, Stephen [15] has discussed potential applications of the ChatGPT language model in higher education, including brainstorming and writing help, professional communication, and individualized learning. Kasneci et al. [16] have discussed the potential benefits and challenges of using large language models in educational settings.
Footnote 1: [https://chat.openai.com/chat](https://chat.openai.com/chat)
Studies have examined the ability of ChatGPT, a large language model, to participate in exams. Gerd Kortemeyer [17]
evaluated ChatGPT's performance on a calculus-based physics course and found that it would narrowly pass but exhibited errors of a beginning learner. Katz et al. [18] evaluated GPT-4's zero-shot performance on the Uniform Bar Examination (UBE) and found that it outperformed human test-takers and prior models on the multiple-choice component, beating humans in five out of seven subject areas. GPT-4 also scored well on the open-ended essay and performance test components, exceeding the passing threshold for all UBE jurisdictions. Gilson et al. [19] evaluated ChatGPT's performance on multiple-choice questions from the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 exams and found that its performance decreased as question difficulty increased but provided logical justification for its answer selection. Teo [20] conducted a study to evaluate ChatGPT's ability to produce text indistinguishable from human-generated text and exhibit critical thinking skills, suggesting that it has the potential to be used for academic misconduct in online exams. The authors propose using invigilated and oral exams, advanced proctoring techniques, and AI-text output detectors to combat cheating but acknowledge that these solutions may not be foolproof.
The existing literature highlights the potential of chatbots to enhance learning outcomes and provide support to students in various educational settings. However, there is a lack of research investigating their ability to complete high-stakes examinations. Hence, this study aims to bridge this gap in the literature by examining ChatGPT potential to pass the Vietnamese National High School Graduation Examination (VNHSGE). In this paper, we present the results of ChatGPT in completing the VNHSGE exam. The purpose of this study is to answer the question: "Can ChatGPT pass the Vietnamese High School Graduation Examination?" To answer this question, we utilized the VNHSGE dataset [21] which was created from the VNHSGE exam and similar examinations and designed to evaluate the capabilities of LLMs. We evaluated ChatGPT performance on this dataset and analyzed the results to determine the model's accuracy in completing the exam. Furthermore, we will compare the performance of ChatGPT in our test case to other examinations previously conducted by the OpenAI team [12]. By doing so, this study aims to contribute to a better understanding of the effectiveness of AI-powered chatbots in supporting learners in high-stakes examinations.
## II Methods
This paper aims to examine the performance of ChatGPT in completing the VNHSGE exam and to provide a critical analysis of the results. Through this research, we aim to contribute to the development of AI tools for educational support and to shed light on the future possibilities of AI in transforming the education landscape.
### _Dataset_
In this study, we utilized the evaluation set of the VNHSGE dataset [21]. The dataset includes questions from the VNHSGE exam for various subjects such as mathematics, English, physics, chemistry, biology, history, geography, civic education, and literature. The VNHSGE dataset was compiled by high school instructors and the Vietnamese Ministry of Education and Training (VMET) and includes both official and illustrative exam questions.
### _Prompt_
In this study, we employed zero-shot learning to evaluate the performance of ChatGPT on the VNHSGE dataset D. The dataset consists of pairs of questions Q and ground truth solutions S. We denote the prompt context by P. The answer A of ChatGPT is determined by the following equation:
\[A=f(P,Q) \tag{1}\]
where f is ChatGPT and takes into account the context P and the question Q. The context P in this case is a specific structure that guides the response of ChatGPT. It instructs ChatGPT to provide the answer in the following format: { Choice: "A" or "B" or "C" or "D"; Explanation: Explain the answer; The question is: [the actual question] }. By following this structure, ChatGPT generates its answer A, which can be evaluated and compared to the ground truth solution S.
Figure 1 illustrates the process of prompting ChatGPT and retrieving the results. In the case of multiple-choice questions from the VNHSGE dataset, the questions are formatted to align with the expected answer format. The questions are then sent to OpenAI's API.
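As an illustration of this prompt structure, a minimal sketch is given below. The helper name `build_prompt`, the sample question, and the omission of the actual API call are our own assumptions for illustration; they are not the authors' released code.

```python
# Minimal sketch of the zero-shot prompt construction described above.
# `build_prompt` and the sample question are illustrative assumptions.

def build_prompt(question: str) -> str:
    """Wrap a multiple-choice question in the fixed answer-format context P."""
    context = (
        'Answer in the following format: '
        '{ Choice: "A" or "B" or "C" or "D"; '
        'Explanation: Explain the answer; '
        'The question is: '
    )
    return context + question + " }"

example_question = (
    "Which of the following is a prime number? A. 4  B. 6  C. 7  D. 9"
)
print(build_prompt(example_question))
# The resulting string would then be sent to the chat model through the API.
```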
### _Grading_
To evaluate the performance of ChatGPT in answering questions, we assessed ChatGPT's response by comparing it to the ground truth solution. The evaluation process was conducted using a binary grading system, where ChatGPT's answer was classified as correct or incorrect. The ground truth solution S for each question Q was determined by a human expert. The answer A generated by ChatGPT was then compared to the ground truth solution using the following equation:
\[G=g(Q,S,A) \tag{2}\]
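A minimal sketch of this binary grading step is shown below; the way the chosen letter is parsed out of the model response (a regular expression on the `Choice:` field) is an assumption about the format induced by the prompt, not the authors' exact script.

```python
# Sketch of the binary grading function g(Q, S, A): the extracted choice is
# compared with the human-annotated ground-truth answer S (1 = correct, 0 = not).
import re

def grade(answer_text: str, ground_truth: str) -> int:
    match = re.search(r'Choice:\s*"?([ABCD])"?', answer_text)
    predicted = match.group(1) if match else None
    return int(predicted == ground_truth)

print(grade('Choice: "C"; Explanation: 7 has no divisors other than 1 and 7.', "C"))  # 1
print(grade('Choice: "B"; Explanation: ...', "C"))                                    # 0
```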
## III Results
### _ChatGPT's results_
In this study, we aimed to investigate whether ChatGPT, a large language model trained by OpenAI, could pass the VNHSGE exam. To achieve this goal, we compared ChatGPT performance in completing the examination with the correct answer numbers for mathematics, literature, English, physics, chemistry, biology, history, geography, and civic education in the years 2019 to 2023. The detailed results of ChatGPT performance on the examination are provided in [22].
Table I presents the correct answer numbers for each subject in the VNHSGE exam from 2019 to 2023. The results show that the number of correct answers varies across subjects and years. The highest number of correct answers in a subject was 43 for English in 2020, while the lowest was 16 for chemistry in 2019. Overall, the number of correct answers
in each subject remains relatively stable over the years, with only slight fluctuations.
Table II displays the scores obtained by ChatGPT in the VNHSGE exam across all subjects from 2019 to 2023. The data indicates that the average scores achieved by ChatGPT range from 4.8 to 7.92, with the lowest average score being observed in chemistry, and the highest in English. This highlights ChatGPT diverse performance across the range of subjects, with some subjects yielding higher scores than others. Figure 2 presents the average results of the subjects for each year from 2019 to 2023, along with the corresponding standard deviation bars. The average results were computed based on the data presented in Table II, which shows ChatGPT scores for mathematics, English, physics, chemistry, biology, history, geography, civic education, and literature for each year. English consistently had the highest average score, ranging from 7.6 in 2019 to 8.6 in 2020, followed by civic education with scores ranging from 6.0 in 2019 to 8.25 in 2022. In contrast, chemistry had the lowest average score, ranging from 4.0 in 2019 to 4.75 in 2022. Biology and history also had relatively low average scores, ranging from 5.25 to 6.0. The results reveal that the performance of ChatGPT in chemistry has remained relatively stable across all four years, with the exception of 2021. Conversely, there is a noticeable variability in ChatGPT scores for subjects such as history, civic education, and geography, suggesting that its performance is less consistent in these areas. The data analysis shows a low degree of variability in ChatGPT scores in subjects like English, literature, physics, and biology, indicating a relatively consistent performance in these subjects. It is worth noting that the variability in ChatGPT scores could be attributed to the complexity of questions and topics covered in these subjects, which may require a deeper understanding of cultural and historical contexts. Overall, the results suggest that ChatGPT performance in certain subjects might require further development to achieve a more consistent and stable performance level, which could be vital for passing the VNHSGE exam.
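For reference, the per-year averages and standard deviation bars of Figure 2 can be recomputed directly from the rows of Table II with a short script such as the following sketch (whether the figure uses the sample or the population standard deviation is not stated, so the bar heights may differ slightly).

```python
# Recompute the per-year mean and (sample) standard deviation of the nine
# subject scores listed in Table II; these are the quantities plotted in Fig. 2.
import statistics

scores = {  # year -> [Math, Lit, Eng, Phy, Che, Bio, His, Geo, Civ]
    2019: [5.2, 7.5, 7.6, 6.0, 4.0, 6.0, 4.25, 5.0, 6.0],
    2020: [6.6, 6.89, 8.6, 6.25, 4.25, 6.0, 4.75, 5.25, 7.0],
    2021: [6.0, 7.5, 7.6, 6.0, 6.25, 5.25, 5.5, 7.5, 6.25],
    2022: [6.2, 5.63, 8.0, 6.5, 4.75, 5.75, 6.0, 6.25, 8.25],
    2023: [5.4, 6.48, 7.8, 5.75, 4.75, 6.0, 7.75, 6.75, 7.75],
}

for year, subject_scores in scores.items():
    mean = statistics.mean(subject_scores)
    std = statistics.stdev(subject_scores)
    print(f"{year}: mean = {mean:.2f}, std = {std:.2f}")
```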
### _Comparison of ChatGPT's performance in VNHSGE and other exams_
Figure 3 depicts ChatGPT performance in various test cases, including the VNHSGE exam and other examinations [12]. Overall, the results show that ChatGPT performance in the VNHSGE exam is comparable to its performance in other examinations, such as SAT Math, AP Biology, and AP Physics. However, there are some differences in its performance across various test cases. For example, in the mathematics test case, ChatGPT achieved a lower score on the VNHSGE mathematics test compared to the SAT Math. This difference could be due to the higher level of analytical and critical thinking skills required in the VNHSGE mathematics test, which is also conducted in Vietnamese.
In contrast, ChatGPT performance on the literature test case was higher than that of the GRE Writing examination. This success can be attributed to ChatGPT ability to search and summarize content effectively, as well as its training on a vast corpus of paragraphs, enabling it to produce a well-structured essay. Similarly, ChatGPT achieved a significantly higher score on the VNHSGE English test than on the AP English language examination. This difference could be due to the level of difficulty of the tests, with the VNHSGE English test being comparatively easier than the AP language examination.
In the physics test case, ChatGPT accuracy in the VNHSGE physics exam was considerably higher than that in the AP Physics 2 exam. In the chemistry test case, ChatGPT achieved its lowest score in the VNHSGE chemistry examination, but its performance was still significantly better than that in the AP Chemistry examination. The difference in performance could be due to the varying content and format of the exams, the level of difficulty of the assessment questions, and ChatGPT level of exposure to each exam's content.
Regarding the biology test case, ChatGPT performance across the VNHSGE biology exam and AP Biology examinations was relatively similar, achieving comparable accuracy scores of approximately 60%. Similarly, there were no signif
| Year | Math | Eng | Phy | Che | Bio | His | Geo | Civ |
|------|------|-----|-----|-----|-----|-----|-----|-----|
| 2019 | 26 | 38 | 24 | 16 | 24 | 17 | 20 | 24 |
| 2020 | 33 | 43 | 25 | 17 | 24 | 19 | 21 | 28 |
| 2021 | 30 | 38 | 24 | 25 | 21 | 22 | 30 | 25 |
| 2022 | 31 | 40 | 26 | 19 | 23 | 24 | 25 | 33 |
| 2023 | 27 | 39 | 23 | 19 | 24 | 31 | 27 | 31 |

Table I: The number of correct answers for multiple-choice subjects
Figure 1: Prompt to ChatGPT.
icant differences observed between ChatGPT performance on the VNHSGE history examination and the AP World History examination, with both tests producing scores of approximately 56.5% and 61%, respectively.
Unfortunately, there is a lack of data from the two subjects, geography and civic education, which makes it impossible to draw any conclusions about ChatGPT performance in these areas. In summary, the results suggest that ChatGPT performance varies across different test cases, and it is important to consider factors such as test difficulty, content, and language when evaluating its performance.
### _ChatGPT and Vietnamese Student Score Spectrums_
In this section, ChatGPT results are compared with those of Vietnamese students in the years 2019, 2020, 2021, 2022. The score distribution serves as an indicator for evaluating the performance of candidates in exams. Each year, Vietnam Ministry of Education and Training releases a chart depicting the score distribution for each subject. This distribution helps assess the candidates' competence and gauge the difficulty level of the exams, thus providing an evaluation of the applicants' proficiency. We have gathered the score distributions from 2019 to 2022, allowing us to compare the performance of ChatGPT with that of Vietnamese students. To facilitate this comparison, Table III presents the average score (AVS) and the most reached score (MVS) by Vietnamese students. For example, in 2019, the AVS and MVS for mathematics are 5.64 and 6.4, respectively.
#### III-C1 Mathematics
In this section, we investigated the performance of ChatGPT in mathematics in comparison to Vietnamese students. The data collected from 2019 to 2022 were analyzed and presented in Figure 4. The results indicate that the Mathematics scores of ChatGPT ranged from 5.2 to 6.6, which is lower than the majority of Vietnamese students. These findings suggest that while ChatGPT has shown impressive capabilities in natural language processing, its performance in mathematics is still suboptimal. Further investigation is warranted to understand the factors contributing to ChatGPT performance in mathematics and to improve its abilities in this area.
#### III-C2 Literature
This section examined the performance of ChatGPT in the literature component of Vietnam's national high school graduation exam, illustrated in Figure 5. The literature scores of ChatGPT ranged from 5.63 to 7.5, with an average score of 6.68. A comparison between ChatGPT performance and Vietnamese students in the literature test was conducted, and the results were intriguing. ChatGPT scores were consistently higher than the majority of Vietnamese students, except in 2022, where the results were slightly lower. This finding highlights the potential of language models to excel in tasks that require a deep understanding of textual content. Literature is a crucial component of the national high school graduation exam in Vietnam, and the performance of ChatGPT in this subject is critical in determining its ability to pass the exam. However, it is important to note that ChatGPT performance may vary depending on factors such as test difficulty, content, and language. Further research is needed to explore these factors and assess the reliability of ChatGPT performance in literature and other subjects.
| Year | Math | Lit | Eng | Phy | Che | Bio | His | Geo | Civ |
|------|------|-----|-----|-----|-----|-----|-----|-----|-----|
| 2019 | 5.2 | 7.5 | 7.6 | 6 | 4 | 6 | 4.25 | 5 | 6 |
| 2020 | 6.6 | 6.89 | 8.6 | 6.25 | 4.25 | 6 | 4.75 | 5.25 | 7 |
| 2021 | 6 | 7.5 | 7.6 | 6 | 6.25 | 5.25 | 5.5 | 7.5 | 6.25 |
| 2022 | 6.2 | 5.63 | 8 | 6.5 | 4.75 | 5.75 | 6 | 6.25 | 8.25 |
| 2023 | 5.4 | 6.48 | 7.8 | 5.75 | 4.75 | 6 | 7.75 | 6.75 | 7.75 |
| Avg | 5.88 | 6.8 | 7.92 | 6.1 | 4.8 | 5.8 | 5.65 | 6.15 | 7.05 |

Table II: The scores for subjects on a 10-point scale
Figure 2: Average scores and standard deviation of subjects for the years 2019-2023.
#### III-C3 English
In this section, we examined the English language proficiency of ChatGPT in comparison to Vietnamese students. The English scores of Vietnamese students were compared to those of ChatGPT in the period from 2019 to 2022, and the results were presented in Figure 6. The findings indicate that ChatGPT achieved a score for the English subject that ranged from 7.8 to 8.6. Notably, the results showed that ChatGPT performed better than the majority of Vietnamese students in English language proficiency.
It is worth noting that ChatGPT superior performance in English proficiency compared to Vietnamese students could be attributed to several factors, such as its access to a vast corpus of English texts and resources, as well as its ability to learn and adapt from a wide range of language use cases. However, it is important to recognize that the comparison between ChatGPT and Vietnamese students is not entirely straightforward, as ChatGPT is a language model and does not possess the same cognitive and cultural background as human learners. Moreover, the study did not take into account other variables that might influence language learning, such as socio-economic status, age, and motivation, which could affect the English proficiency of Vietnamese students.
Overall, the results of this study demonstrate the potential of large language models like ChatGPT to achieve high levels of proficiency in foreign languages such as English. The study highlights the need for further research to explore the use of such models in language learning and teaching contexts and the implications of their use for learners and educators.
Figure 3: ChatGPT performance in Vietnamese examinations and other examinations.
| Year | Math | Lit | Eng | Phy | Che | Bio | His | Geo | Civ |
|------|------|-----|-----|-----|-----|-----|-----|-----|-----|
| 2019 | 5.64 / 6.4 | 5.49 / 6 | 4.36 / 3.2 | 5.57 / 6.25 | 5.35 / 6 | 4.68 / 4.5 | 4.3 / 3.75 | 6 / 6 | 7.37 / 7.75 |
| 2020 | 6.67 / 7.8 | 6.61 / 7 | 4.58 / 3.4 | 6.72 / 7.75 | 6.71 / 7.75 | 5.6 / 5.25 | 5.19 / 4.5 | 6.78 / 7.25 | 8.14 / 8.75 |
| 2021 | 6.61 / 7.8 | 6.47 / 7 | 5.84 / 4 | 6.56 / 7.5 | 6.63 / 7.75 | 5.51 / 5.25 | 4.97 / 4 | 6.96 / 7 | 8.37 / 9.25 |
| 2022 | 6.47 / 7.8 | 6.51 / 7 | 5.15 / 3.8 | 6.72 / 7.25 | 6.7 / 8 | 5.02 / 4.5 | 6.34 / 7 | 6.68 / 7 | 8.03 / 8.5 |

Table III: Average score (AVS) and most reached score (MVS) of Vietnamese students; each cell shows AVS / MVS
#### III-C4 Physics
This section investigates the performance of ChatGPT in comparison to Vietnamese students in physics. The results are presented in Figure 7, which displays the physics scores of Vietnamese students compared to ChatGPT over the period of 2019-2022. The findings reveal that ChatGPT score for physics subject varied from 5.75 to 6.5. Interestingly, the results also indicate that ChatGPT mostly achieved lower scores compared to the majority of Vietnamese students. This is consistent with previous research that has shown that language models like ChatGPT may not perform as well as humans on tasks that require reasoning or understanding of physical concepts. Further investigation is warranted to identify the factors contributing to the observed differences in performance between ChatGPT and Vietnamese students.
#### III-C5 Chemistry
Figure 8 presents the results of the chemistry subject scores of Vietnamese students compared to ChatGPT in the years 2019 to 2022. The findings reveal that ChatGPT score for the chemistry subject varied between 4.0 to 6.25, indicating a range of performance across the years. Interestingly, ChatGPT performance is consistently lower than the majority of Vietnamese students in chemistry. The results suggest that although ChatGPT has achieved competitive scores in other subjects, its performance in chemistry falls below the expectations of the average Vietnamese student. Further research could investigate the possible factors that contribute to this discrepancy and explore ways to improve ChatGPT performance in chemistry.
#### III-C6 Biology
This section assesses the performance of ChatGPT in the biology subject compared to Vietnamese students. Figure 9 displays the biology subject scores of Vietnamese students compared to ChatGPT from 2019 to 2022. The results indicate that ChatGPT scores for biology ranged from 5.25 to 6.00, with an average score of 5.63. Interestingly, ChatGPT was able to achieve higher results than the majority of Vietnamese students, with only a small proportion of students outscoring the language model. These findings suggest that
Figure 4: Comparison in mathematics score.
Figure 5: Comparison in literature.
Figure 8: Comparison in chemistry.
Figure 6: Comparison in English.
Figure 7: Comparison in physics.
ChatGPT has the potential to perform well in biology subjects and may serve as a useful tool for educational purposes. However, further research is needed to investigate the factors that contribute to the success of ChatGPT in biology and how it can be integrated into the education system to support student learning.
#### III-C7 History
Figure 10 presents a comparison of the history subject scores of Vietnamese students to those of ChatGPT from 2019 to 2022. The results reveal that the ChatGPT score for the history subject ranged from 4.25 to 7.75, indicating a significant variation across the years. Notably, ChatGPT achieved higher results than the majority of Vietnamese students in most cases. These findings suggest that ChatGPT proficiency in history is commendable and may reflect the potential of language models to perform well in various subjects. However, further investigation is needed to identify the factors that contribute to ChatGPT success and to ensure that its performance aligns with the educational standards and requirements of the Vietnamese curriculum.
#### III-C8 Geography
This section presents a comparison between the performance of ChatGPT and Vietnamese students in the geography subject from 2019 to 2022, as displayed in Figure 11. The results reveal that ChatGPT scores in geography ranged from 5.0 to 7.5. Interestingly, in 2021, ChatGPT score surpassed that of most Vietnamese students, yet, in general, ChatGPT achieved lower results than the majority of Vietnamese students. One possible explanation for ChatGPT lower performance is its inability to analyze images and charts. As ChatGPT is based on GPT-3.5, it lacks the capability to read and analyze images or charts, rendering it unable to answer questions related to the geographical Atlas of Vietnam, which accounts for approximately 50% of questions in the geography exam. To address this issue, we plan to evaluate the performance of GPT-4 to examine its improvement. Moreover, further investigation is necessary to identify the specific factors contributing to the differences in performance between ChatGPT and Vietnamese students in geography. Overall, our findings indicate that while ChatGPT has shown potential in some subjects, it still requires improvement to match the performance of human learners in certain areas. The results of this study highlight the need for future research to enhance the abilities of language models in analyzing and interpreting visual information.
#### III-C9 Civic Education
This section presents an investigation into ChatGPT performance in the civic education subject in comparison to Vietnamese students. The scores of ChatGPT and Vietnamese students from 2019-2022 are depicted in Figure 12. Our results show that ChatGPT scores in the civic education subject ranged from 6.0 to 8.25, but consistently achieved lower results compared to the majority of Vietnamese students. Although in 2022, the gap between ChatGPT scores and Vietnamese students' scores was less significant than in previous years, ChatGPT still scored lower than most Vietnamese students.
One possible explanation for ChatGPT's lower performance is its lack of the analytical skills and higher-order thinking required for the problem-solving section of the civic education exam, which models legal situations and how the law applies to them. While ChatGPT answered theoretical questions related to the law relatively well, it struggled with this type of question.
Our findings suggest that while ChatGPT has demonstrated impressive language-related abilities in various tasks, its performance in subject-specific tests still has room for improvement. Further studies are needed to identify the factors that contribute to ChatGPT performance in subject-specific tests and to develop strategies for enhancing its performance in
Figure 11: Comparison in geography.
Figure 10: Comparison in history.
Figure 9: Comparison in biology.
these contexts. These findings are important because they highlight the need to develop ChatGPT's analytical and problem-solving skills in the context of subject-specific tests. They also underscore the need for future developments in AI language models to improve their performance in tasks that require higher-order thinking skills.
### _ChatGPT passes the Vietnamese National High School Graduation Examination_
The Ministry of Education and Training's scoring formula for graduation consideration is:
\[\mathrm{GAS}=\frac{\left(\frac{\mathrm{TFS}+\mathrm{TIP}}{4}\right)\times 7+\mathrm{AV12}\times 3}{10}+\mathrm{PP} \tag{3}\]
where GAS is the General Admission Score; \(\mathrm{TFS}=\mathrm{M}+\mathrm{L}+\mathrm{E}+\mathrm{AC}\) is the Total of the Four Subjects, with M, L, E, and AC denoting the Mathematics, Literature, English, and Average Combination scores, respectively; TIP is Total Incentive Points; AV12 is the grade point average for the whole of grade 12; and PP is Priority Points. The Average Combination AC has two subcategories: the average of the natural combination, \(\mathrm{AC_{N}}=\frac{1}{3}(\mathrm{Phy}+\mathrm{Che}+\mathrm{Bio})\), and the average of the social combination, \(\mathrm{AC_{S}}=\frac{1}{3}(\mathrm{His}+\mathrm{Geo}+\mathrm{Civ})\). When TIP and PP are not taken into account, and assuming \(\mathrm{AV12}=\frac{1}{4}(\mathrm{M}+\mathrm{L}+\mathrm{E}+\mathrm{AC})\), we obtain
\[\mathrm{GAS}=\frac{1}{4}(\mathrm{M}+\mathrm{L}+\mathrm{E}+\mathrm{AC}) \tag{4}\]
To evaluate ChatGPT's performance in terms of graduation, we utilized the scoring formula of the Ministry of Education and Training and computed the combined scores for the natural and social classes. Table IV presents the natural and social combination scores for each year from 2019 to 2023. The results show that ChatGPT's performance in completing the examination was sufficient to pass in all years, with combined scores ranging from 6.35 in 2019 to 6.94 in 2020.
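As a quick numerical check of Eq. (4), the following sketch recomputes \(\mathrm{AC_{N}}\) and \(\mathrm{GAS_{N}}\) for the 2019 row of Table IV under the simplifying assumptions stated above (TIP and PP set to zero, and AV12 equal to the average of the four exam scores); the helper names are ours.

```python
# Recompute the 2019 natural-combination scores of Table IV from Eq. (4).

def average_combination(subject_scores):
    """AC: mean of the three combination subjects (natural or social)."""
    return sum(subject_scores) / 3

def general_admission_score(math, lit, eng, ac):
    """GAS = (M + L + E + AC) / 4, the simplified form of Eq. (4)."""
    return (math + lit + eng + ac) / 4

ac_n_2019 = average_combination([6.0, 4.0, 6.0])          # Phy, Che, Bio
gas_n_2019 = general_admission_score(5.2, 7.5, 7.6, ac_n_2019)
print(round(ac_n_2019, 2), round(gas_n_2019, 2))           # 5.33 6.41, as in Table IV
```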
The graph presented in Figure 13 displays the natural and social combination scores achieved by ChatGPT from 2019 to 2023. Our findings show that ChatGPT consistently performs well in both natural and social combinations, with no significant difference between the scores. This observation is noteworthy, as the VNHSGE exam includes a considerable number of theoretical questions that emphasize knowledge and comprehension. ChatGPT exceptional accuracy in answering these types of questions implies that the model holds considerable promise for passing the VNHSGE exam in the future. The model's consistent and stable performance over the years in both natural and social combinations highlight its versatility and training capabilities in various subjects. Moreover, the consistent performance of ChatGPT in both natural and social combinations may have significant implications for students preparing for the VNHSGE exam. ChatGPT can serve as an effective tool for students to enhance their knowledge and comprehension levels in both fields. While ChatGPT performance in subject-specific tests requires further improvements, its potential to answer general knowledge questions related to both fields with high accuracy could be beneficial to students in their exam preparations. Further studies are necessary to examine the factors contributing to ChatGPT consistent performance in both natural and social combinations. Overall, the study demonstrates that ChatGPT has shown significant promise as a valuable tool for students preparing for the VNHSGE exam.
## IV Discussion
The findings of this study demonstrate that ChatGPT has great potential for use in various educational settings. The ability of ChatGPT to achieve an average score of 6-7 on the VNHSGE exam is an encouraging indication of its capabilities. Our research reveals that ChatGPT performed exceedingly well
| Year | Math | Lit | Eng | \(\mathrm{AC_{N}}\) | \(\mathrm{AC_{S}}\) | \(\mathrm{GAS_{N}}\) | \(\mathrm{GAS_{S}}\) |
|------|------|-----|-----|------|------|-------|-------|
| 2019 | 5.20 | 7.50 | 7.60 | 5.33 | 5.08 | 6.41 | 6.35 |
| 2020 | 6.60 | 6.88 | 8.60 | 5.50 | 5.67 | 6.89 | 6.94 |
| 2021 | 6.00 | 7.50 | 7.60 | 5.83 | 6.42 | 6.73 | 6.88 |
| 2022 | 6.20 | 5.63 | 8.00 | 5.67 | 6.83 | 6.37 | 6.66 |
| 2023 | 5.40 | 6.48 | 7.80 | 5.50 | 7.42 | 6.30 | 6.77 |

Table IV: Subject scores and combination scores for natural and social classes
Figure 12: Comparison in civic education.
Figure 13: Graph of combination scores for natural and social classes from 2019 to 2023.
in mathematics, English, physics, chemistry, biology, history, geography, civic education, and literature, achieving scores similar to those attained by high school students. These results highlight the potential of ChatGPT to provide effective support for learners in different fields.
However, it is crucial to consider the types of examination questions and the extent of ChatGPT training in answering those specific types of questions. The VNHSGE exam evaluates students' abilities to apply critical thinking, analytical, and problem-solving skills across a diverse range of subjects, some of which require memorization of facts and formulas. While ChatGPT shows promise in its capacity to manage a range of subjects, further research is needed to evaluate its performance on more complex exam questions and its potential to support learners in various contexts.
The results of this study offer several implications for educators and researchers. First, ChatGPT may be utilized as an effective tool to support learners in various subjects. This AI model may serve as a reliable source of information, answering questions, and providing feedback. Second, educators may use ChatGPT as a complement to traditional teaching methods, providing students with personalized learning experiences. Third, researchers may employ ChatGPT to explore the nature of machine learning in education and develop new approaches for AI-based education.
Overall, ChatGPT potential for use in education is evident. With the ability to pass the VNHSGE exam and achieve comparable scores to those of high school students across a range of subjects, ChatGPT may serve as an essential tool for learners, educators, and researchers. However, further research is needed to evaluate the model's performance in more complex exam questions and its potential to support learners in diverse contexts.
## V Conclusion
In conclusion, this research article presents the use of ChatGPT for completing the VNHSGE exam, and the results showed that ChatGPT was able to pass the exam. This achievement demonstrates the potential of ChatGPT to automate the exam-taking process and provide a new approach for students to prepare for exams. However, further research is needed to explore the effectiveness of ChatGPT in different educational settings and to identify potential challenges and limitations.
Despite the need for further research, the possibilities for ChatGPT to transform the education landscape are promising. The application of AI in education is a field that is rapidly growing. With continued advancements in AI technology, we can expect to see more opportunities for ChatGPT and similar tools to be used in various educational contexts.
In summary, the successful completion of the VNHSGE exam by ChatGPT is a significant milestone in the development of AI in education. It highlights the potential of AI tools to provide new approaches for learning and assessment, and the need for further research to fully explore their capabilities and limitations.
|
2301.02082 | Linking number of monotonic cycles in random book embeddings of complete
graphs | A book embedding of a complete graph is a spatial embedding whose planar
projection has the vertices located along a circle, consecutive vertices are
connected by arcs of the circle, and the projections of the remaining
"interior" edges in the graph are straight line segments between the points on
the circle representing the appropriate vertices. A random embedding of a
complete graph can be generated by randomly assigning relative heights to these
interior edges. We study a family of two-component links that arise as the
realizations of pairs of disjoint cycles in these random embeddings of graphs.
In particular, we show that the distribution of linking numbers can be
described in terms of Eulerian numbers. Consequently, the mean of the squared
linking number over all random embeddings is $\frac{i}{6}$, where $i$ is the
number of interior edges in the cycles. We also show that the mean of the
squared linking number over all pairs of $n$-cycles in $K_{2n}$ grows linearly
in $n$. | Yasmin Aguillon, Eric Burkholder, Xingyu Cheng, Spencer Eddins, Emma Harrell, Kenji Kozai, Elijah Leake, Pedro Morales | 2023-01-05T14:40:31Z | http://arxiv.org/abs/2301.02082v1 | # Linking number of monotonic cycles in random book embeddings of complete graphs
###### Abstract.
A book embedding of a complete graph is a spatial embedding whose planar projection has the vertices located along a circle, consecutive vertices are connected by arcs of the circle, and the projections of the remaining "interior" edges in the graph are straight line segments between the points on the circle representing the appropriate vertices. A random embedding of a complete graph can be generated by randomly assigning relative heights to these interior edges. We study a family of two-component links that arise as the realizations of pairs of disjoint cycles in these random embeddings of graphs. In particular, we show that the distribution of linking numbers can be described in terms of Eulerian numbers. Consequently, the mean of the squared linking number over all random embeddings is \(\frac{i}{6}\), where \(i\) is the number of interior edges in the cycles. We also show that the mean of the squared linking number over all pairs of \(n\)-cycles in \(K_{2n}\) grows linearly in \(n\).
Key words and phrases: book embeddings of graphs, linking in spatial graphs, Eulerian numbers.
2020 Mathematics Subject Classification: 57M15, 57K10, 05C10.
The authors were supported in part by NSF Grant DMS-1852132.
## 1. Introduction
Random knot models have been used to study the spatial configurations of polymers such as DNA, whose length is 1,000 to 500,000 times the length of the diameter of the nucleus [12]. With such a long molecule confined to a compact space, DNA can become knotted, tangled, or linked. In order for cell replication to occur, DNA must unknot itself with the aid of a special enzyme known as topoisomerase that cuts through the knotted parts of the DNA molecule and reconnects any loose ends, and problems can arise during cellular replication if topoisomerase enzymes do not work properly [14]. By comparing the topological invariants of DNA before and after enzymes act on it, we can learn more about the mechanisms of these enzymes and their effects on the structure of DNA [15]. Because many polymers are too small to image in detail, several authors have used mathematical models to study configurations of long polymer chains by introducing versions of uniform random distributions of polygonal chains in a cube [1, 2, 6, 7, 18, 20, 22]. Even-Zohar et al. introduced a random model based on petal diagrams of knots and links where the distribution of links can be studied in terms of random permutations, achieving an explicit description of the asymptotic distribution for the linking number [11].
Random graph embeddings can be thought of as generalizations of random knot embeddings to molecules with non-linear structures. In [13], a random graph embedding model generalizing the uniform random distributions of polygonal chains in a cube was used to study the behavior of linking numbers and writhe. In this paper, we study an alternate random embedding model similar to the Petaluma model in [11] in that the distribution of random embeddings can be described in terms of a random choice of permutations. This model is based on book embeddings of the complete graph \(K_{n}\). Rowland has classified all possible links that could appear in book embeddings of \(K_{6}\)[21], and we consider the more general case of links in \(K_{2n}\). In particular, we study a special class of two-component links that appear in book embeddings, namely unions of disjoint monotonic cycles, and we describe the behavior of the linking number in terms of the combinatorial properties of the length of the cycles and the number of interior edges in the book embedding. We show that the mean value of the squared linking number grows linearly with respect to both quantities in Theorem 10 and Theorem 11.
## 2. Random book embeddings
Given a graph \(G\), Atneosen [3] and Persinger [19] introduced the notion of a _book embedding_ of \(G\), which is a particular class of spatial embedding of a graph in which the vertices of the graph are placed along a fixed line in \(\mathbb{R}^{3}\) called the _spine_ of the book. The edges of \(G\) are embedded on half-planes, called _sheets_, which are bounded by the spine. Classically, the edges are drawn as disjoint circular arcs on their respective sheets. Instead, we will consider the _circular diagram_ for a book embedding of \(K_{n}\) introduced by Endo and Otsuki in which the spine is a circle consisting of the vertices and edges between consecutive vertices, the pages are discs bounded by the spine, and the remaining edges are straight lines between vertices of a given page [8, 9].
We focus on book embeddings of the complete graph \(K_{2n}\) (or sometimes \(K_{m+n}\)) on \(2n\) vertices. In our model, the \(2n\) vertices will be labeled as \(v_{1},\ldots,v_{2n}\) in clockwise order around the circular spine. The perimeter of the circle will form the edges between consecutive vertices \(v_{j}\) and \(v_{j+1}\) for all \(j\in\{1,2,\cdots,2n\}\), where the indices are taken modulo \(2n\). We denote these edges as _exterior edges_. The remaining \(\binom{2n}{2}-2n\) edges are _interior edges_, and a book embedding is determined by dividing the interior edges among a finite number of sheets so that no two edges within a page intersect.
In order to generate a _random book embedding_, we embed each interior edge on its own separate sheet. The ordering of sheets can then be determined by a random permutation \(\sigma\) of \(\{1,\ldots,\binom{2n}{2}-2n\}\) with the uniform distribution. We can think of the permutation as giving the height order of the sheets, so that edge \(e_{i}\) is in a sheet above edge \(e_{j}\) if \(\sigma(i)>\sigma(j)\). Note that a random book embedding will typically be equivalent to a book embedding with far fewer sheets. When edges in two adjacent sheets do not cross in a circular diagram, the two sheets can be combined to a single sheet in which the two edges are embedded without intersecting, obtaining an equivalent embedding with one fewer sheet.
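As an illustration of this model, the following sketch assigns a uniformly random height order to the interior edges of \(K_{2n}\); the function names and edge encoding are our own choices, and the snippet is only meant to mirror the construction described above.

```python
# Draw a random book embedding of K_n (circular diagram): each interior edge
# gets its own sheet, and a uniform random permutation fixes the sheet heights.
import random
from itertools import combinations

def interior_edges(num_vertices: int):
    """All edges of K_n that do not join cyclically consecutive vertices."""
    edges = []
    for u, v in combinations(range(1, num_vertices + 1), 2):
        consecutive = (v - u == 1) or (u == 1 and v == num_vertices)
        if not consecutive:
            edges.append((u, v))
    return edges

def random_sheet_order(num_vertices: int):
    """Map each interior edge to a random height; larger value = higher sheet."""
    edges = interior_edges(num_vertices)
    heights = random.sample(range(len(edges)), len(edges))
    return dict(zip(edges, heights))

print(random_sheet_order(6))  # K_6 has 15 - 6 = 9 interior edges
```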
## 3. Preliminary definitions
The image of two disjoint cycles in a graph \(G\) under an embedding forms a two-component link. We can compute the linking number of any oriented link \(L\) in \(\mathbb{R}^{3}\) by considering the signed crossings of the two components in a planar projection with the rule indicated in Figure 1. We will denote half of the sum of the signed crossings as the linking number \(\ell(L)\) of a link \(L\). This gives a quantitative measure of how intertwined the two components are. In an abuse of notation, given two
oriented cycles \(P\) and \(Q\) of a graph \(G\) and a fixed embedding, we will let \(\ell(P\cup Q)\) mean the linking number of the image of the two cycles under the embedding.
We introduce a special class of links in book embeddings of a graph.
**Definition 1**.: _Let \(K_{2n}\) be a complete graph with vertices enumerated as \(\{v_{1},\ldots,v_{2n}\}\) in cyclic order along the spine of a book embedding of \(K_{2n}\). An oriented cycle with consecutive edges \(\{\overrightarrow{v_{i_{1}}v_{i_{2}}},\overrightarrow{v_{i_{2}}v_{i_{3}}}, \ldots,\overrightarrow{v_{i_{k-1}}v_{i_{k}}},\overrightarrow{v_{i_{k}}v_{i_{1} }}\}\) is_
1. _strictly increasing if there is a cyclic permutation_ \(i_{1}^{\prime},\ldots,i_{k}^{\prime}\) _of_ \(i_{1},\ldots,i_{k}\) _such that_ \(i_{j}^{\prime}<i_{j+1}^{\prime}\) _for all_ \(j\in\{1,2,\ldots,k-1\}\)_._
2. _strictly decreasing if there is a cyclic permutation_ \(i_{1}^{\prime},\ldots,i_{k}^{\prime}\) _of_ \(i_{1},\ldots,i_{k}\) _such that_ \(i_{j}^{\prime}>i_{j+1}^{\prime}\) _for all_ \(j\in\{1,2,\ldots,k-1\}\)_._
3. _monotonic if the cycle is either strictly increasing or strictly decreasing._
The 4-cycle on the left in Figure 2 is monotonic because beginning with the vertex \(v_{1}\), the vertices in the cycle in order are \(v_{1},v_{2},v_{3},v_{4}\), which has strictly increasing indices. However, the order of the vertices in the 4-cycle on the right is \(v_{1},v_{3},v_{2},v_{4}\). The indices are not monotonic even up to cyclic permutation, so this cycle is not monotonic.
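A small helper reflecting Definition 1 (the function name and the list encoding of a cycle are our own choices) can be used to check the two cycles of Figure 2:

```python
# A cycle, given by its vertex indices in traversal order, is monotonic iff
# some cyclic rotation of the index list is strictly increasing or decreasing.
def is_monotonic(cycle):
    k = len(cycle)
    for shift in range(k):
        rotated = cycle[shift:] + cycle[:shift]
        if all(rotated[j] < rotated[j + 1] for j in range(k - 1)):
            return True
        if all(rotated[j] > rotated[j + 1] for j in range(k - 1)):
            return True
    return False

print(is_monotonic([1, 2, 3, 4]))  # True  (left cycle in Figure 2)
print(is_monotonic([1, 3, 2, 4]))  # False (right cycle in Figure 2)
```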
Finally, we also introduce the Eulerian numbers, which arise in combinatorics as coefficients of Eulerian polynomials [4, 10, 16].
**Definition 2**.: _Let \(\sigma\in S_{n}\) be a permutation on \(\{1,\ldots,n\}\). An ascent of the permutation is a value \(1\leq k\leq n-1\) such that \(\sigma(k)<\sigma(k+1)\)._
**Definition 3**.: _The Eulerian number \(A(n,m)\) is the number of permutations \(\sigma\in S_{n}\) that have exactly \(m\) ascents._
As an example, we have the following exhaustive list of permutations in \(S_{3}\):
(1,2,3); (1,3,2); (2,1,3); (2,3,1); (3,1,2); (3,2,1).
Figure 1. A positive crossing (left) and a negative crossing (right)
Figure 2. Monotonic (left) and non-monotonic (right) cycles
Among these permutations, (1,2,3) has two ascents, (1,3,2), (2,1,3), (2,3,1), and (3,1,2) each have one ascent, and (3,2,1) has no ascents. Hence, \(A(3,2)=1\), \(A(3,1)=4\), and \(A(3,0)=1\). Note that \(A(n,n)=0\) for all \(n>0\). Additionally, there is always exactly one permutation in \(S_{n}\) with no ascents and exactly one permutation in \(S_{n}\) with \(n-1\) ascents, which are (\(n\),\(n-1\),...,1) and (1,2,...,\(n\)), respectively. Hence, \(A(n,0)=A(n,n-1)=1\).
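These small values can be verified by brute force. The sketch below counts ascents over all permutations and recovers \(A(3,0)=1\), \(A(3,1)=4\), \(A(3,2)=1\), together with the identity \(\sum_{m}A(n,m)=n!\) recalled at the end of this section.

```python
# Brute-force Eulerian numbers from Definitions 2-3: A(n, m) counts the
# permutations of {1, ..., n} with exactly m ascents.
from itertools import permutations
from math import factorial

def eulerian(n: int, m: int) -> int:
    count = 0
    for sigma in permutations(range(1, n + 1)):
        ascents = sum(1 for k in range(n - 1) if sigma[k] < sigma[k + 1])
        if ascents == m:
            count += 1
    return count

print([eulerian(3, m) for m in range(3)])                     # [1, 4, 1]
print(sum(eulerian(5, m) for m in range(5)) == factorial(5))  # True
```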
Eulerian numbers are coefficients of Eulerian polynomials,
\[A_{n}(t)=\sum_{m=0}^{n}A(n,m)t^{m},\]
where \(A_{n}(t)\) is recursively defined by the relations,
\[A_{0}(t) =1,\] \[A_{n}(t) =t(1-t)A_{n-1}^{\prime}(t)+A_{n-1}(t)(1+(n-1)t),\qquad\quad\text{ for }n>0.\]
It is also known that
\[A(n,m)=\sum_{k=0}^{m+1}(-1)^{k}\binom{n+1}{k}(m+1-k)^{n},\]
and the exponential generating function for the Eulerian numbers is
\[\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}A(n,m)t^{m}\frac{x^{n}}{n!}=\frac{t-1}{t -e^{(t-1)x}}.\]
From the definition, it is also evident that for a fixed \(n\), the sum of Eulerian numbers \(A(n,m)\) over all possible values of \(m\) gives the number of all permutations, \(|S_{n}|\), so that
\[\sum_{m=0}^{n}A(n,m)=n!.\]
## 4. Linking numbers of disjoint monotonic cycles
In this paper, we will consider the distribution of linking numbers of two disjoint monotonic cycles in random book embeddings. First, note the following fact about the number of interior edges of two monotonic cycles in a book embedding.
**Lemma 4**.: _Two disjoint monotonic cycles of length \(m\) and \(n\) in a book embedding of \(K_{m+n}\) must have an equal number of interior edges, which is also equal to half the number of crossings between the two cycles._
Proof.: Let \(P\) and \(Q\) be an \(m\)-cycle and \(n\)-cycle in a book embedding, respectively, and suppose that \(P\) has \(i\) interior edges. Let \(\overrightarrow{v_{j}v_{k}}\) be an interior edge of \(P\). Then \(v_{k-1}\) must be a vertex in \(Q\), and there is a smallest \(h>k\) such that \(v_{h}\) is a vertex in \(Q\). Then \(\overrightarrow{v_{k-1}v_{h}}\) is an edge in \(Q\) which crosses the edge \(\overrightarrow{v_{j}v_{k}}\) of \(P\). Similarly, there is an edge \(\overrightarrow{v_{s}v_{j+1}}\) in \(Q\) that crosses \(\overrightarrow{v_{j}v_{k}}\), and no other edge in \(Q\) can cross \(\overrightarrow{v_{j}v_{k}}\). Hence, the number of crossings between \(P\) and \(Q\) is twice the number of interior edges in \(P\). By symmetry, this is also equal to twice the number of interior edges in \(Q\).
Lemma 4 implies that if \(P\) and \(Q\) are both \(n\)-cycles and \(P\) consists of \(n\) interior edges, then all edges in \(Q\) must also be interior. We now relate the number of disjoint cycles with fixed linking number to the Eulerian numbers \(A(m,n)\).
**Theorem 5**.: _Suppose \(P\) and \(Q\) are both strictly increasing \(n\)-cycles in \(K_{2n}\) so that \(P\) and \(Q\) both consist of \(n\) interior edges. The proportion of random book embeddings of \(K_{2n}\) for which \(P\) and \(Q\) have linking number equal to \(\ell\) is_
\[\frac{A(2n-1,n+\ell-1)}{(2n-1)!}.\]
Proof.: Let \(P\) and \(Q\) be two strictly increasing cycles, each with \(n\) interior edges. Consider a permutation of all of the interior edges of \(K_{2n}\), which determines the ordering of their respective sheets in a book embedding. As we are only concerned with the linking number \(\ell(P\cup Q)\), we only need the relative orderings of the edges of \(P\) and \(Q\) in order to resolve the signs of any crossings between interior edges of \(P\) and \(Q\). By designating these edges as \(e_{1},\ldots,e_{2n}\), we may consider the permutation \(\sigma\) as a permutation of \(\{1,\ldots,2n\}\).
Without loss of generality, we label the topmost edge of the permutation of interior edges as edge \(e_{2n}\). Since the edges in the cycle are directed so that the cycle is strictly increasing, we may begin numbering the vertices of \(K_{2n}\) so that the initial vertex of \(e_{2n}\) is vertex \(v_{2n}\). We then number the vertices in cyclic order, so that the vertex in \(K_{2n}\) that lies next in the clockwise direction from \(v_{2n}\) is \(v_{1}\), the following vertex (which is the terminal vertex of \(e_{2n}\)) is \(v_{2}\), and so on. The edge indices will then also be identified with their initial vertex, so that the edge \(\overrightarrow{v_{1}v_{3}}\) is \(e_{1}\), the edge \(\overrightarrow{v_{2}v_{4}}\) is \(e_{2}\), and so on, until the edge \(\overrightarrow{v_{2n-1}v_{1}}\) is labeled \(e_{2n-1}\) and edge \(\overrightarrow{v_{2n}v_{2}}\) is labeled \(e_{2n}\). Under this labeled scheme, edge \(e_{j}\) will have crossings with edges \(e_{j-1}\) and \(e_{j+1}\), where indices are taken modulo \(2n\).
The bijective function \(\sigma\) from \(\{1,\ldots,2n\}\) to itself determines the relative heights of the edges so that whenever \(\sigma(j)>\sigma(k)\), then \(e_{j}\) is in a sheet above the sheet containing \(e_{k}\), and whenever \(\sigma(j)<\sigma(k)\), \(e_{j}\) is embedded in a sheet below the sheet containing \(e_{k}\). Since both cycles are strictly increasing, the sign of the crossing between edge \(e_{j}\) and edge \(e_{j+1}\) can be determined by \(\sigma(j)\) and \(\sigma(j+1)\). When \(\sigma(j)>\sigma(j+1)\), the sign of the crossing is negative. When \(\sigma(j)<\sigma(j+1)\), the sign of the crossing is positive, as seen in Figure 3. Therefore, the linking number is half the quantity of the number of times \(\sigma(j)<\sigma(j+1)\) minus the number of times \(\sigma(j)>\sigma(j+1)\).
By construction, \(\sigma(2n)=2n\), so that \(\sigma(2n-1)<\sigma(2n)\) and \(\sigma(2n)>\sigma(1)\). Since this results in exactly one positive crossing and one negative crossing, crossings involving the edge \(e_{2n}\) have zero net effect on the linking number. We may ignore edge \(2n\) in the permutation and consider only a further restriction of the permutation to a permutation \(\sigma^{\prime}\) of \(\{1,\ldots,2n-1\}\). Topologically, this can be thought of as applying a Reidemeister Move 2, sliding the topmost edge away to the exterior of the binding so that the edge \(e_{2n}\) no longer has any crossings with edges \(e_{2n-1}\) and \(e_{1}\)
Notice that \(\sigma^{\prime}(j)<\sigma^{\prime}(j+1)\) is the same as an ascent in \(\sigma^{\prime}\) and \(\sigma^{\prime}(j)>\sigma^{\prime}(j+1)\) is the same as a descent in \(\sigma^{\prime}\). So the linking number of \(P\) and \(Q\) depends on the number of ascents of the permutation \(\sigma^{\prime}\). If \(\sigma^{\prime}\) has \(m\) ascents, it has \(2n-2-m\) descents, so that the linking number is \(\frac{1}{2}[m-(2n-2-m)]\). Setting this equal to \(\ell\), then \(m=n+\ell-1\). Thus, we conclude that the number of permutations in \(S_{2n-1}\) that lead to a linking number of \(\ell\) is \(A(2n-1,n+\ell-1)\). For each permutation \(\sigma^{\prime}\in S_{2n-1}\), there are an equal number of permutations of the edges of \(K_{2n}\) that restrict to \(\sigma^{\prime}\), so that the proportion of random book embeddings in which \(P\) and
\(Q\) have linking number \(\ell\) is
\[\frac{A(2n-1,n+\ell-1)}{(2n-1)!}.\]
An example of the connection between ascents, descents, crossing signs, and linking number is shown in Figure 4 and Table 1. Observe in Table 1 that \(\sigma(5)<\sigma(6)\). Thus \(j=5\) would be an ascent. However, as \(\sigma(6)>\sigma(1)\), the signed crossing between \(e_{5}\) and \(e_{6}\) is canceled out with the signed crossing between \(e_{6}\) and \(e_{1}\). Considering only \(j=1\),2, 3, 4 we are left with four descents, which lead to four negative crossings and a linking number of \(-2\).
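Theorem 5 can also be checked computationally. The following sketch (our own illustration, not code from the paper) enumerates all height orders of the six interior edges for \(n=3\) and tallies the resulting linking numbers; the observed proportions are \(1/120\), \(26/120\), \(66/120\), \(26/120\), \(1/120\) for \(\ell=-2,\ldots,2\), matching \(A(5,\ell+2)/5!\).

```python
# Exhaustive check of Theorem 5 for n = 3: two strictly increasing 3-cycles in
# K_6 with all six interior edges, over every possible sheet-height order.
from itertools import permutations
from collections import Counter

def linking_number(heights):
    """Half the sum of signed crossings: edge j crosses edge j+1 (cyclically),
    and the sign is determined by which of the two edges lies higher."""
    m = len(heights)
    signed = sum(1 if heights[j] < heights[(j + 1) % m] else -1 for j in range(m))
    return signed // 2

n = 3
tally = Counter(linking_number(sigma) for sigma in permutations(range(2 * n)))
total = sum(tally.values())
for link in sorted(tally):
    print(link, tally[link], tally[link] / total)
```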
Figure 4. Solomon’s link as a union of two monotonic 3-cycles in \(K_{6}\).
Figure 3. A negative crossing (left) and a positive crossing (right) in terms of \(\sigma(j)\) and \(\sigma(j+1)\)
We remark that the results from Theorem 5 extend to the more general case of two monotonic cycles of length \(m\) and \(n\) with \(i\) interior edges each. The sign of the linking number will flip whenever we reverse the orientation of one of the cycles, so if we have two monotonic cycles \(P\) and \(Q\) of length \(n\) which are not necessarily strictly increasing, this would result in replacing \(\ell\) with \(-\ell\) in the result of Theorem 5. However, the Eulerian numbers have the symmetry property that \(A(n,m)=A(n,n-1-m)\), so that \(A(2n-1,n-\ell-1)=A(2n-1,n+\ell-1)\). This results in an identical proportion of book embeddings in which the cycles have linking number \(\ell\), thus whether the cycles are strictly increasing or strictly decreasing has no net effect on the distribution of linking numbers as long as they are both monotonic.
In the case where \(P\) and \(Q\) have lengths \(m\) and \(n\), respectively, Lemma 4 states that both \(P\) and \(Q\) have the same number of interior edges, which we will denote by \(i\). Contracting \(K_{m+n}\) along all of the exterior edges in \(P\) and \(Q\) does not alter the topological type of the link \(P\cup Q\), and the proportion of random book embeddings of \(K_{m+n}\) for which the linking number of \(P\cup Q\) is equal to \(\ell\) will be the same as the proportion of book embeddings of the contracted graph \(K^{\prime}\) in which the linking number of \(P\cup Q\) is equal to \(\ell\) by a similar argument as in Theorem 5. Hence, we arrive at the following when \(i\geq 3\).
**Corollary 6**.: _Let \(P\) and \(Q\) be monotonic cycles of length \(m\) and \(n\), respectively, in \(K_{m+n}\). The proportion of random book embeddings of \(K_{m+n}\) in which the linking number of \(P\cup Q\) is equal to \(\ell\) is_
\[\frac{A(2i-1,i+\ell-1)}{(2i-1)!},\]
_where \(i\geq 2\) is the number of interior edges of both \(P\) and \(Q\)._
The exceptional case when \(i=2\) can be verified to follow the same formula as in Corollary 6 by contracting to two \(3\)-cycles with two interior edges and one exterior edge each, then applying the argument in Theorem 5 to the interior edges only. Table 2 gives the values of \(A(2i-1,i+\ell-1)\) for \(1\leq i\leq 5\). The proportion of random book embeddings for which two cycles with \(i\) interior edges have a linking number of \(\ell\) can be obtained by dividing the entries by \((2i-1)!\).
The following theorem describes the number of disjoint \(m\)- and \(n\)-cycles with a given number of interior edges. In combination with the previous corollary, this will allow for calculation of the frequency with which a random \(m\)-cycle \(P\) and disjoint \(n\)-cycle \(Q\) has linking number \(\ell\) in a random book embedding of \(K_{m+n}\).
\begin{table}
\begin{tabular}{c|c|c|c} \(j\) & \(\sigma(j)\) & crossing of \(e_{j}\) and \(e_{j+1}\) & ascent or descent \\ \hline
1 & 5 & \(-\) & descent \\
2 & 4 & \(-\) & descent \\
3 & 3 & \(-\) & descent \\
4 & 2 & \(-\) & descent \\
5 & 1 & \(+\) & ascent \\
6 & 6 & \(-\) & \\ \end{tabular}
\end{table}
Table 1. Signed crossings and ascents/descents in height function \(\sigma\) for the example in Figure 4.
**Theorem 7**.: _Let \(m,n\geq 3\). Then the number of disjoint (undirected) monotonic cycles \(P\) and \(Q\) in a book embedding of \(K_{m+n}\) so that \(P\) is an \(m\)-cycle and \(Q\) is an \(n\)-cycle, each with \(2\leq i\leq\min\{m,n\}\) interior edges, is_
\[\binom{m}{m-i}\binom{n-1}{n-i}+\binom{n}{n-i}\binom{m-1}{m-i},\]
_if \(m\neq n\). In the case that \(m=n\), the number of disjoint cycles is_
\[\binom{n}{n-i}\binom{n-1}{n-i}.\]
Proof.: Fix a labeling of the vertices of \(K_{m+n}\) in cyclic order \(v_{1},\ldots,v_{m+n}\). Suppose \(P\) is an \(m\)-cycle and \(Q\) is an \(n\)-cycle.
First, suppose \(P\) contains \(v_{1}\). If \(P\) has \(i\) interior edges, there are \(\binom{m}{i}\) ways to choose which of the \(m\) edges in \(P\) are interior edges. For each of the \(i\) chosen edges in \(P\), in order for it to be interior, there must be a vertex in the cycle \(Q\) lying between the initial and terminal vertices of the edge in \(P\). Moreover, for each of the external edges in the cycle \(P\), there cannot be any vertices of \(Q\) lying between the initial and terminal vertices. This creates \(i\) areas in which the vertices of \(Q\) must be located, one between the initial and terminal vertices of each interior edge in \(P\), with each containing at least one vertex. A stars and bars argument, in which there are \(n-i\) vertices of \(Q\) to allocate after placing one vertex of \(Q\) into each of the \(i\) spots, and \(i-1\) bars to separate the \(i\) spots, leads to \(\binom{n-1}{n-i}\) ways of choosing the vertices of \(Q\). This results in \(\binom{m}{m-i}\binom{n-1}{n-i}\) choices of \(P\) and \(Q\) so that \(P\) contains \(v_{1}\) and both cycles have \(i\) interior edges.
By an analogous argument, there are \(\binom{n}{n-i}\binom{m-1}{m-i}\) ways to choose \(P\) and \(Q\) so that \(Q\) contains \(v_{1}\), completing the proof when \(m\neq n\).
If \(m=n\), there is no distinction between the cases when \(v_{1}\) is in \(P\) and \(v_{1}\) is in \(Q\).
The number of pairs of disjoint \(n\)-cycles in \(K_{2n}\), each with \(i\) interior edges, is tabulated in Table 3 for \(3\leq n\leq 10\).
The values \(\binom{n}{n-i}\binom{n-1}{n-i}\) appear as OEIS sequence A103371 [17] up to a shift in indices due to the cyclic symmetry in the circular diagrams of book embeddings. The sum over all \(i\) gives the number of ways to choose two disjoint monotonic \(n\)-cycles in \(K_{2n}\). An undirected monotonic cycle is determined by the vertices in the cycles, so this amounts to choosing two disjoint subsets of \(n\) vertices from the \(2n\) vertices in \(K_{2n}\). The number of ways in which this choice can be made is given by \(\binom{2n-1}{n-1}=\binom{2n-1}{n}\).
Combining Theorem 7 with Theorem 5 yields the following corollary.
**Corollary 8**.: _The proportion of links \(P\cup Q\) with linking number \(\ell\) among pairs of \(n\)-cycles \(P\) and \(Q\) in a random book embedding of \(K_{2n}\) is_
\[\frac{\sum_{i=1}^{n}\frac{A(2i-1,\ell+i-1)}{(2i-1)!}{n\choose n-i}{n-1\choose n-i }}{{2n-1\choose n-1}}.\]
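Corollary 8 can be evaluated directly. The sketch below is an illustrative check only: it reproduces a row of Table 3 from the binomial formula in Theorem 7, confirms that the counts sum to \(\binom{2n-1}{n-1}\), and evaluates the distribution of Corollary 8.

```python
from math import comb, factorial

def eulerian(n, m):
    if n == 0:
        return 1 if m == 0 else 0
    if m < 0 or m > n - 1:
        return 0
    return (m + 1) * eulerian(n - 1, m) + (n - m) * eulerian(n - 1, m - 1)

def pair_count(n, i):
    """Theorem 7 with m = n: pairs of disjoint monotonic n-cycles in K_{2n}
    in which each cycle has i interior edges."""
    return comb(n, n - i) * comb(n - 1, n - i)

def proportion(n, ell):
    """Corollary 8: proportion of pairs of disjoint n-cycles in K_{2n} whose
    linking number equals ell in a random book embedding."""
    num = sum(eulerian(2 * i - 1, ell + i - 1) / factorial(2 * i - 1)
              * pair_count(n, i) for i in range(1, n + 1))
    return num / comb(2 * n - 1, n - 1)

print([pair_count(4, i) for i in range(1, 5)])      # row n = 4 of Table 3: 4, 18, 12, 1
print(sum(pair_count(4, i) for i in range(1, 5)))    # equals C(7, 3) = 35
print([round(proportion(4, ell), 5) for ell in range(0, 5)])
```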
The values from Corollary 8 for \(n=3\), \(4\), \(5\), and \(6\) are computed and illustrated in Figure 5. Notice that for two \(n\)-cycles in \(K_{2n}\), the maximum number of crossings that can appear is \(2n\), meaning that an upper bound for the absolute value of the linking number is \(n\). Thus, we can normalize the linking number of two monotonic cycles by dividing by \(n\). The distribution of links with a given normalized linking number when \(n=100\), \(200\), \(500\), and \(1000\), are shown in Figure 6. As \(n\) increases, the proportion of links with linking number \(0\) decreases. However, this behavior is misleading as links are distributed among a larger range of possible values for
Figure 5. Proportion of disjoint pairs of \(n\)-cycles with a given linking number in a random book embedding of \(K_{2n}\).
\begin{table}
\begin{tabular}{c|c c c c c c c c c c} \(n\ \backslash\ i\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
3 & 3 & 6 & 1 & & & & & & & \\
4 & 4 & 18 & 12 & 1 & & & & & & \\
5 & 5 & 40 & 60 & 20 & 1 & & & & & \\
6 & 6 & 75 & 200 & 150 & 30 & 1 & & & & \\
7 & 7 & 126 & 525 & 700 & 315 & 42 & 1 & & & \\
8 & 8 & 196 & 1176 & 2450 & 1960 & 588 & 56 & 1 & & \\
9 & 9 & 288 & 2352 & 7056 & 8820 & 4704 & 1008 & 72 & 1 & \\
10 & 10 & 405 & 4320 & 17640 & 31752 & 26460 & 10080 & 1620 & 90 & 1 \\ \end{tabular}
\end{table}
Table 3. Number of pairs of monotonic \(n\)-cycles each with \(i\) interior edges in \(K_{2n}\).
the linking number as \(n\) increases. Normalizing the graph to a density plot as in Figure 7 gives a very different picture of the behavior of linking numbers of disjoint \(n\)-cycles in random book embeddings of \(K_{2n}\): as the number of vertices increases, the normalized linking numbers concentrate closer to \(0\). This model behaves differently from other models, where the mean squared linking number grows as \(\Theta(n^{2})\), as in [1, 2, 18].
In fact, using the exponential generating function for the Eulerian numbers, we can determine an explicit formula for the mean squared linking number in terms
Figure 6. Proportion of links with specified normalized linking number for two monotonic \(n\)-cycles in a random book embedding of \(K_{2n}\)
Figure 7. Density of links with specified normalized linking number for two monotonic \(n\)-cycles in a random book embedding of \(K_{2n}\)
of the number of interior edges \(i\). We will need the following fact from differential calculus.
**Lemma 9**.: _Let \(g(x)=\frac{x^{n}}{(1-x)^{m}}\). Then for \(k\geq 1\), \(g^{(k)}(0)=k!\binom{k-n+m-1}{m-1}\)._
Proof.: For \(|x|<1\), we can express \(\frac{1}{1-x}\) as the power series
\[\frac{1}{1-x}=x^{0}+x^{1}+x^{2}+x^{3}+\dots.\]
Then,
\[g(x)=x^{n}(x^{0}+x^{1}+x^{2}+x^{3}+\dots)^{m},\]
so that \(\frac{g^{(k)}(0)}{k!}\) is the coefficient of \(x^{k}\) in the power series expansion of \(g(x)\). This is the \(x^{k-n}\) coefficient of \((x^{0}+x^{1}+x^{2}+x^{3}+\dots)^{m}\), which is the number of ways to choose \(m\) non-negative integers that add up to \(k-n\). A stars and bars argument counts this as \(\binom{k-n+m-1}{m-1}\), with this binomial coefficient defined to be \(0\) if \(k<n\).
We are now ready to show that the mean squared linking number of two disjoint cycles grows linearly in the number of interior edges \(i\). Heuristically, this means that we expect the linking number to grow roughly as the square root of the number of interior edges.
**Theorem 10**.: _Let \(P\cup Q\) be a union of disjoint \(n\)-cycles with \(i\) interior edges each. Then the mean squared linking number of \(P\cup Q\) in a random book embedding is \(\frac{i}{6}\)._
Proof.: The exponential generating function for the Eulerian numbers is
\[\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}A(n,m)t^{m}\frac{x^{n}}{n!}=\frac{t-1}{ t-e^{(t-1)x}}.\]
Multiplying both sides by \(t^{-i+1}\), we arrive at,
\[\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}A(n,m)t^{m-i+1}\frac{x^{n}}{n!}=\frac{t ^{-i+1}(t-1)}{t-e^{(t-1)x}}.\]
Notice that differentiating the left-hand side twice with respect to \(t\) and taking the limit as \(t\to 1\) yields
\[\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\left((m-i+1)^{2}-(m-i+1)\right)A(n,m) \frac{x^{n}}{n!}.\]
Differentiating this expression \(2i-1\) times with respect to \(x\) and evaluating at \(x=0\) results in
\[\sum_{m=0}^{\infty}(m-i+1)^{2}A(2i-1,m)-(m-i+1)A(2i-1,m).\]
After a substitution of \(\ell=m-i+1\), this becomes
\[\sum_{\ell=-i+1}^{i-1}A(2i-1,i+\ell-1)\ell^{2}-A(2i-1,i+\ell-1)\ell =\sum_{\ell=-i+1}^{i-1}A(2i-1,i+\ell-1)\ell^{2}\] \[=(2i-1)!E[\ell(P\cup Q)^{2}],\]
as the symmetry in the Eulerian triangle means that the expected value of the linking number is \(0\). Hence, the second part of the summation vanishes.
We now repeat the differentiation on the exponential generating function to find an equivalent expression utilizing logarithmic differentiation. We set \(f(t,x)\) to be the exponential generating function,
\[f(t,x)=\frac{t^{-i+1}(t-1)}{t-e^{(t-1)x}},\]
and first compute using L'Hopital's rule,
\[\lim_{t\to 1}f(t,x)=1\cdot\lim_{t\to 1}\frac{t-1}{t-e^{(t-1)x}}=\lim_{t\to 1}\frac{1}{1-xe^{(t-1)x}}=\frac{1}{1-x}.\]
Using logarithmic differentiation, we find that,
\[\frac{f_{t}(t,x)}{f(t,x)} =\frac{-i+1}{t}+\frac{1}{t-1}-\frac{1-xe^{(t-1)x}}{t-e^{(t-1)x}}\] \[=\frac{-i+1}{t}+\frac{(t-e^{(t-1)x})-(t-1)(1-xe^{(t-1)x})}{(t-1)( t-e^{(t-1)x})}\] \[=\frac{-i+1}{t}+\frac{1-e^{(t-1)x}+(t-1)xe^{(t-1)x}}{(t-1)(t-e^{( t-1)x})}.\]
Taking the limit as \(t\to 1\) using L'Hopital's rule twice, we obtain,
\[\lim_{t\to 1}\frac{f_{t}(t,x)}{f(t,x)} =(-i+1)+\lim_{t\to 1}\frac{(t-1)x^{2}e^{(t-1)x}}{(t-e^{(t-1)x})+(t-1)(1- xe^{(t-1)x})}\] \[=(-i+1)+\lim_{t\to 1}\frac{x^{2}e^{(t-1)x}+(t-1)x^{3}e^{(t-1)x}}{1 -xe^{(t-1)x}+1-xe^{(t-1)x}+(t-1)(-x^{2}e^{(t-1)x})}\] \[=(-i+1)+\frac{x^{2}}{2}\cdot\frac{1}{1-x}.\]
The second derivative of \(\log f(t,x)\) is
\[\frac{f_{tt}(t,x)}{f(t)} -\left(\frac{f_{t}(t,x)}{f(t,x)}\right)^{2}=-\frac{-i+1}{t^{2}}- \frac{1}{(t-1)^{2}}+\frac{x^{2}e^{(t-1)x}}{t-e^{(t-1)x}}+\frac{(1-xe^{(t-1)x}) ^{2}}{(t-e^{(t-1)x})^{2}}\] \[=-\frac{-i+1}{t^{2}}+\frac{-(t-e^{(t-1)x})^{2}+(t-1)^{2}[(t-e^{( t-1)x})x^{2}e^{(t-1)x}+(1-xe^{(t-1)x})^{2}]}{(t-1)^{2}(t-e^{(t-1)x})^{2}}.\]
Taking the limit as \(t\to 1\) using L'Hopital's rule four times yields,
\[\lim_{t\to 1}\frac{f_{tt}(t,x)}{f(t)}-\left(\frac{f_{t}(t,x)}{f(t,x)} \right)^{2}=-(-i+1)+\frac{x^{3}}{3}\cdot\frac{1}{(1-x)^{2}}-\frac{x^{4}}{12} \cdot\frac{1}{(1-x)^{2}}.\]
We can then find,
\[\lim_{t\to 1}f_{tt}(t,x) =\lim_{t\to 1}f(t)\left(\frac{f_{tt}(t,x)}{f(t)}-\left(\frac{f_{t}(t,x)}{f(t,x)}\right)^{2}+\left(\frac{f_{t}(t,x)}{f(t,x)}\right)^{2}\right)\] \[=\frac{i(i-1)}{1-x}+\frac{(-i+1)x^{2}}{(1-x)^{2}}+\left(\frac{x^{3 }}{3}+\frac{x^{4}}{6}\right)\frac{1}{(1-x)^{3}}.\]
By Lemma 9, the \((2i-1)\)-th derivative in \(x\) evaluated at \(x=0\) is
\[(2i-1)!\left(i(i-1)+(-i+1)(2i-2)+\frac{1}{3}{2i-2\choose 2}+\frac{1}{6 }{2i-3\choose 2}\right)\] \[=(2i-1)!\left((i-1)(-i+2)+\frac{(2i-2)(2i-3)}{6}+\frac{(2i-3)(2i- 4)}{12}\right)\] \[=(2i-1)!\frac{i}{6}.\]
Hence,
\[(2i-1)!E[\ell(P\cup Q)^{2}]=(2i-1)!\frac{i}{6},\]
completing the proof of the theorem.
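As a quick sanity check of Theorem 10 (not part of the proof), the mean squared linking number can also be computed directly from the Eulerian-number distribution; the values agree with \(i/6\) for small \(i\).

```python
from math import factorial

def eulerian(n, m):
    if n == 0:
        return 1 if m == 0 else 0
    if m < 0 or m > n - 1:
        return 0
    return (m + 1) * eulerian(n - 1, m) + (n - m) * eulerian(n - 1, m - 1)

def mean_squared_linking_number(i):
    """E[lk^2] for two monotonic cycles with i interior edges each."""
    return sum(ell ** 2 * eulerian(2 * i - 1, i + ell - 1)
               for ell in range(-i + 1, i)) / factorial(2 * i - 1)

for i in range(2, 7):
    print(i, mean_squared_linking_number(i), i / 6)   # the two values agree
```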
Using Theorem 10, we can find the asymptotic behavior of the mean squared linking number over all pairs of disjoint \(n\)-cycles in \(K_{2n}\). Recall that a function \(f(n)\) is in order \(\Theta(n)\) if there are positive constants \(a\), \(A\), and \(N\) such that \(an\leq f(n)\leq An\) for all \(n>N\).
**Theorem 11**.: _Let \(n\geq 3\). Then the mean squared linking number of two cycles \(P\) and \(Q\) taken over all pairs of disjoint \(n\)-cycles across all random book embeddings of \(K_{2n}\) is in order \(\Theta(n)\)._
Proof.: By combining Theorem 7 and Theorem 10 and summing over the number of interior edges, the mean squared linking number is
\[\frac{1}{{2n-1\choose n-1}}\sum_{i=2}^{n}{n\choose n-i}{n-1\choose n-i}\frac {i}{6}.\]
Since
\[i{n\choose n-i}=i{n\choose i}=n{n-1\choose i-1},\]
this becomes
\[\frac{1}{{2n-1\choose n-1}}\sum_{i=2}^{n}\frac{n}{6}{n-1\choose i-1}^{2}= \frac{n}{6}\cdot\frac{1}{{2n-1\choose n-1}}\sum_{i=2}^{n}{n-1\choose i-1}^{2}. \tag{1}\]
Using Vandermonde's identity, the summation part of the right-hand side becomes
\[\sum_{i=2}^{n}{n-1\choose i-1}^{2}=\left(\sum_{i=0}^{n-1}{n-1\choose i}^{2} \right)-{n-1\choose 0}^{2}={2n-2\choose n-1}-1.\]
Thus, Equation (1) yields
\[\frac{n}{6}\cdot\frac{1}{{2n-1\choose n-1}}\left({2n-2\choose n-1}-1\right)= \frac{n}{6}\left(\frac{n}{2n-1}-\frac{1}{{2n-1\choose n-1}}\right).\]
For an upper bound, we have
\[\frac{n}{6}\left(\frac{n}{2n-1}-\frac{1}{{2n-1\choose n-1}}\right)\leq\frac{n} {6}\cdot\frac{n}{2n-1}\leq\frac{n}{6}.\]
For a lower bound, we note that if \(n\geq 3\),
\[{2n-1\choose n-1}=\frac{2n-1}{1}\cdot\frac{2n-2}{2}\cdot\ldots\cdot\frac{n+1} {n-1}\cdot\frac{n}{n}\geq(2n-1)(n-1)\geq 2(2n-1).\]
Hence,
\[\frac{n}{6}\left(\frac{n}{2n-1}-\frac{1}{\binom{2n-1}{n-1}}\right)\geq\frac{n}{6} \left(\frac{n}{2n-1}-\frac{1}{2(2n-1)}\right)=\frac{n}{6}\cdot\frac{n-\frac{1} {2}}{2n-1}=\frac{n}{6}\cdot\frac{1}{2}=\frac{n}{12}.\]
Sample calculations show that the mean squared linking number of two \(n\)-cycles in \(K_{2n}\) asymptotically approaches \(\frac{n}{12}\), as seen from the nearly linear relationship between \(n\) and the mean squared linking number in Figure 8. When \(n=100\) and \(n=1000\), the approximate value of the mean squared linking number can be computed from the summation formula in Theorem 11 to be \(\approx 8.37521\) and \(\approx 83.375\), respectively.
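The exact value appearing in the proof of Theorem 11 is easy to tabulate. The illustrative sketch below evaluates \(\frac{n}{6}\left(\frac{n}{2n-1}-\frac{1}{\binom{2n-1}{n-1}}\right)\) and compares it with the asymptotic value \(n/12\); for \(n=100\) it returns the value \(\approx 8.37521\) quoted above.

```python
from math import comb

def mean_squared_linking_number(n):
    """Closed form from the proof of Theorem 11 for pairs of disjoint
    n-cycles in a random book embedding of K_{2n}."""
    return n / 6 * (n / (2 * n - 1) - 1 / comb(2 * n - 1, n - 1))

for n in (3, 10, 100):
    print(n, mean_squared_linking_number(n), n / 12)
# n = 100 gives about 8.37521; evaluating the same expression for n = 1000
# (e.g. with exact fractions) gives about 83.375, as stated in the text.
```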
## 5. Links in random book embeddings of \(K_{6}\)
In this section, we consider the special case of random book embeddings of \(K_{6}\). Rowland has studied all possible topological types of book embeddings of \(K_{6}\), showing that the non-trivial knots and links that appear are the trefoil knot, the figure-eight knot, the Hopf link, and the Solomon's link [21]. Any two-component link in \(K_{6}\) must consist of two disjoint 3-cycles, and every 3-cycle is necessarily monotonic. Moreover, the trivial link has linking number 0, the Hopf link has linking number \(\pm 1\), and the Solomon's link (shown in Figure 4) has linking number \(\pm 2\). Hence, we can utilize Theorems 7 and 10 in the case that \(n=3\) to determine the probabilities of each type of link occurring in a random book embedding.
We separately consider the cases when the number of interior edges in the 3-cycles is \(i=1\), 2, and 3, as in Figure 9, and determine the probability of each type of link occurring in each case. We can then combine with the counts in Table 3 to compute the overall probability that a randomly selected two-component link is either trivial, a Hopf link, or a Solomon's link.
Figure 8. Mean squared linking number of two disjoint \(n\)-cycles in a random book embedding of \(K_{2n}\)
When \(i=1\), it is evident that since the projection of the two cycles has no crossings, then the two-component link is trivial.
When \(i=2\), Table 2 implies that the probability that the two cycles are the Hopf link is \(p_{2}=\frac{1}{3}\), and the probability that the two cycles are the trivial link is \(1-p_{2}=\frac{2}{3}\).
When \(i=3\), Table 2 implies that the probability that the two cycles form the Solomon's link is \(q_{3}=\frac{1}{60}\), the probability that the two cycles form the Hopf link is \(p_{3}=\frac{13}{30}\), and the probability that the two cycles form the trivial link is \(1-p_{3}-q_{3}=\frac{11}{20}\).
Table 3 details the frequency with which the 10 pairs of disjoint 3-cycles in \(K_{6}\) have 1, 2, or 3 interior edges. From this, we determine that the probability that a randomly chosen pair of disjoint 3-cycles in a random book embedding of \(K_{6}\) is trivial is
\[\frac{1}{10}\left(3\cdot 1+6\cdot\frac{2}{3}+1\cdot\frac{11}{20}\right)=\frac{15 1}{200}.\]
Similarly, the probability that a randomly chosen pair of disjoint 3-cycles in a random book embedding of \(K_{6}\) is the Hopf link is
\[\frac{1}{10}\left(3\cdot 0+6\cdot\frac{1}{3}+1\cdot\frac{13}{30}\right)=\frac{7 3}{300}.\]
Finally, the probability that a randomly chosen pair of disjoint 3-cycles in a random book embedding of \(K_{6}\) is the Solomon's link is
\[\frac{1}{10}\left(3\cdot 0+6\cdot 0+1\cdot\frac{1}{60}\right)=\frac{1}{600}.\]
Since \(K_{6}\) contains 10 distinct disjoint pairs of 3-cycles, this implies that in a random book embedding of \(K_{6}\), the expected number of trivial links is \(\frac{151}{20}\), the expected number of Hopf links is \(\frac{73}{30}\), and the expected number of Solomon's links is \(\frac{1}{60}\). It is a classical result in spatial graph theory that every embedding of \(K_{6}\) contains at least one non-trivial link [5]. In a random book embedding of \(K_{6}\), the expected number of non-trivial links is \(\frac{49}{20}\), with nearly all of the non-trivial links represented by Hopf links.
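The probabilities above can be assembled with exact arithmetic, as in the following illustrative sketch (the per-case probabilities and counts are taken from Tables 2 and 3).

```python
from fractions import Fraction

# Per-case link probabilities for two disjoint 3-cycles with i interior edges.
p_hopf    = {1: Fraction(0), 2: Fraction(1, 3), 3: Fraction(13, 30)}
p_solomon = {1: Fraction(0), 2: Fraction(0),    3: Fraction(1, 60)}
counts    = {1: 3, 2: 6, 3: 1}   # Table 3, row n = 3 (10 pairs in total)

hopf    = sum(counts[i] * p_hopf[i]    for i in counts) / 10
solomon = sum(counts[i] * p_solomon[i] for i in counts) / 10
trivial = 1 - hopf - solomon

print(trivial, hopf, solomon)    # 151/200, 73/300, 1/600
print(10 * (hopf + solomon))     # expected number of non-trivial links: 49/20
```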
## 6. Acknowledgments
The authors would like to thank the National Science Foundation for supporting this work. This research was partially supported by National Science Foundation Grant DMS-1852132.
In addition, the authors would like to thank the Department of Mathematics at Rose-Hulman Institute of Technology for their hospitality and for hosting the
Figure 9. Projections of two 3-cycles in \(K_{6}\) with \(i=1\) (left), \(i=2\) (middle), and \(i=3\) (right) interior edges.
Rose-Hulman Institute of Technology Mathematics Research Experience for Undergraduates, where most of this work was completed.
|
2303.14099 | Do Random and Chaotic Sequences Really Cause Different PSO Performance? | Our topic is performance differences between using random and chaos for
particle swarm optimization (PSO). We take random sequences with different
probability distributions and compare them to chaotic sequences with different
but also with same density functions. This enables us to differentiate between
differences in the origin of the sequences (random number generator or chaotic
nonlinear system) and statistical differences expressed by the underlying
distributions. Our findings (obtained by evaluating the PSO performance for
various benchmark problems using statistical hypothesis testing) cast
considerable doubt on previous results which compared random to chaos and
suggested that the choice leads to intrinsic differences in performance. | Paul Moritz Nörenberg, Hendrik Richter | 2023-02-09T17:11:34Z | http://arxiv.org/abs/2303.14099v2 | # Do Random and Chaotic Sequences Really Cause Different PSO Performance?
###### Abstract
Our topic is performance differences between using random and chaos for particle swarm optimization (PSO). We take random sequences with different probability distributions and compare them to chaotic sequences with different but also with same density functions. This enables us to differentiate between differences in the origin of the sequences (random number generator or chaotic nonlinear system) and statistical differences expressed by the underlying distributions. Our findings (obtained by evaluating the PSO performance for various benchmark problems using statistical hypothesis testing) cast considerable doubt on previous results which compared random to chaos and suggested that the choice leads to intrinsic differences in performance.
## 1 Introduction
Whether and under what circumstances chaotic sequences give different performance results than random sequences in evolutionary computation is a topic of intensive debate [2, 3, 4, 5, 9, 10, 12, 13, 15]. This debate has mainly been conducted by taking a chaotic map and comparing, for a set of benchmark problems, the performance results obtained with sequences from the map against the results obtained with pseudo random number generators (PRNGs). In most of these works we find the claim that chaos somehow improves the performance compared to random, but differing results have also been reported [1, 2, 5, 6, 10, 11, 12, 14, 15]. With the focus on particle swarm optimization (PSO), this paper adds to the discussion but proposes a different approach.
Suppose there are differences in PSO performance between using chaotic and random sequences and given the PSO implementations and parameters are the same except the sequences used. Then, these differences in performance must correspond to differences between the sequences, and a question worth asking is how these sequences could possibly differ. Apart from the difference in origin of both types of sequences, the main reason which could make chaotic sequences different from random ones concerns the sequences as a whole and involves differences in the underlying distribution or density.
Our addition to the discussion about performance differences between chaotic and random sequences is to disentangle the effect of the underlying distribution and effects connected to how the sequences are generated (PRNGs or chaotic maps). It is well-known that PRNGs can produce random sequences associated with different probability distributions, thus offering to study the effect that different distributions have on PSO performance, see Figure 1(a) for the distributions considered in this study. A first observation is that chaotic sequences obtained from a certain variety of nonlinear maps also have different invariant densities, thus enabling to study the same effect, see Figure 1(b) for the densities considered here. Another key
observation is that certain chaotic sequences have a density which has the same algebraic description as the distribution of known random sequences. For instance, the time evolution of the Logistic map with \(r=4\) has the density \(\varrho(z)=\left(\pi\sqrt{z(1-z)}\right)^{-1}\) which is the same function as the probability density of the Beta distribution \(\mathcal{B}(\alpha/\beta)\) with \(\alpha=\beta=0.5\). In other words, for certain parameter settings chaotic sequences from the Logistic map and random sequences from the Beta distribution are statistically equivalent as they can be described by one and the same distribution. These observations suggest the following experimental setup. We take chaotic and random sequences which have either the same distribution or different ones. We test these sequences in otherwise identical PSO implementations. In this way, we can differentiate the effect that different or same distributions have on performance for either random or chaotic sequences. Our main result obtained by null hypothesis testing with the Wilcoxon rank-sum test is that the underlying distribution is the key factor in performance, while the origin (either chaotic or random) is secondary.
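The claimed statistical equivalence of the Logistic map at \(r=4\) and \(\mathcal{B}(0.5/0.5)\) is easy to illustrate numerically. The following sketch is not part of the experimental code of this paper; it compares a logistic-map orbit with Beta-distributed PRNG samples using a two-sample Kolmogorov-Smirnov test, with a uniform sample included as a contrast.

```python
import numpy as np
from scipy import stats

def logistic_orbit(length, z0=0.37, burn_in=1000):
    """Iterate z(k+1) = 4 z(k) (1 - z(k)); its invariant density is
    1 / (pi * sqrt(z (1 - z))), i.e. the Beta(0.5, 0.5) density."""
    z = z0
    for _ in range(burn_in):
        z = 4.0 * z * (1.0 - z)
    out = np.empty(length)
    for k in range(length):
        z = 4.0 * z * (1.0 - z)
        out[k] = z
    return out

rng = np.random.default_rng(1)
chaotic = logistic_orbit(10_000)
beta = rng.beta(0.5, 0.5, size=10_000)
uniform = rng.uniform(size=10_000)

# In typical runs the first test does not reject equality of distributions,
# while the comparison against the uniform sample clearly does.
print(stats.ks_2samp(chaotic, beta))
print(stats.ks_2samp(chaotic, uniform))
```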
## 2 Experimental setup
_Particle swarm optimization_ (PSO) is a generational population-based algorithm used for calculating optima of single- and multidimensional functions \(f(x)\). A PSO particle holds 4 variables for each generation \(t\): its position \(x(t)\in\mathbb{R}^{D}\) and velocity \(v(t)\in\mathbb{R}^{D}\) in search space, its own best position (local best position) \(p^{l}(t)\) and the best position of any particle of the population that occurred during all generations (global best position) \(p^{g}(t)\). The particle movement is described by
\[x(t+1)= x(t)+v(t+1) \tag{1}\] \[v(t+1)= wv(t)+c_{1}r_{1}(p^{l}(t)-x(t))+c_{2}r_{2}(p^{g}(t)-x(t)). \tag{2}\]
The parameters \(w\), \(c_{1}\) and \(c_{2}\) are the inertial, cognitive and social weights. The random variables \(r_{1},r_{2}\) are taken from the sequences produced by either the PRNGs or chaotic maps. At \(t=0\), each particle \(i\), \(i=1,2,\ldots I\), of swarm size \(I\) is initialized randomly at position \(x(0)\) with velocity \(v(0)\). The fitness function is evaluated for the position of every particle. The position with the highest fitness that occurred in this evaluation is stored as global best position in each particle.
In the simulation we take frequently used standard values of the PSO parameters: \(w=0.79\), \(c_{1}=1.49\), \(c_{2}=1.49\), \(200\) generations per run, and \(1000\) runs per sample. The performance of PSO runs is quantified by the mean distance error, which is the mean absolute difference in search space between the best solution found and the global optimum of the function \(f(x)\). A good performance means a low mean distance between the found best result and the actual optimum of the fitness function.
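A minimal PSO loop following Eqs. (1) and (2) is sketched below. It is an illustrative reimplementation, not the code used for the experiments: the source of \(r_{1}\) and \(r_{2}\) is passed in as a callable, so a PRNG with any distribution or an iterated chaotic map can be plugged in without touching the optimizer.

```python
import numpy as np

def pso(f, dim, bounds, next_r, swarm=30, generations=200,
        w=0.79, c1=1.49, c2=1.49, seed=0):
    """Minimize f over [lo, hi]^dim; next_r() yields the numbers used as r1, r2."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(swarm, dim))
    v = rng.uniform(-(hi - lo), hi - lo, size=(swarm, dim))
    p_best, p_val = x.copy(), np.array([f(xi) for xi in x])
    g_idx = int(p_val.argmin())
    g_best, g_val = p_best[g_idx].copy(), float(p_val[g_idx])

    for _ in range(generations):
        for i in range(swarm):
            r1, r2 = next_r(), next_r()
            v[i] = (w * v[i] + c1 * r1 * (p_best[i] - x[i])
                    + c2 * r2 * (g_best - x[i]))
            x[i] = x[i] + v[i]
            val = f(x[i])
            if val < p_val[i]:
                p_val[i], p_best[i] = val, x[i].copy()
                if val < g_val:
                    g_val, g_best = val, x[i].copy()
    return g_best

# Example: Sphere function with r1, r2 drawn from a Beta(0.5/0.5) PRNG.
beta_rng = np.random.default_rng(1)
best = pso(lambda z: float(np.sum(z ** 2)), dim=10, bounds=(-5.0, 5.0),
           next_r=lambda: float(beta_rng.beta(0.5, 0.5)))
print(np.linalg.norm(best))   # distance error to the optimum at the origin
```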
To compare the impact of chaotic and random sequences on PSO the following test functions \(f(x)\) from the IEEE CEC 2013 test suite [7] are employed:
* 1 Equal Maxima (1 dimension)
* 2 Uneven Decreasing Maxima (1 dimension)
* 3 Himmelblau (2 dimension)
* 4 Six-Hump Camel Back (2 dimension)
* 5 Shubert (2 dimension)
* 6 Vincent (2 dimension)
* 7-9 Rastrigin (10, 20 and 30 dimensions)
* 10-12 Rosenbrook (10, 20 and 30 dimensions)
* 13-15 Sphere (10, 20 and 30 dimensions).
Random sequences with the following distributions are used:
* \(\mathcal{B}(0.5/0.5)\) Beta distribution
* \(\mathcal{B}(1/5)\) Beta distribution
* \(\mathcal{B}(5/1)\) Beta distribution
* \(\mathcal{N}(0.5/0.1)\) normal distribution.
Chaotic sequences from the following chaotic maps are taken:
* Logistic map, \(z(k+1)=rz(k)(1-z(k))\), \(r=4\)
* Cubic map, \(z(k+1)=rz(k)(1-z(k)^{2})\), \(r=2.62\)
* Bellows map, \(z(k+1)=rz(k)(1+z(k))^{-6}\), \(r=2.0\).
\begin{table}
\begin{tabular}{|c||c c|c c|c c|c c|c c|c c|c c|} \hline \multirow{2}{*}{Test} & \multicolumn{2}{c|}{Logistic Map} & \multicolumn{2}{c|}{Cubic Map} & \multicolumn{2}{c|}{Bellows Map} & \multicolumn{2}{c|}{\(\mathcal{B}(0.5/0.5)\)} & \multicolumn{2}{c|}{\(\mathcal{N}(0.5/0.1)\)} & \multicolumn{2}{c|}{\(\mathcal{B}(1/5)\)} & \multicolumn{2}{c|}{\(\mathcal{B}(5/1)\)} \\ & \(\mu\) & \(\sigma\) & \(\mu\) & \(\sigma\) & \(\mu\) & \(\sigma\) & \(\mu\) & \(\sigma\) & \(\mu\) & \(\sigma\) & \(\mu\) & \(\sigma\) & \(\mu\) & \(\sigma\) \\ \hline
1 & 0.44e-4 & 0.16e-3 & 0.32e-4 & 0.18e-3 & 0.03e-4 & 0.01e-3 & 0.41e-4 & 0.16e-3 & **0.00e-4** & 0.00e-3 & **0.00e-4** & 0.00e-3 \\ \hline
2 & **0.10e-2** & 1.05e-2 & 0.36e-2 & 1.95e-2 & 0.15e-2 & 1.38e-2 & 1.16e-2 & 1.12e-2 & 4.12e-2 & 1.13e-2 & 2.42e-2 & 1.19e-2 & 2.42e-2 \\ \hline
3 & **1.58e-1** & 0.60e-2 & **1.58e-1** & 0.70e-2 & 1.60e-1 & 0.94e-2 & **1.58e-1** & 0.68e-2 & 1.63e-1 & 1.17e-2 & 1.59e-1 & 0.87e-2 & 1.59e-1 & 0.86e-2 \\ \hline
4 & **1.66e-1** & 2.61e-2 & **1.66e-1** & 2.59e-2 & **1.66e-1** & 2.62e-2 & **1.66e-1** & 2.60e-2 & **1.66e-1** & 2.61e-2 & 1.68e-1 & 2.54e-2 & **1.66e-1** & 2.61e-2 \\ \hline
5 & 5.16e-2 & 3.26e-2 & 5.22e-2 & 3.06e-2 & 5.23e-2 & 3.19e-2 & 5.22e-2 & 2.31e-2 & 4.93e-2 & 3.09e-2 & **4.82e-2** & 2.28e-2 & 4.98e-2 & 3.07e-2 \\ \hline
6 & 3.62e-6 & 0.22e-5 & 3.06e-6 & 0.22e-5 & 4.11e-6 & 1.41e-5 & 3.68e-6 & 0.22e-5 & **3.44e-6** & 0.22e-5 & 3.43e-6 & 0.22e-5 & 3.45e-6 & 0.22e-5 \\ \hline
7 & 7.74e-2 & 1.21e-2 & **1.72e-2** & 7.19e-2 & 7.73e-2 & 2.11e-2 & 2.76e-2 & 2.22e-2 & 7.39e-2 & 1.92e-2 & 3.74e-2 & 1.88e-2 & 7.51e-2 & 2.02e-2 \\ \hline
8 & 8.03e-2 & 1.75e-2 & 7.72e-2 & 1.77e-2 & 8.09e-2 & 1.99e-2 & 7.95e-2 & 1.79e-2 & 7.65e-2 & 1.74e-2 & **7.60e-2** & 1.54e-2 & 7.84e-2 & 1.85e-2 \\ \hline
9 & 8.38e-2 & 1.78e-2 & 8.02e-2 & 1.73e-2 & 8.84e-2 & 1.98e-2 & 8.35e-2 & 1.77e-2 & 8.02e-2 & 1.84e-2 & **7.75e-2** & 1.44e-2 & 8.39e-2 & 2.11e-2 \\ \hline
10 & 7.28e-2 & 1.73e-2 & 7.29e-2 & 1.79e-2 & 8.05e-2 & 1.55e-2 & **7.20e-2** & 1.80e-2 & 8.93e-2 & 1.28e-2 & 8.87e-2 & 1.12e-2 & 8.65e-2 & 1.35e-2 \\ \hline
11 & 9.14e-2 & 0.74e-2 & 9.18e-2 & 0.69e-2 & 9.97e-2 & 0.78e-2 & 9.12e-2 & 0.74e-2 & 9.05e-2 & 0.77e-2 & 9.06e-2 & 0.70e-2 & **9.04e-2** & 0.76e-2 \\ \hline
12 & 9.06e-2 & 0.63e-2 & 9.05e-2 & 0.66e-2 & 9.10e-2 & 0.66e-2 & 9.12e-2 & 0.61e-2 & **9.03e-2** & 0.67e-2 & 9.08e-2 & 0.64e-2 & 9.04e-2 & 0.70e-2 \\ \hline
13 & **0.01e-4** & 0.02e-4 & 0.03e-4 & 0.05e-4 & 0.05e-4 & 0.38e-4 & **0.01e-4** & 0.02e-4 & 0.36e-4 & 1.73e-4 & 0.36e-4 & 1.72e-4 & 1.36e-4 & 2.26e-4 \\ \hline
14 & 0.64e-2 & 0.28e-2 & **0.51e-2** & 0.27e-2 & 0.81e-2 & 0.33e-2 & 0.68e-2 & 0.29e-2 & 0.73e-2 & 0.30e-2 & 0.86e-2 & 0.33e-2 & 0.94e-2 & 0.35e-2 \\ \hline
15 & 1.67e-2 & 0.41e-2 & **1.55e-2** & 0.40e-2 & 1.83e-2 & 0.43e-2 & 1.71e-2 & 0.42e-2 & 1.67e-2 & 0.43e-2 & 1.91e-2 & 0.44e-2 & 1.97e-2 & 0.44e-2 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between chaotic and random sequences for the test functions and distributions listed in Section 2. Mean and standard deviation of the PSO performance over 1000 runs each. Bold numbers indicate the best performance for each of the 15 test functions.
Figure 1: Distribution of sequences used in the study (a) Natural invariant densities of chaotic maps. (b) Probability density functions of random variables.
## 3 Results
Table 1 presents a comparison between chaotic and random sequences for the test functions and the distributions in Section 2. For each combination of test function and distribution the mean performance and the standard deviation of the performance are given for 1000 PSO runs each. Bold numbers indicate the best performance for each of the 15 test functions. The results do not vary much for a given test function; there are several test functions where more than one distribution scores best results, but there are also subtle differences in performance. For instance, PSO driven by sequences associated with the Cubic map performs best for 5 out of the 15 test functions, but the Bellows map scores only 1 out of 15 and the Logistic map has 4 out of 15. For the random sequences the results are a little more balanced, with PSO driven by sequences from \(\mathcal{B}(0.5/0.5)\) and \(\mathcal{N}(0.5/0.1)\) performing best 4 times out of 15, while \(\mathcal{B}(1/5)\) and \(\mathcal{B}(5/1)\) come up with 3 best results. At least for the selection considered, every distribution is best for at least one test function. Moreover, if we were to compare chaotic sequences from the Cubic map and random sequences from \(\mathcal{B}(1/5)\) (or \(\mathcal{B}(5/1)\)), we might be inclined to conclude that chaos and random give different performances, with chaos being somewhat better. In the following, we analyse the performance results and argue that these differences are rather caused by differences in the underlying distribution and to a lesser degree by the fact that the sequences come from either PRNGs or chaotic maps.
A first step is to display the distribution of the 1000 PSO runs as boxplots in order to evaluate not only mean and standard deviation of the results, but also quartiles and outliers. The two-dimensional Shubert function serves as an example showing systematic differences in PSO performance which are typical for all test functions considered; see Figure 2. In addition to the data in Table 1, we also give the result of a re-run with \(\mathcal{B}(0.5/0.5)\), for which we get the same mean and standard deviation, but a slight difference in the lower whisker. Although the means and standard deviations and also the median, first and third quartiles for all distributions are almost equal, for PSO run samples that were driven by \(\mathcal{B}(0.5/0.5)\), the Logistic map and the Bellows map, the lower quartile of the performance data is significantly narrower than its counterparts resulting from \(\mathcal{B}(1/5)\), \(\mathcal{B}(5/1)\), \(\mathcal{N}(0.5/0.1)\) and the Cubic map. In other words, we may conclude that PSO performs slightly worse when using \(\mathcal{B}(0.5/0.5)\), the Logistic map or the Bellows map, respectively, as for sequences from the other distributions very good results fall into the first quartile group. Moreover, for these 3 distributions the performance is almost the same. Furthermore, \(\mathcal{B}(0.5/0.5)\) in both runs and the Logistic map are most similar.
Our second analysis step is to compare the performance with a nonparametric statistical test of the null hypothesis. For this task, the Wilcoxon rank-sum test is widely used to verify performance differences between variants of metaheuristic algorithms. It is here taken to evaluate performance differences between
Figure 2: Box plot for PSO performance data for the Shubert function.
chaotic and random sequences. The two-sided test evaluates the null hypothesis that two given data samples come from continuous distributions with the same median. We use a \(0.05\) significance level. The Wilcoxon rank-sum test is applied to every combination of PSO performance samples. Table 2 states for each comparison between differently driven PSO runs the following tuple \([+,-,\thickapprox]\) with:
* \(+\)\(\ldots\)number of test functions where the PSO driven by the sequence in the _row_ performs better
* \(-\)\(\ldots\)number of test functions where the PSO driven by the sequence in the _column_ performs better
* \(\thickapprox\)\(\ldots\)number of test functions where the PSO performances are not distinguishable by the Wilcoxon rank-sum test.
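A tally of this kind can be computed, for example, with scipy's implementation of the rank-sum test. The sketch below is illustrative only; in particular, reading the direction of a significant difference off the sign of the test statistic (lower mean distance error meaning better performance) is our own simplification.

```python
import numpy as np
from scipy import stats

def compare(errors_a, errors_b, alpha=0.05):
    """Classify one benchmark comparison as '+', '-' or '~' via the
    two-sided Wilcoxon rank-sum test on per-run distance errors."""
    stat, p = stats.ranksums(errors_a, errors_b)
    if p >= alpha:
        return "~"                      # not distinguishable
    return "+" if stat < 0 else "-"     # negative statistic: a has lower errors

def tally(results_a, results_b):
    """results_* map test-function ids to arrays of per-run distance errors."""
    outcomes = [compare(results_a[t], results_b[t]) for t in results_a]
    return outcomes.count("+"), outcomes.count("-"), outcomes.count("~")

# Toy usage with synthetic error samples for 15 test functions.
rng = np.random.default_rng(0)
a = {t: rng.normal(0.08, 0.02, 1000) for t in range(15)}
b = {t: rng.normal(0.09, 0.02, 1000) for t in range(15)}
print(tally(a, b))
```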
The results show that performance differences primarily occur when the used sequences differ in the underlying probability distribution. For instance, compare the results for the Logistic map with \(\mathcal{B}(0.5/0.5)\), which gives \(5\) superior, \(6\) inferior and \(4\) equal performances, or the results for \(\mathcal{B}(0.5/0.5)\) vs. \(\mathcal{B}(1/5)\) with \(4\) superior, \(5\) inferior and \(6\) equal. In contrast, the PSO using sequences from \(\mathcal{B}(0.5/0.5)\) and the Logistic map have very small deviations in performance with \(1\) superior, \(2\) inferior and \(12\) equal. This is almost on a par with a re-run with another set of realizations of \(\mathcal{B}(0.5/0.5)\), which gives \(1\) inferior and \(14\) equal. Another interesting comparison is the overall performance of chaotic sequences vs. random sequences; see the bold numbers in Table 2 and the summation in the last row and column. Here, we find for chaos vs. random that the Cubic map gives \(22\) superior, \(11\) inferior and \(27\) equal performances, while the Logistic and Bellows map score \(14\) superior vs. \(18\) (and \(22\)) inferior and \(28\) (and \(24\)) equal performances. Similar results can be recorded from the perspective of random sequences vs. chaotic sequences. For instance, \(\mathcal{B}(0.5/0.5)\) has \(7\) superior, \(9\) inferior and \(29\) equal performances, but \(\mathcal{N}(0.5/0.1)\) scores \(16\) superior, \(11\) inferior and \(18\) equal. To summarize, some chaotic sequences perform better than random, but there are also some random sequences which perform better than chaos, which agrees with previously reported results [1, 2, 5, 6, 10, 11, 12, 14, 15]. In view of the results presented here, however, it does not appear plausible to assume the presence of general and systematic differences in performance between chaotic and random sequences independent of the underlying distribution. Given the fact that for random and chaotic sequences with the same distribution, Logistic map and \(\mathcal{B}(0.5/0.5)\), we get the smallest difference in performance, while for all other combinations of different sources, we get different performances, the results rather support the conclusion that the underlying distribution rather than the origin is the main influential factor in PSO performance.
## 4 Discussion
There is an ongoing debate about using random and chaotic sequences for driving population-based meta-heuristic search algorithms and the effect on behaviour and performance such a selection has [2, 3, 4, 5, 9, 10, 12, 13, 15]. This paper adds to this discussion with the focus on PSO and proposed the following approach.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \(\mathcal{B}(0.5/0.5)\) & \begin{tabular}{c} cubic \\ map \\ \end{tabular} &
\begin{tabular}{c} Bellows \\ map \\ \end{tabular} & \(\mathcal{N}(0.5/0.1)\) & \(\mathcal{B}(1/5)\) & \(\mathcal{B}(5/1)\) & chaotic vs. random \\ \hline \(\mathcal{B}(0.5/0.5)\) & [1,0,14] & & [4,7,4] & [5,6,4] & [5,4,6] & \\ \hline logistic map & **[2,1,12]** & [1,5,9] & [5,2,8] & **[3,6,6]** & **[5,6,4]** & **[4,5,6]** & 14 (+), 18 (-), 28 (\(\thickapprox\)) \\ \hline cubic map & **[5,2,8]** & & [9,1,4] & **[5,3,7]** & **[5,4,6]** & **[7,2,6]** & 22 (+), 11 (-), 27 (\(\thickapprox\)) \\ \hline Bellows map & & & & **[3,7,5]** & **[5,6,4]** & **[4,5,6]** & 14 (+), 22 (-), 24 (\(\thickapprox\)) \\ \hline \(\mathcal{B}(1/5)\) & & & & & [2,3,10] & [5,3,7] & \\ \hline random vs. chaotic & \(7\) (+), \(9\) (\(-\)), \(29\) (\(\thickapprox\)) & & 16 (+), 11 (\(-\)), 18 (\(\thickapprox\)) & 16 (+), 15 (\(-\)), 14 (\(\thickapprox\)) & 12 (+), 15 (\(-\)), 18 (\(\thickapprox\)) & \\ \hline \end{tabular}
\end{table}
Table 2: Performance comparison between chaotic and random sequences by the Wilcoxon rank-sum test for all \(15\) test functions and the \(7\) distributions according to Section 2. For each cell the tuple \([+,-,\thickapprox]\) indicates with \(+\) that the sequences from the distribution in the row perform better, with \(-\) that the sequences from the distribution in the column perform better, while for \(\thickapprox\) there is no difference in performance. Bold numbers indicate comparisons between chaotic and random sequences.
We not only use random sequences associated with different probability distributions, but also compare to chaotic sequences with different density distributions and complete the experimental setup by random and chaotic sequences with the same distribution. This enables us to differentiate between differences in the origin of the sequences (PRNGs or chaotic maps) and statistical differences expressed by the underlying distributions.
These differences in statistical properties can potentially have profound influences on PSO search dynamics, and thus on behaviour and performance. The particle movement directly depends on the sequences as successive particle positions are calculated via cognitive and social components weighted by the elements from the sequences. The distribution associated with the sequences defines global, long-term search preferences of the particle swarm. If, for instance, the distribution is rather bell-shaped, as for a normal distribution, then the search is more likely centered on the interior of the search space region where the swarm is currently situated. If, on the other hand, the distribution is rather U-shaped, as for Beta distributions with \(\alpha=\beta<1\), the search more likely concentrates on the exterior of the current region. How such a varying search bias interacts with unimodal or multimodal functions of the optimization problem causes different types of swarm diversity decline. By this mechanism, the distribution of the numbers in the random and chaotic sequences influences the ratio of exploration and exploitation during the PSO run and thus behavior and performance.
In the experimental setup, we considered 3 sources of chaotic sequences, the Logistic, Cubic and Bellows maps, and 4 sources of random sequences, a normal distribution \(\mathcal{N}(0.5/0.1)\) and 3 Beta distributions, \(\mathcal{B}(0.5/0.5)\), \(\mathcal{B}(1/5)\) and \(\mathcal{B}(5/1)\). With this selection, we took distributions which are not constant but vary over the sample space. This is in contrast to a uniform distribution, which is constant and produces random sequences which are equally spread on a fixed interval. For this type of distribution, results similar to those discussed here are known. The invariant density of the tent map equals a uniform distribution, and differences in performance between chaotic sequences generated by the tent map as compared to sequences generated by the Logistic map have been shown [6, 11], which is consistent with and supplements our results.
## 5 Conclusions
We considered PSO and addressed the question of whether or not using chaotic or random sequences really causes different performance. Thus, our discussion is not so much about what sequence performs best, but rather about identifying reasons why some sequences are better than others. Our findings cast considerable doubt on previous results which compared random to chaos and suggested that there are intrinsic differences in PSO performance, irrespective and independent of the associated distributions. We may choose either random or chaotic sequences to drive PSO and observe differences in performance. But most likely these differences occur because the underlying distributions of the chosen sequences are different. In other words, if chaotic sequences for a certain optimization problem (or also for a certain collection of optimization problems) lead to better performance, then this is primarily the case because the distribution is better suited to the problem.
|
2308.01080 | Leveraging Few-Shot Data Augmentation and Waterfall Prompting for
Response Generation | This paper discusses our approaches for task-oriented conversational
modelling using subjective knowledge, with a particular emphasis on response
generation. Our methodology was shaped by an extensive data analysis that
evaluated key factors such as response length, sentiment, and dialogue acts
present in the provided dataset. We used few-shot learning to augment the data
with newly generated subjective knowledge items and present three approaches
for DSTC11: (1) task-specific model exploration, (2) incorporation of the most
frequent question into all generated responses, and (3) a waterfall prompting
technique using a combination of both GPT-3 and ChatGPT. | Lea Krause, Selene Báez Santamaría, Michiel van der Meer, Urja Khurana | 2023-08-02T11:04:27Z | http://arxiv.org/abs/2308.01080v1 | # Leveraging Few-Shot Data Augmentation and Waterfall Prompting for Response Generation
###### Abstract
This paper discusses our approaches for task-oriented conversational modelling using subjective knowledge, with a particular emphasis on response generation. Our methodology was shaped by an extensive data analysis that evaluated key factors such as response length, sentiment, and dialogue acts present in the provided dataset. We used few-shot learning to augment the data with newly generated subjective knowledge items and present three approaches for DSTC11: (1) task-specific model exploration, (2) incorporation of the most frequent question into all generated responses, and (3) a waterfall prompting technique using a combination of both GPT-3 and ChatGPT.
## 1 Introduction
Task-Oriented Dialogue (TOD) Systems are traditionally designed to facilitate users in achieving specific objectives, such as looking up train times or booking a flight in a dialogue setting. For these tasks, the models are often given access to a database of factual information to complete the task. However, other tasks necessitate not only factual but also subjective insights, which are derived from other users' opinions. Handling subjective knowledge and using it for generating dialogue responses is the core of the _Subjective-Knowledge-based Task-Oriented Dialogue (SK-TOD)_Zhao et al. (2023) challenge. The challenge is set up as conversations between users and artificial assistants, inquiring about and potentially booking hotels or restaurants. The organisers provided dialogue snapshots and a knowledge base with subjective reviews and FAQs related to said hotels and restaurants.
The challenge consists of three interlinked subtasks: 1) _Knowledge Seeking Turn Detection_, where it is determined whether a turn needs knowledge to create an appropriate response; 2) _Knowledge Selection_, where relevant items are selected from the knowledge base; and 3) _Knowledge-Grounded Response Generation_, where the dialogue history and the selected knowledge items must be aggregated to generate a concise response for the user.
Our primary interest was in sub-task 3, but due to the high correlation of performance with sub-task 2, we simultaneously worked on improving the knowledge selection (see Figure 1). Furthermore, we experimented with splitting sub-task 3 into two smaller tasks. First, we perform few-shot data augmentation, supported by a detailed data analysis, to substantially increase the knowledge base and address generalisation to new domains. Despite its marginal impact on the final test set, given no new domains were added, it forms a robust strategy for future generalisations. Second, we explored task-specific models and found that using a larger model leads to slight improvements in task performance. In addition, we augmented the generated responses by integrating the most frequently asked question, aiming to
Figure 1: Overview of the approaches for _Knowledge Selection_ and _Response Generation_. For approach 3a only the best-performing model is depicted.
enhance conversation coherence and engagement. Lastly, we also employed a waterfall prompting technique that integrated a combination of Large Language Models. This strategy, though it scored lower in quantitative metrics compared to the baseline, possibly due to the abstractive nature of these models, is a promising avenue for further research1.
Footnote 1: Our code is available at [https://github.com/lkra/dstcl1-track5/tree/main/CLTeamL](https://github.com/lkra/dstcl1-track5/tree/main/CLTeamL).
## 2 Related work
**Data Augmentation.** In low-resource data scenarios, synthetic data augmentation has been shown to be a cost-efficient and effective way to increase dataset size (Anaby-Tavor et al., 2020). Traditional data augmentation techniques for natural language processing often include simple transformations like synonym substitution, random insertion, deletion, and swapping of words (Wei and Zou, 2019). A more complex approach is paraphrasing, where the meaning stays the same but words and syntax change; a popular example of this is backtranslation in machine translation (Sennrich et al., 2016). Data augmentation was also used in successful approaches of last year's edition of DSTC (Tian et al., 2021; Thulke et al., 2022).
Recently, the emergence of large language models (LLMs) like GPT-3 (Brown et al., 2020) has opened up new avenues for data augmentation. One approach is to prompt these models to generate additional data that fits the distribution of the original dataset (Kumar et al., 2020; Xia et al., 2020; Lee et al., 2021). This leverages the capacity of LLMs to generate coherent and contextually appropriate sentences, thereby creating augmented data that closely mirrors real-world linguistic diversity.
**Waterfall Prompting.** Combining prompts or rerunning the same prompt has shown increased performance and robustness. For example, Singhal et al. (2023) coin the term _ensemble refinement_, which involves using a two-step prompting approach. In the first iteration, multiple answers are generated for a prompt, and in the second iteration, these answers are incorporated into the prompt to develop a final solution that is more robust for the given task. Similar efforts have been explored, such as self-consistency (Wang et al., 2023), recitation-augmentation (Sun et al., 2023), and rationale-augmentation (Wang et al., 2022). Pitis et al. (2023) expand on this framework by incorporating multiple rounds of prompting, each building upon the improvements made in the previous round. However, these still focus on prompting a single model and differ from a typical ensemble where different models' output is aggregated (Wang et al., 2023). To the best of our knowledge, there is limited work on waterfall prompting or assembling the responses from different language models.
**Previous DSTC editions.** According to system reports from the DSTC9 and DSTC10 challenges, enhancing the performance of sub-task 2 significantly affects the overall task performance (Kim et al., 2020). In the DSTC9 iteration, it was found that ensemble model approaches are effective for knowledge selection (Kim et al., 2020). In DSTC10, successful approaches used a separate entity tracking component for knowledge selection to narrow down the search space before document ranking (Kim et al., 2022).
## 3 Data Analysis
Before developing our approach to the DSTC tasks, we analyse the available data. We 1. inspect the correlation between features, 2. investigate the sentiment of the available data, and 3. examine the distribution of dialogue act types.
### Feature correlation
To steer our approach, we investigate the potential influence of various factors on automatic metric scores. We examine correlations between the following variables: (1) the number of dialogue turns, (2) the number of selected knowledge items, and
Figure 2: Correlation between reference data, baseline predictions, and automatic scores, on the validation set.
(3) the lengths of both the reference and prediction in terms of characters and sentences (Figure 2).
After not finding strong correlations among these parameters, we decide to split the response into two sub-tasks. Namely, we process system responses by separating them into two parts:
summary + [optional] question
These two parts are defined as follows:
* Summary: This part of the response focuses on summarising the selected knowledge items.
* Question: If included, this part of the response pertains to dialogue management, where the system can offer assistance in continuing with the current task, such as making a reservation.
### Summary analysis
We enrich the dataset by adding two types of sentiment scores. Firstly, we calculate a sentiment score using the spaCy library2 across all knowledge items selected for that dialogue. Secondly, we calculate a sentiment score specifically for the summary part of the ground truth responses. Our intuition is that the sentiment of the summary should align with the sentiment of the knowledge items used to generate the response. Our belief is supported by positive correlations observed in all data splits (train=0.41, val=0.53, test=0.54) and overall (all=0.45).
Footnote 2: [https://spacy.io/](https://spacy.io/)
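The correlation itself takes only a few lines once a sentence-level sentiment scorer is available. The sketch below leaves the scorer abstract (the paper uses spaCy) and only illustrates the aggregation; the toy lexicon scorer and data shapes are our own stand-ins for demonstration.

```python
import numpy as np

def knowledge_response_correlation(dialogues, score_fn):
    """Pearson correlation between the mean sentiment of the selected
    knowledge items and the sentiment of the response summary."""
    knowledge = [np.mean([score_fn(k) for k in d["knowledge"]]) for d in dialogues]
    responses = [score_fn(d["summary"]) for d in dialogues]
    return np.corrcoef(knowledge, responses)[0, 1]

# Toy stand-in scorer and data, for illustration only.
toy_score = lambda s: ("great" in s) - ("bad" in s)
dialogues = [
    {"knowledge": ["great room", "great staff"], "summary": "Guests say it is great."},
    {"knowledge": ["bad service", "bad food"], "summary": "Reviews call it bad."},
    {"knowledge": ["great view", "bad wifi"], "summary": "Opinions are mixed."},
]
print(knowledge_response_correlation(dialogues, toy_score))
```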
Furthermore, we examined the data distributions of these new features, as depicted in Figure 3. The average sentiment of the knowledge items and the sentiment of the responses have similar distributions. They both have positive medians, positive interquartile ranges, and a long tail extending into the negative range, indicating that the sentiment of the system's responses tends to be neutral to positive. The average sentiment of the knowledge items is slightly higher than the sentiment of the responses. The standard deviation of the spread of the knowledge sentiment scores inside dialogues is around 0.25, indicating that most knowledge items are not highly polarised, and their sentiment values are relatively close to each other.
### Dialogue management analysis
**Dialogue acts.** We generate a dialogue act tag MIDAS Yu and Yu (2019) for each of the user's last utterances in the dialogue context and explore the baseline performance per act type (Table 1). Our findings reveal that questions are quite frequent, as users often rely on them to interact with the artificial assistant in task-oriented dialogues. It is worth noting that _yes/no questions_ are the most frequent type and typically elicit short system responses, consisting of simple answers like "yes" or "no." Furthermore, we observe that some questions seek opinions, which may require subjective knowledge derived from reviews for an appropriate response, in contrast to factual questions that can potentially be answered using FAQs.
**Optional questions.** We examined how frequently questions appear in the system's responses. Our findings indicate that 34.94% of the responses contain a question at the end.
Figure 4: Question frequency distribution across all data splits. Only the first 100 elements are plotted.
Figure 3: Sentiment distributions across dialogues for knowledge items (left) and responses (right). For knowledge items, we also show the distribution of standard deviation sentiment scores inside dialogues (centre).
When analysing the questions, we observe that the data distribution exhibits a single peak and a long tail (Figure 4). Out of the 2,522 unique questions, 79% (n=1,994) appear only once. The most commonly occurring question is _"Would you like to know more about them?"_, accounting for 19% of the occurrences. Moreover, we notice that the most frequent questions are generic. The top five questions, which make up 42% of the data, include: _"Do you have any other questions?", "Is there anything else I can help you with?", "Would you like to make a reservation?"_, and _"Is there anything else you'd like to know about them?"_.
To evaluate the impact of optional questions, we conducted an ablation study where we remove the questions at the end of the responses (Table 2). This leads to slightly lower scores. Additionally, we experiment with adding the most frequent question (MFQ) to the responses that did not already contain a question. This improves the performance for longer responses.
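A minimal sketch of this split and of the two ablation variants is shown below; the sentence-splitting heuristic and the data format are our own simplifications, not the exact procedure used in the paper.

```python
import re

MFQ = "Would you like to know more about them?"   # most frequent question

def split_response(response):
    """Split a system response into (summary, optional trailing question)."""
    sentences = re.findall(r"[^.!?]+[.!?]", response.strip())
    if sentences and sentences[-1].strip().endswith("?"):
        return "".join(sentences[:-1]).strip(), sentences[-1].strip()
    return response.strip(), None

def without_question(response):
    """Ablation: drop the trailing question, if any."""
    return split_response(response)[0]

def with_mfq(response):
    """Ablation: append the most frequent question when none is present."""
    summary, question = split_response(response)
    return f"{summary} {question or MFQ}".strip()

print(split_response("The rooms are praised as quiet. Would you like to make a reservation?"))
print(with_mfq("Most guests found the breakfast excellent."))
```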
## 4 Method
### Sub-task 2: Knowledge Selection
Given the limited scope of reviews, traveller types and domains, we expand the knowledge base with few-shot data augmentation. Utilising ChatGPT, we synthesise artificial data, supplementing the MultiWoz 2.1 dataset's augmented version Eric et al. (2020). This process involves creating new reviews for existing items within existing domains, in addition to the introduction of three new domains from the DSTC10 data set, all enriched with subjective knowledge. In total, our additional data contains 1,006 extra reviews. We combine these with the existing data for use in both the Knowledge Seeking Turn Detection and Knowledge Selection tasks. For a data example, see Appendix A.
**Extension of item reviews.** The original knowledge data had 143 entities and 1,430 reviews (8,013 sentences) in the _hotel_ and _restaurant_ domains. We added 715 reviews to this by adding five different traveller types and producing five sentences for each type. This led to 25 extra sentences per entity, for a total of 2,145 reviews and 11,609 sentences. We significantly expanded and diversified the present traveller types. The original data had only 6 types; we increased this to 26, though some like _"Group of friends"_ and _"Friends getaway"_ could be merged into the same type. For a complete overview see Table 7. The average sentence length for the original reviews is 73.73 compared to 76.93 for the added reviews. The prompt to generate these reviews can be found in Prompt 4.1
**Extension to new domains.** Inspired by Thulke et al. (2022), we expand our knowledge base by adding reviews from three different domains: _attraction, taxi,_ and _train_.
### Prompt 4.1: extension of new reviews
Please provide new reviews for the **[entity_name]** but for five different travel_type.
Contine the counting of the reviews and make sure the new reviews are in a dict format like this:
"<id>": {{"traveler_type": "<traveler_type", " sentences": {"<id>": "<revieww>", "<id>": "<revieww>", "<id>": "<revieww>", "<id>": "<revieww>", "<id>": "<revieww>"}}},
These are the existing reviews: **[reviews_all]**.
Take this start and continue. Use double quotes to comply with json format.
"10": {{"traveler_type":
\begin{table}
\begin{tabular}{l|c c|c c c c} \hline \hline
**Act** & **Freq.** & **Res. Len.** & **BLEU** & **METER** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline yes\_no\_question & 1078 & 131.85 & 0.097 & 0.187 & 0.368 & 0.151 & 0.292 \\ open\_question\_opinion & 288 & 133.52 & 0.107 & 0.189 & 0.379 & 0.163 & **0.301** \\ open\_question\_factual & 203 & 134.51 & 0.094 & 0.179 & 0.358 & 0.148 & 0.285 \\ statement & 188 & 137.44 & 0.081 & 0.170 & 0.345 & 0.131 & 0.260 \\ command & 184 & 132.82 & 0.087 & 0.182 & 0.355 & 0.141 & 0.280 \\ opinion & 166 & 139.07 & 0.089 & 0.183 & 0.366 & 0.143 & 0.280 \\ pos\_answer & 14 & 135.78 & 0.053 & 0.151 & 0.312 & 0.108 & 0.230 \\ complaint & 5 & 138.20 & **0.110** & 0.160 & 0.333 & 0.122 & 0.277 \\ comment & 1 & 181.00 & 0.038 & 0.084 & 0.196 & 0.081 & 0.117 \\ neg\_answer & 1 & **104.00** & 0.069 & **0.319** & **0.611** & **0.294** & 0.5 \\ nonsense & 1 & 189.00 & 0.014 & 0.089 & 0.184 & 0.0 & 0.123 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data statistics (frequency and average system response length) and baseline performance per dialogue act.
We use the validation data from DSTC103 which contains entities and respective FAQs from these different domains. The data did not contain subjective reviews for any of the domains. Hence, we generate three reviews for each entity by prompting ChatGPT with their entity name and FAQs. Due to the limited context width of the ChatGPT API, we restrict the reviews to two sentences.
Footnote 3: Data downloaded using the following script: [https://tinyurl.com/j8pvcknx](https://tinyurl.com/j8pvcknx)
We generate the reviews with Prompt 4.2, which resulted in \(291\) reviews for the _attraction_ domain.
Although we prepared the data for the _taxi_ and _train_ reviews, we excluded these two domains as there were no transportation-related domains present in the released test set.
### Sub-task 3: Response Generation
#### 4.2.1 Model exploration
The generation baseline provided by the organisers uses BART Lewis et al. (2020), specifically bart-base4, fine-tuned for ten epochs using Adam with a learning rate of 3e-5 and epsilon of 1e-8.
Footnote 4: [https://huggingface.co/facebook/bart-base](https://huggingface.co/facebook/bart-base)
Instead of sticking to the base BART model, we opted to browse publicly available trained models on different tasks and investigate their performance after fine-tuning on the DSTC data. This approach is inspired by _model recycling_ Choshen et al. (2022), which suggests that there exist models that are generally better at being adapted to perform domain-specific tasks. We browsed the HuggingFace Hub5 for models with a better fit (in the task they were trained for or in the data they used for pre-training). Ultimately, we retrained the baseline architecture, as well as Flan-T5 Chung et al. (2022) in its small, base and large variants, and a BART model trained on the SAMSum dataset Gliwa et al. (2019). See Table 3 for the results. For training the larger architectures, including flan-t5-large, bart-large and flan-t5-base, we modified the optimisation parameters (learning rate 4e-05, a warm-up ratio of 0.2). We found that using a larger model has small benefits over the original baseline model in most evaluation metrics. We did not find that training on intermediate tasks that are related to our objective improved performance substantially.
Footnote 5: [https://huggingface.co/models](https://huggingface.co/models)
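For reference, a minimal fine-tuning sketch with the HuggingFace `transformers` Trainer is shown below; only the optimisation settings (ten epochs, learning rate 4e-05, warm-up ratio 0.2 for the larger models) come from the text, while the batch size and the dataset objects are assumptions.

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

def finetune(model_name: str, train_ds, val_ds, output_dir: str = "rg-model"):
    """Fine-tune a seq2seq model on (already tokenised) response-generation data.

    train_ds / val_ds are assumed to be datasets with "input_ids" and "labels"
    built from the dialogue context plus the selected knowledge.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        num_train_epochs=10,
        learning_rate=4e-5,              # setting used for the larger models
        warmup_ratio=0.2,
        per_device_train_batch_size=4,   # assumption, not stated in the text
        evaluation_strategy="epoch",
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_ds,
        eval_dataset=val_ds,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    return trainer

# e.g. finetune("google/flan-t5-large", train_ds, val_ds)
```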
#### 4.2.2 Prompting
We used OpenAI's APIs to generate system responses given the baseline output for sub-tasks 1 and 2. To ensure consistent results, we set the temperature value to 0 for all configurations.
Footnote 6: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)
#### Model comparison

To compare GPT-3 and ChatGPT, we conducted experiments using simple prompts while adhering to the respective API requirements. The prompts consisted of two components: a) the dialogue history and b) the selected subjective knowledge. The subjective knowledge was formatted as a list, explicitly specifying whether each item was a **Review** or an **FAQ**.
{knowledge_type}: {text}
The initial prompt for GPT-3 is shown in Prompt 4.3
```
{knowledge_type}:{text}
```
**Prompt 4.3: GPT-3, initial prompt**
ChatGPT requires API requests to be given as a series of messages with a "role" that can have three values: _System_, _User_ or _Assistant_. As they explain in their documentation, _System_ messages can be used to give instructions to the model, while _User_ and _Assistant_ messages can be used to carry on a dialogue with the model. As such, we use the _System_ messages to give the subjective knowledge to the model and then send each of the dialogue history messages as either _User_ or _Assistant_ messages. The initial prompt for ChatGPT is shown in Prompt 4.4.
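A minimal sketch of this message layout with the pre-1.0 `openai` SDK follows; the helper names and the speaker encoding of the dialogue history are assumptions, and the sketch is not a reproduction of Prompt 4.4.

```python
import openai  # pre-1.0 openai SDK assumed; openai.api_key must be set

def build_messages(knowledge_items, dialogue_history):
    """knowledge_items: list of (kind, text) with kind in {"Review", "FAQ"};
    dialogue_history: list of (speaker, utterance) with speaker "U" or "S"."""
    knowledge = "\n".join(f"{kind}: {text}" for kind, text in knowledge_items)
    messages = [{"role": "system", "content": knowledge}]
    for speaker, utterance in dialogue_history:
        role = "user" if speaker == "U" else "assistant"
        messages.append({"role": role, "content": utterance})
    return messages

def chatgpt_response(knowledge_items, dialogue_history):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(knowledge_items, dialogue_history),
        temperature=0,  # fixed to 0 for all configurations
    )
    return resp["choices"][0]["message"]["content"]
```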
Although ChatGPT is optimised for dialogue and we expected it to perform better, it scored lower than GPT-3 in the automatic metric evaluation, as seen in Table 4. Upon further examination of the generated responses, we observed that ChatGPT tends to produce much longer responses. In fact, 367 API responses were truncated due to their length. On the other hand, GPT-3 generates more concise responses compared to both the baseline and the ground truth (Table 5). This difference in response length has a negative impact on most natural language generation metrics, as they tend to penalise longer and wordier responses.
#### Chain Of Thought

We focus on optimising the prompts for the best performance of a single model, specifically ChatGPT. To achieve this, we adopt recommended techniques from the OpenAI Cookbook for prompting ChatGPT, such as Chain of Thought (CoT) (Wei et al., 2022). For this, we divide the task into two sub-tasks: summarisation and dialogue management (see Section 3). The resulting prompt is shown in Prompt 4.5.
Footnote 7: [https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md](https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md)
To accommodate CoT answers, which tend to be longer, we double the value of the "max_tokens"
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model name** & **BLEU** & **METEOR** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline ChatGPT & 0.041 & 0.147 & 0.280 & 0.085 & 0.207 \\ GPT-3 & 0.051 & 0.137 & 0.302 & 0.105 & 0.237 \\ ChatGPT + CoT & 0.050 & 0.155 & 0.305 & 0.099 & 0.228 \\ ChatGPT + CoT + 3 examples & **0.069** & **0.161** & **0.325** & **0.117** & **0.243** \\ Waterfall & 0.056 & 0.152 & 0.312 & 0.100 & 0.236 \\ Waterfall + 3 examples & 0.057 & 0.160 & 0.311 & 0.103 & 0.225 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Prompting settings explored, shown performance is on the validation set.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model name** & **BLEU** & **METEOR** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline \hline \multicolumn{1}{l}{bart-large-cnn-samsum} & 0.095 & **0.184** & **0.363** & 0.144 & 0.278 \\ \multicolumn{1}{l}{bart-base} & 0.096 & 0.173 & 0.347 & 0.139 & 0.271 \\ \multicolumn{1}{l}{flan-t5-base} & 0.096 & 0.173 & 0.345 & 0.140 & 0.274 \\ \multicolumn{1}{l}{flan-t5-large} & **0.104** & 0.179 & 0.360 & **0.147** & **0.281** \\ \multicolumn{1}{l}{flan-t5-small} & 0.098 & 0.164 & 0.337 & 0.136 & 0.272 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Models explored, shown performance is on the test set.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Approach** & \multicolumn{2}{c}{**\#Sentences**} & \multicolumn{2}{c}{**\#Characters**} \\ & **Avg.** & **Std.** & **Avg.** & **Std.** \\ \hline GPT-3 & 1.55 & 0.77 & 120.04 & 70.16 \\ Ground truth (val) & 1.69 & 0.66 & 133.55 & 32.05 \\ Ground truth (test) & 1.70 & 0.66 & 135.55 & 32.12 \\ Ground truth (train) & 1.71 & 0.65 & 136.61 & 33.49 \\ Baseline (val) & 1.91 & 0.53 & 129.66 & 29.07 \\ ChatGPT & 2.09 & 0.79 & 182.62 & 74.61 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Length statistics on responses, ordered by ascending average length
parameter. This modification leads to improved performance (refer to Table 4); however, it also increases the number of truncated API responses to 1,110.
**Prompt 4.5: ChatGPT, Chain Of Thought**
You are assisting a user. Create a response for the user, using the following procedure:
_(1) First, summarise the available knowledge into a couple sentences._
_(2) Then, create a short follow-up question given the dialogue history._
_(3) Create the final response to the user as <summary> <follow-up>_
Knowledge:
**[knowledge]**
Dialogue history:
**[dialogue_context]**
Solution:
_(1) summary_;
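A sketch of how Prompt 4.5 can be filled and sent is given below; the base `max_tokens` value is an assumption, with only the doubling taken from the text.

```python
import openai  # pre-1.0 openai SDK assumed; openai.api_key must be set

COT_TEMPLATE = """You are assisting a user. Create a response for the user, using the following procedure:
(1) First, summarise the available knowledge into a couple sentences.
(2) Then, create a short follow-up question given the dialogue history.
(3) Create the final response to the user as <summary> <follow-up>

Knowledge:
{knowledge}

Dialogue history:
{dialogue_context}

Solution:
(1) summary:"""

def cot_response(knowledge: str, dialogue_context: str,
                 base_max_tokens: int = 128) -> str:
    prompt = COT_TEMPLATE.format(knowledge=knowledge,
                                 dialogue_context=dialogue_context)
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=2 * base_max_tokens,  # doubled to accommodate longer CoT answers
    )
    return resp["choices"][0]["message"]["content"]
```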
#### Few-shot learning
In order to further enhance the prompt, we incorporate few-shot learning [1]. After initial exploration, we select three examples that pose the most difficulty. To identify these examples, we analyse instances with the lowest performance and examine their data trends. We observe that ChatGPT struggles with dialogues that involve a high number of items and have a predominantly negative sentiment. Consequently, we select dialogues from the training dataset that meet the following criteria:
1. Filter by the number of knowledge items that are "higher than usual", between [mean+std, mean+2std]
2. Filter by the average sentiment that is "lower than usual", between [mean-2std, mean-std]
3. Since around one-third of responses contain a question, we sample accordingly
By applying these selection criteria, we focus on dialogues that require better summarisation due to the increased number of knowledge items with slightly negative sentiment, and a mix of dialogues with and without a question. This setting yields the best performance and reduces the number of truncated API responses to 380.
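A sketch of this selection filter is shown below; the per-dialogue fields (`n_knowledge`, `avg_sentiment`, `has_question`) are assumed to be pre-computed, and only the [mean±std] windows and the question-based sampling follow the criteria above.

```python
import numpy as np

def select_fewshot_examples(dialogues, n_examples=3):
    """dialogues: list of dicts with pre-computed 'n_knowledge',
    'avg_sentiment' and 'has_question' fields (assumed)."""
    n_items = np.array([d["n_knowledge"] for d in dialogues])
    sentiment = np.array([d["avg_sentiment"] for d in dialogues])

    # "higher than usual" number of knowledge items: [mean+std, mean+2*std]
    hi_items = (n_items >= n_items.mean() + n_items.std()) & \
               (n_items <= n_items.mean() + 2 * n_items.std())
    # "lower than usual" average sentiment: [mean-2*std, mean-std]
    lo_sent = (sentiment >= sentiment.mean() - 2 * sentiment.std()) & \
              (sentiment <= sentiment.mean() - sentiment.std())

    pool = [d for d, keep in zip(dialogues, hi_items & lo_sent) if keep]
    # roughly one third of gold responses contain a question, so sample accordingly
    with_q = [d for d in pool if d["has_question"]]
    without_q = [d for d in pool if not d["has_question"]]
    return with_q[:1] + without_q[:n_examples - 1]
```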
#### Waterfall prompting
Additionally, we explore combining prompts and models by utilising the output of one prompted model as part of the input prompt for a subsequent model, in a waterfall manner. We use GPT-3 for summarisation since it behaves better at this part of the task [1]. We also separate the FAQs, containing factual information, from the reviews, which have opinionated information. The prompt we use for this task is shown in Prompt 4.6.
**Prompt 4.6: GPT-3, Summarisation**
Summarize the following into one or two sentences
max:
FAQs:
**[faqs]**
Reviews:
**[reviews]**
We then use ChatGPT to generate the system response in dialogue, using the output of GPT-3 as part of the prompt. The final prompt is shown in Prompt 4.7. Interestingly, we observed a decline in performance when using this waterfall approach compared to a single model setting.
**Prompt 4.7: ChatGPT, Final prompt**
You are assisting a user. Create a response for the user, using the following procedure:
_(1) First, summarise the available knowledge into a couple sentences._
_(2) Then, create a short follow-up question given the dialogue history._
_(3) Create a final brief response to the user as <summary> <follow-up>_
Knowledge:
**[knowledge]**
Dialogue history:
**[dialogue_context]**
Solution:
_(1) summary:_
**[GPT-3 summary]**
_(2) follow-up_:
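Putting the two prompts together, a sketch of the waterfall pipeline could look as follows; the GPT-3 model name and the token limits are assumptions, while the prompt strings mirror Prompts 4.6 and 4.7 above.

```python
import openai  # pre-1.0 openai SDK assumed; openai.api_key must be set

def summarise_knowledge(faqs: str, reviews: str) -> str:
    """Step 1: GPT-3 summarises FAQs and reviews (Prompt 4.6)."""
    prompt = ("Summarize the following into one or two sentences max:\n"
              f"FAQs:\n{faqs}\nReviews:\n{reviews}")
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    temperature=0, max_tokens=128)
    return resp["choices"][0]["text"].strip()

def waterfall_response(knowledge: str, dialogue_context: str,
                       faqs: str, reviews: str) -> str:
    """Step 2: ChatGPT produces the final response (Prompt 4.7), seeded
    with the GPT-3 summary."""
    summary = summarise_knowledge(faqs, reviews)
    final_prompt = (
        "You are assisting a user. Create a response for the user, using the following procedure:\n"
        "(1) First, summarise the available knowledge into a couple sentences.\n"
        "(2) Then, create a short follow-up question given the dialogue history.\n"
        "(3) Create a final brief response to the user as <summary> <follow-up>\n"
        f"Knowledge:\n{knowledge}\n"
        f"Dialogue history:\n{dialogue_context}\n"
        f"Solution:\n(1) summary:\n{summary}\n(2) follow-up:"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": final_prompt}],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"]
```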
## 5 Results
Table 6 shows the submitted approaches' results for all three sub-tasks. We ranked as the 8th best team; hence we did not get official human evaluation results on our submissions. Two of our approaches perform better than the baseline on automatic evaluation metrics, and our identifier in the official results is Team 10.
In the sub-task of **Knowledge Seeking Turn Detection**, our added knowledge induced a slight decrement in performance, with precision declining
from 0.998 to 0.995 and the F1 score from 0.998 to 0.997, although recall experienced a marginal increase (refer to Table 6 for further details).
Conversely, **Knowledge Selection** saw an improvement with the augmentation, resulting in a small uplift across all metrics: precision from 0.790 to 0.796, recall from 0.788 to 0.794, F1 from 0.789 to 0.795, and Exact Match from 0.91 to 0.418.
We submitted three different approaches for the **Response Generation** task. (1) Our first submission is based on the model exploration mentioned in 4.2.1. After fine-tuning flan-t5-large this approach outperforms the baseline on all metrics, leading to a mean reciprocal rank (MRR) of 0.068 compared to 0.039 of the baseline. On BLEU this approach ranked 7th out of 49 submissions. (2) Based on our data analysis (3.3), we extended this approach in post-processing with the most frequently appearing question from the training set appended to the end of the review summaries. This approach, although underperforming relative to our first except in terms of METEOR, still surpassed the baseline on all metrics, apart from BLEU, with an MRR of 0.045. (3) Despite lower performance relative to the baseline in quantitative metrics, our waterfall prompting approach involving a combination of ChatGPT and GPT-3 gave qualitatively promising results. Presented with non-matching knowledge items it did not generate a summary, but flagged them as not relevant to the questions at hand. A full human evaluation would be an interesting step to investigate the perceived accuracy and appropriateness of such responses.
### Submissions
#### CLTeamL-0

Baseline with augmented data for Knowledge-seeking Turn Detection and Knowledge Selection, flan-t5-large for Response Generation.
#### CLTeamL-1

Baseline with augmented data for Knowledge-seeking Turn Detection and Knowledge Selection, flan-t5-large for Response Generation. Additionally, we do post-processing on the output, adding the most frequent question from the training set as a follow-up question to each review summary (see the sketch below).
#### CLTeamL-2

Baseline for Knowledge-seeking Turn Detection and Knowledge Selection. For the Response Generation, we prompted ChatGPT with Chain Of Thought instructions, to first generate a summary of the knowledge and then append a follow-up question given the dialogue history. We provided three examples, selected from the most challenging cases in the training data according to the data trends on the validation set. Unfortunately, due to server problems, the data samples after 4500 are lacking a review summary. A full run (see Approach 2b in Table 6) after submission showed slight improvements across all metrics but still did not outperform the baseline or our other approaches.
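For CLTeamL-1, a sketch of the post-processing step is shown below; the question-extraction heuristic and the field layout are assumptions.

```python
import re
from collections import Counter

def most_frequent_question(train_responses):
    """Return the question that occurs most often in the training responses."""
    questions = []
    for resp in train_responses:
        questions += [q.strip() for q in re.findall(r"[^.!?]*\?", resp)]
    return Counter(questions).most_common(1)[0][0] if questions else ""

def postprocess(review_summaries, train_responses):
    """Append the most frequent training question to each review summary."""
    follow_up = most_frequent_question(train_responses)
    return [s.rstrip() + " " + follow_up for s in review_summaries]
```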
## 6 Conclusion
In conclusion, our work on the Task-Oriented Conversational Modeling with Subjective Knowledge task produced three key contributions: Firstly, our detailed data analysis could serve as a basis for future dataset scoping within this domain. Secondly, we significantly expanded the knowledge dataset size utilising few-shot data augmentation. Lastly, our most successful model was a fusion of the baseline with augmented data and flan-t5-large. Our waterfall prompting approach incorporating a blend of Large Language Models demonstrated lower metrics compared to the baseline, but upon a qualitative assessment, the results were deemed satisfactory, although the absence of official human evaluation impedes a definitive judgement.
A preliminary qualitative analysis shows that ChatGPT can spot mistakes during knowledge selection. Thus, future work could explore incorporating an initial step where ChatGPT is employed to evaluate the relevance of the selected knowledge items. Additionally, we plan to experiment with summarising positive and negative reviews separately before creating a consolidated summary. This method may enhance performance when dealing with polarised reviews.
## Limitations
The APIs provided by OpenAI come with particular length parameters which pose certain constraints in terms of flexibility and generalisability to larger datasets and longer sequences which are prevalent in knowledge-grounded dialogue.
There's also a financial aspect to consider, as the use of these APIs involves a monetary cost. We chose to proceed with ChatGPT instead of GPT-3 for parts of our research, even though preliminary explorations indicated higher performance by the latter, due to the additional cost associated with the use of GPT-3. Hence, there might be potential
performance enhancements our approaches were not able to capture due to this cost-based decision.
## Ethics Statement
We artificially increased the size of our knowledge data set with few-shot data augmentation to cover a broader range of traveller types and domains. The synthetic reviews were created using GPT-3 and ChatGPT. There exists a potential for the introduction of biases in this newly generated data due to inherent model biases that might have been propagated into the reviews. This is especially concerning for commercial LLMs, like the ones used, since their training data is not made public.
## Acknowledgements
This research was funded by the Vrije Universiteit Amsterdam and the Netherlands Organisation for Scientific Research (NWO) through the _Hybrid Intelligence Centre_ via the Zwaartekracht grant (024.004.022), and the _Spinoza_ grant (SPI 63-260) awarded to Piek Vossen.
|
2307.07274 | On Almost Regular Mappings | Ioffe's criterion and various reformulations of it have become a standard
tool in proving theorems guaranteeing various regularity properties such as
metric regularity, i.e., the openness with a linear rate around the reference
point, of a (set-valued) mapping. We derive an analogue of it guaranteeing the
almost openness with a linear rate of mappings acting in incomplete spaces and
having non-closed graphs, in general. The main tool used is an approximate
version of Ekeland's variational principle for a function that is not
necessarily lower semi-continuous and is defined on an abstract (possibly
incomplete) space. Further, we focus on the stability of this property under
additive single-valued and set-valued perturbations. | Radek Cibulka | 2023-07-14T11:00:51Z | http://arxiv.org/abs/2307.07274v2 | # On almost regular mappings
###### Abstract.
Ioffe's criterion and various reformulations of it have become a standard tool in proving theorems guaranteeing various regularity properties such as metric regularity, i.e., the openness with a linear rate around the reference point, of a (set-valued) mapping. We derive an analogue of it guaranteeing the almost openness with a linear rate of mappings acting in incomplete spaces and having non-closed graphs, in general. The main tool used is an approximate version of Ekeland's variational principle for a function that is not necessarily lower semi-continuous and is defined on an abstract (possibly incomplete) space. Further, we focus on the stability of this property under additive single-valued and set-valued perturbations.
Key words and phrases: Ekeland variational principle, metric regularity, almost openness, Ioffe criterion, perturbation stability. 2020 Mathematics Subject Classification: 47J05, 47J07, 49J52, 49J53, 58C15. The author was supported by the grant of GACR no. 20-11164L.
## 1. Introduction
Ekeland's variational principle (EVP) plays a fundamental role in modern nonlinear analysis and was generalized by many authors in various ways. Although our notation is fairly standard, in case of any difficulty, the reader is encouraged to consult the end of this introduction.
**Theorem 1.1**.: _Let \((X,d)\) be a complete metric space. Consider a point \(x\in X\) and a lower semi-continuous function \(\varphi:X\to[0,\infty]\) such that \(0<\varphi(x)<\infty\). Then there exists a point \(u\in X\) such that \(\varphi(u)+d(u,x)\leq\varphi(x)\), hence \(d(u,x)\leq\varphi(x)-\inf\varphi(X)\), and_
\[\varphi(u^{\prime})+d(u^{\prime},u)\geq\varphi(u)\quad\text{for every}\quad u ^{\prime}\in X. \tag{1}\]
This formulation of the EVP, which is enough for applications in regularity theory, will often be referred to as a _weak form_ of the principle, while the full version of it means that the inequality in (1) is strict provided that \(u^{\prime}\neq u\). We focus on the setting of the so-called extended quasi-metric spaces [12], which allows us to cover and unify the results by S. Cobzas in quasi-metric spaces [13]; a directional version of the principle in Banach spaces by M. Durea, M. Pantiruc, and R. Strugariu [15]; and a statement by L.P. Hai and P.Q. Khanh in partial metric spaces [18]. We emphasize that we leave aside many important works
not having a tight connection with our investigation. In the second section, we establish an approximate version of the EVP without the assumption that the domain space is complete and that the function under consideration is lower semi-continuous. We also show that under these two additional assumptions our statement implies the (weak form of the) principles mentioned above. As noted in [4], a quasi-metric (instead of metric) structure of the topological space of arguments is one of the absolutely mandatory requirements for an appropriate extension of the EVP for its possible application to the psychological model considered therein.
Metric regularity, linear openness, and Aubin/pseudo-Lipschitz property of the inverse of a given set-valued mapping \(F:X\rightrightarrows Y\) from a metric space \((X,d)\) into another metric space \((Y,\varrho)\) are three _equivalent_ properties playing a fundamental role in modern variational analysis [3, 6, 14, 20, 22, 25]. Let us consider the problem:
\[\text{Given }y\in Y\text{ find an }x\in X\text{ such that }F(x)\ni y. \tag{2}\]
We focus on a weaker version of the second mentioned property called _almost openness with a linear rate_ guaranteeing that there is a \(u\in X\) such that the distance \(\operatorname{dist}(y,F(u))\), often called a _residual_, is arbitrarily small and the inverse \(F^{-1}\) of \(F\) verifies a kind of Lipschitz continuity, that is, (2) admits an approximate solution and the distance between \(u\) and the solution set can be estimated by a multiple of the residual.
Ioffe's criteria provide transparent proofs of regularity statements from the literature, e.g., see [17, 20, 9]. Theorem 1.1 implies the following example of such a statement, which can be traced back to [17]:
**Proposition 1.2**.: _Let \((X,d)\) be a complete metric space and \((Y,\varrho)\) be a metric space. Consider \(c>0\) and \(r>0\), a point \(\bar{x}\in X\), and a continuous mapping \(g:X\to Y\) defined on the whole \(X\). Assume that for each \(u\in B_{X}(\bar{x},r)\) and each \(y\in B_{Y}(g(\bar{x}),cr)\), with \(0<\varrho(g(u),y)<cr\), there is a point \(u^{\prime}\in X\) satisfying_
\[c\,d(u,u^{\prime})<\varrho(g(u),y)-\varrho(g(u^{\prime}),y). \tag{3}\]
_Then \(B_{Y}(g(x),ct)\cap B_{Y}(g(\bar{x}),cr)\subset g\big{(}B_{X}(x,t)\big{)}\) for each \(x\in B_{X}(\bar{x},r)\) and each \(t\in\big{(}0,r-d(x,\bar{x})\big{)}\)._
In the fourth section, we show that sufficient conditions for almost regularity of a set-valued mapping \(F\) require none of the following commonly used assumptions playing the role of completeness/continuity from the above statement:
1. (S\({}_{1}\)) the graph of \(F\) is (locally) complete;
2. (S\({}_{2}\)) the domain space \(X\) is complete and the graph of \(F\) is (locally) closed.
We do not employ any additional tools such as lower semi-continuous envelopes, slope-based conditions, completion of the spaces and the graph of the mapping under consideration, etc., see [11]. This essentially simplifies the proofs. Moreover, if either (S\({}_{1}\)) or (S\({}_{2}\)) holds true then we obtain full regularity by using [20, Proposition 2.40].
In the fifth section, we investigate the stability of almost regularity around the reference point with respect to additive single-valued and set-valued perturbations. We make the presentation as close as possible to [11]. Despite the similarity to the case of full (semi/sub)regularity [20, 11], the results presented are new and require neither (S\({}_{1}\)) nor (S\({}_{2}\)). In particular, we cover (and slightly generalize) a global statement from [2].
In the sixth section, we gather several probably well-known results on almost openness of linear and affine mappings scattered in the literature. These (simpler) mappings can be used to approximate the non-linear ones. Proposition 6.5 seems to be new and may be of independent interest.
In order to keep the consideration as clear as possible, we do not state many results in full generality. In the last section, we comment on possible extensions of the statements presented. Our work seems to provide a theoretical background for several application fields. For example, the (inexact non-smooth) Newton-type methods require finding an "approximate" solution to the linear (sub)problem at each step; hence almost (sub/semi)regularity seems to be able to replace (sub/semi)regularity in the corresponding convergence results, cf. [10, Theorem 5.1]. In control theory, the constrained (approximate) controllability of a non-linear dynamic system can be deduced from the same property of the appropriate (linear) approximation of it. Roughly speaking, this corresponds to the statements guaranteeing the perturbation stability of (almost) openness applied to the reachability operators of the dynamic systems, which assign to a control the final state of the corresponding system, e.g. see [7, Theorem 11] along with comments and references therein. All this goes beyond the scope of the current work and is left for future consideration.
### Notation and terminology
The notation \(a:=b\) or \(b=:a\) means that \(a\) is defined by \(b\). We use the convention that \(\inf\emptyset:=\infty\); as we work with non-negative quantities that \(\sup\emptyset:=0\); and that \(0\cdot\infty=1\). If a set is a singleton we identify it with its only element, that is, we write \(a\) instead of \(\{a\}\). A subset \(C\) of a vector space \(X\) is said to be _symmetric_ if \(-C=C\); _circled_ if \(\lambda C\subset C\) for each \(\lambda\in[-1,1]\); _absorbing_ if for every \(x\in X\) there is a \(\delta>0\)
such that \(\lambda x\in C\) for each \(\lambda\in(-\delta,\delta)\). A convex set is circled if and only if it is symmetric. The _core_ of \(C\) and the _cone generated by \(C\)_ are the sets \(\operatorname{core}C:=\{x\in C:\,C-x\text{ is absorbing}\}\) and \(\operatorname{cone}C:=\bigcup_{t\geq 0}tC\). In a metric space \((X,d)\), by \(B_{X}[x,r]\) and \(B_{X}(x,r)\) we denote, respectively, the closed and the open ball centered at an \(x\in X\) with a radius \(r\in[0,\infty]\); in particular, \(B_{X}(x,0)=\emptyset\), \(B_{X}[x,0]=\{x\}\), and \(B_{X}[x,\infty]=B_{X}(x,\infty)=X\); the _distance from a point_\(x\in X\) to a set \(C\subset X\) is \(\operatorname{dist}\,(x,C):=\inf\{d(x,u):\ u\in C\}\); the _diameter_ of \(C\) is \(\operatorname{diam}C:=\sup\{d(u,x):\ u,x\in C\}\); and \(\overline{C}\) denotes the _closure_ of \(C\). The _closed unit ball_ and the _unit sphere_ in a normed space \(X\) are denoted by \(B_{X}\) and \(\mathbb{S}_{X}\), respectively. The _domain_ of an extended real-valued function \(\varphi:X\to\mathbb{R}\cup\{\infty\}\) is the set \(\operatorname{dom}\varphi:=\{x\in X:\ \varphi(x)<\infty\}\).
By \(f:X\to Y\) we denote a single-valued mapping \(f\) acting from a non-empty set \(X\) into another non-empty set \(Y\); while \(F:X\rightrightarrows Y\) denotes a mapping from \(X\) into \(Y\) which may be set-valued. The set \(\operatorname{dom}F:=\{x\in X:\ F(x)\neq\emptyset\}\) is the _domain_ of \(F\), the _graph_ of \(F\) is the set \(\operatorname{gph}F:=\{(x,y)\in X\times Y:\ y\in F(x)\}\), and the _inverse_ of \(F\) is the mapping \(Y\ni y\longmapsto\{x\in X:\ y\in F(x)\}=:F^{-1}(y)\subset X\); thus \(F^{-1}:Y\rightrightarrows X\). Given a subset \(M\) of \(X\), the _image_ of \(M\) under \(F\) is the set \(F(M):=\bigcup_{x\in M}F(x)\). Let \((X,d)\) be a metric space, let \((Y,\varrho)\) be a linear metric space, let \(U\times V\subset X\times Y\) be non-empty, and let \((\bar{x},\bar{y})\in\operatorname{gph}F\). The mapping \(F\) is said to have _Aubin property on \(U\times V\) with a constant \(\ell>0\)_ if
\[F(u)\cap V\subset F(u^{\prime})+B_{Y}[0,\ell\,d(u,u^{\prime})]\quad\text{for each}\quad u,u^{\prime}\in U;\]
to have _Aubin property around \((\bar{x},\bar{y})\) with a constant \(\ell>0\)_ if there is a neighborhood \(U\times V\) of \((\bar{x},\bar{y})\) in \(X\times Y\), equipped with the product (box) topology, such that \(F\) has Aubin property on \(U\times V\) with the constant \(\ell\); and to be _Hausdorff-Lipschitz on \(U\) with a constant \(\ell>0\)_, if \(F\) has Aubin property on \(U\times Y\) with the constant \(\ell\). The _Lipschitz modulus_ of \(F\) around \((\bar{x},\bar{y})\), denoted by \(\operatorname{lip}F(\bar{x},\bar{y})\), is the infimum of the set of all \(\ell>0\) such that \(F\) has Aubin property around \((\bar{x},\bar{y})\) with the constant \(\ell\). In particular, for a mapping \(f:X\to Y\), defined in a vicinity of \(\bar{x}\), we have \(\bar{y}=f(\bar{x})\) and we simply write \(\operatorname{lip}f(\bar{x})\); for each \(\ell>\operatorname{lip}f(\bar{x})\), if there is any, \(f\) is _Lipschitz continuous around_\(\bar{x}\) with the constant \(\ell\), i.e., there is an \(r>0\) such that
\[\varrho(f(u),f(x))\leq\ell\,d(u,x)\quad\text{for each}\quad u,x\in B_{X}(\bar{x},r). \tag{4}\]
By \(\mathcal{BL}(X,Y)\) we denote the (vector) space of all bounded linear operators \(A\) from a normed space \((X,\|\cdot\|)\) into another normed space \((Y,\|\cdot\|)\); and \(X^{*}:=\mathcal{BL}(X,\mathbb{R})\).
## 2. Variational principles of Ekeland type
Since the terminology concerning the extended quasi-metric spaces is not unified we prefer to postulate several properties of a non-negative function \(\eta\) defined
on \(X\times X\), where \(X\) is a non-empty set; instead of giving particular names to the pair \((X,\eta)\) when \(\eta\) obeys (some of) these properties.
**Definition 2.1**.: _Consider a non-empty set \(X\) and a function \(\eta:X\times X\to[0,\infty]\). We say that \(\eta\) satisfies assumption_
1. _provided that_ \(\eta(x,x)=0\) _for each_ \(x\in X\)_;_
2. _provided that_ \(\eta(u,x)\leq\eta(u,z)+\eta(z,x)\) _whenever_ \(u\)_,_ \(x\)_,_ \(z\in X\)_;_
3. _provided that_ \(\eta(u,x)>0\) _whenever_ \(u\)_,_ \(x\in X\) _are distinct;_
4. _provided that for each sequence_ \((u_{k})_{k\in\mathbb{N}}\) _in_ \(X\) _such that for each_ \(\varepsilon>0\) _there is an index_ \(k_{0}=k_{0}(\varepsilon)\) _such that for each_ \(j\)_,_ \(k\in\mathbb{N}\)_, with_ \(k_{0}\leq k<j\)_, we have_ \(\eta(u_{j},u_{k})<\varepsilon\)_; there is a point_ \(u\in X\) _such that_ \(\eta(u,u_{k})\to 0\) _as_ \(k\to\infty\)_._
_The conjugate of \(\eta\) is the function \(\eta^{*}:X\times X\to[0,\infty]\) defined by \(\eta^{*}(x,u):=\eta(u,x)\), \(x\), \(u\in X\)._
We will employ the following easy and well-known fact, cf. [13, 18].
**Remark 2.2**.: Let \(X\) be a non-empty set and let \(\eta:X\times X\to[0,\infty]\) satisfy \((\mathcal{A}_{2})\). Consider points \(u\), \(x\in X\) and a sequence \((u_{k})_{k\in\mathbb{N}}\) in \(X\) such that \(\eta(u,u_{k})\to 0\) as \(k\to\infty\). Then \(\liminf_{k\to\infty}\eta(u_{k},x)=\liminf_{k\to\infty}\big{(}\eta(u_{k},x)+ \eta(u,u_{k})\big{)}\geq\eta(u,x)\) and \(\limsup_{k\to\infty}\eta(x,u_{k})\leq\lim_{k\to\infty}\big{(}\eta(x,u)+\eta(u, u_{k})\big{)}=\eta(x,u)\). If, in addition, \(\eta=\eta^{*}\), then the previous estimates imply that \(\eta(x,u)\geq\limsup_{k\to\infty}\eta(x,u_{k})\geq\liminf_{k\to\infty}\eta(x, u_{k})=\liminf_{k\to\infty}\eta(u_{k},x)\geq\eta(u,x)=\eta(x,u)\); hence \(\eta(x,u_{k})\to\eta(x,u)\) as \(k\to\infty\).
Consider a non-empty set \(X\) and a function \(\eta:X\times X\to[0,\infty]\) satisfying \((\mathcal{A}_{1})\) and \((\mathcal{A}_{2})\). Given \(x\in X\) and \(r>0\), we define sets
\[B^{\eta}_{X}(x,r):=\{u\in X:\eta(x,u)<r\}\quad\text{and}\quad B^{\eta}_{X}[x,r ]:=\{u\in X:\eta(x,u)\leq r\}.\]
We define the topology \(\tau_{\eta}\) on \(X\) starting from the family \(\mathcal{V}_{\eta}(x)\) of neighborhoods of an arbitrary point \(x\in X\):
\[V\in\mathcal{V}_{\eta}(x)\quad\Longleftrightarrow\quad V\supset B^{\eta}_{X} (x,r)\quad\text{for some}\quad r>0.\]
The convergence of a sequence \((x_{k})_{k\in\mathbb{N}}\) to an \(x\in X\) in \((X,\tau_{\eta})\) is characterized as follows: \(\lim_{k\to\infty}x_{k}=x\) if and only if \(\eta(x,x_{k})\to 0\) as \(k\to\infty\). Note that \(B^{\eta}_{X}(x,r)\) is \(\tau_{\eta}\)-open and \(B^{\eta}_{X}[x,r]\) is \(\tau_{\eta^{*}}\)-closed but not necessarily \(\tau_{\eta}\)-closed. By Remark 2.2, for each \(x\in X\) the mapping \(X\ni u\longmapsto\eta(u,x)\) is \(\tau_{\eta}\)-lower semi-continuous while the mapping \(X\ni u\longmapsto\eta(x,u)\) is \(\tau_{\eta}\)-upper semi-continuous.
Of course, any (positive multiple of a) metric \(d\) on \(X\) satisfies \((\mathcal{A}_{1})-(\mathcal{A}_{3})\); while \((\mathcal{A}_{4})\) holds true when \((X,d)\) is a complete space. In this case, we omit the superscript in the definition of the "balls" above, that is, we write \(B_{X}(x,r)\) and \(B_{X}[x,r]\), respectively. Let us present two less trivial examples [8, 15, 18].
**Example 2.3**.: Let \((X,\|\cdot\|)\) be a Banach space. Given a non-empty set \(L\subset\mathbb{S}_{X}\), the _directional minimal time function with respect to \(L\)_ is the function
\[X\times X\ni(x,u)\longmapsto T_{L}(x,u):=\inf\left\{t\geq 0:u-x\in tL\right\}. \tag{5}\]
Clearly, for each \(u\), \(x\in X\), if \(T_{L}(x,u)<\infty\) (which is equivalent to \(u-x\in\operatorname{cone}L\)), then
\[T_{-L}(u,x)=T_{L}(x,u)=\|u-x\|.\]
Hence \(T_{L}=(T_{-L})^{*}\) and \(B_{X}^{T_{L}}(x,r)=B_{X}(x,r)\cap(x+\operatorname{cone}L)\) for each \(x\in X\) and \(r>0\). Clearly, \(T_{L}\) satisfies \((\mathcal{A}_{1})\) and \((\mathcal{A}_{3})\). If \(\operatorname{cone}L\) is convex, then \(T_{L}\) also satisfies \((\mathcal{A}_{2})\). If \(L\) is closed (thus so is \(\operatorname{cone}L\)), then the function \(T_{L}\) is lower semi-continuous and satisfies \((\mathcal{A}_{4})\).
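To fix ideas, here is a simple instance (an illustration of ours, not taken from the sources cited above). Let \(X:=\mathbb{R}^{2}\) with the Euclidean norm and \(L:=\{e_{1}\}\), where \(e_{1}:=(1,0)\in\mathbb{S}_{X}\). Then \(\operatorname{cone}L=\{s\,e_{1}:\ s\geq 0\}\) is closed and convex, and
\[T_{L}(x,u)=\begin{cases}\|u-x\|&\text{if }u-x=s\,e_{1}\text{ for some }s\geq 0,\\ \infty&\text{otherwise},\end{cases}\qquad x,\,u\in X.\]
Consequently, \(B_{X}^{T_{L}}(x,r)=\{x+s\,e_{1}:\ 0\leq s<r\}\) is a half-open segment rather than a ball, and \(T_{L}\) satisfies \((\mathcal{A}_{1})-(\mathcal{A}_{4})\) although it is neither symmetric nor finite-valued.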
**Example 2.4**.: Let \(X\) be a non-empty set. Assume that \(\zeta:X\times X\to[0,\infty)\) is such that for each \(u\), \(x\), \(z\in X\) we have
\[\zeta(x,x)\leq\zeta(x,u)\quad\text{and}\quad\zeta(x,u)\leq\zeta(x,z)+\zeta(z,u )-\zeta(z,z). \tag{6}\]
The above two conditions, called small self-distances and the triangle inequality, appear in the definition of a partial metric, e.g., see [18, Definition 2.1]. If \(\zeta\) satisfies the latter inequality in (6) then so does \(\zeta^{*}\). In particular, both \(\zeta\) and \(\zeta^{*}\) satisfy \((\mathcal{A}_{2})\). Assumption \((\mathcal{A}_{4})\) holds for \(\zeta\) provided that \(X\) is \(0\)_-complete_[18, Definition 2.2].
Put \(\eta(x,u):=\zeta(x,u)-\zeta(x,x)\), \(u\), \(x\in X\). Let \(r>0\) and \(u\), \(x\), \(z\in X\) be arbitrary. Then \(B_{X}^{\eta}(x,r)=\{u\in X:\zeta(x,u)<\zeta(x,x)+r\}\); and
\[(0\leq)\quad\eta(x,u) = \zeta(x,u)-\zeta(x,x)\leq\zeta(x,z)+\zeta(z,u)-\zeta(z,z)-\zeta(x,x)\] \[= \eta(x,z)+\eta(z,u).\]
Hence the non-negative function \(\eta\) satisfies \((\mathcal{A}_{2})\); and, of course, also \((\mathcal{A}_{1})\). Given a sequence \((u_{k})_{k\in\mathbb{N}}\) in \(X\), we have \(\lim_{k\to\infty}\eta(u,u_{k})=0\) if and only if \(\lim_{k\to\infty}\zeta(u,u_{k})=\zeta(u,u)\).
Define the topology \(\tau_{\zeta}\) on \(X\) starting from the family \(\mathcal{V}_{\zeta}(x)\) of neighborhoods of an arbitrary point \(x\in X\):
\[V\in\mathcal{V}_{\zeta}(x)\ \Leftrightarrow\ V\supset B_{X}^{\eta}(x,r)=\{u\in X: \ \zeta(x,u)<r+\zeta(x,x)\}\text{ for some }r>0.\]
Then the convergence of a sequence \((u_{k})_{k\in\mathbb{N}}\) to a \(u\) in \((X,\tau_{\zeta})\) is characterized as follows: \(\lim_{k\to\infty}u_{k}=u\) if and only if \(\lim_{k\to\infty}\zeta(u,u_{k})=\zeta(u,u)\).
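For instance (a standard illustration, stated here for convenience), take \(X:=[0,\infty)\) and \(\zeta(x,u):=\max\{x,u\}\), \(x\), \(u\in X\). Then \(\zeta=\zeta^{*}\) satisfies (6), \(\zeta(x,x)=x\), and
\[\eta(x,u)=\max\{x,u\}-x=\max\{u-x,0\},\quad\text{so}\quad B_{X}^{\eta}(x,r)=[0,x+r)\]
for each \(x\in X\) and \(r>0\). Hence a sequence \((u_{k})_{k\in\mathbb{N}}\) converges to a \(u\) in \((X,\tau_{\zeta})\) if and only if \(\limsup_{k\to\infty}u_{k}\leq u\).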
Let us present the main result in this section.
**Theorem 2.5**.: _Let \(X\) be a non-empty set such that there is a function \(\eta:\ X\times X\to[0,\infty]\) satisfying \((\mathcal{A}_{2})\). Consider a point \(x\in X\) and a function \(\varphi:X\to[0,\infty]\) such that \(0<\varphi(x)<\infty\). Then there exist an \(N\in\mathbb{N}\cup\{\infty\}\) and points \((u_{k})_{k=1}^{N}\) in \(X\) such that \(u_{1}=x\); that_
* \((i)\) \(\varphi(u_{j})+\eta(u_{j},u_{k})<\varphi(u_{k})\) _for each_ \(j\)_,_ \(k\in\mathbb{N}\) _with_ \(1\leq k<j\leq N\)_; and_
\((ii)\) _for each \(\varepsilon\in\big{(}0,\varphi(x)\big{)}\) there is an index \(n=n(\varepsilon)\leq N\) such that_
\[\varphi(u^{\prime})+\eta(u^{\prime},u_{k})>\varphi(u_{k})-\varepsilon\quad\text {whenever}\quad u^{\prime}\in X\text{ and }k\in\mathbb{N}\text{ with }N\geq k\geq n.\]
Proof.: We follow a standard pattern and construct inductively appropriate points \(u_{1},u_{2},\dots\) in \(\operatorname{dom}\varphi\). Let \(u_{1}:=x\). Assume that \(u_{k}\in\operatorname{dom}\varphi\) is already defined for an index \(k\in\mathbb{N}\). Let
\[\alpha_{k}:=\inf\{\varphi(u^{\prime}):\ u^{\prime}\in X\quad\text{and}\quad \varphi(u^{\prime})+\eta(u^{\prime},u_{k})<\varphi(u_{k})\}\hskip 30.0pt(\geq 0).\]
If \(\alpha_{k}=\infty\), that is, \(\varphi(u^{\prime})+\eta(u^{\prime},u_{k})\geq\varphi(u_{k})\) for each \(u^{\prime}\in X\); then stop the construction. Otherwise, find a point \(u_{k+1}\in X\) such that
\[\varphi(u_{k+1})+\eta(u_{k+1},u_{k})<\varphi(u_{k})\quad\text{and}\quad \varphi(u_{k+1})<\alpha_{k}+1/k. \tag{7}\]
Note that \(\alpha_{k}\leq\varphi(u_{k+1})\leq\varphi(u_{k+1})+\eta(u_{k+1},u_{k})<\varphi (u_{k})<\infty\). Hence \(u_{k+1}\in\operatorname{dom}\varphi\) and we repeat the construction.
If the set of all iterates \(u_{k}\) generated by the above process is finite, then let \(N\) be the cardinality of this set; otherwise put \(N:=\infty\). By \((\mathcal{A}_{2})\), for all \(j\), \(k\in\mathbb{N}\), with \(1\leq k<j\leq N\), we have that
\[0\leq\eta(u_{j},u_{k})\leq\eta(u_{j},u_{j-1})+\dots+\eta(u_{k+1},u_{k})<\big{(}\varphi(u_{j-1})-\varphi(u_{j})\big{)}+\dots+\big{(}\varphi(u_{k})-\varphi(u_{k+1})\big{)}=\varphi(u_{k})-\varphi(u_{j}). \tag{8}\]
This proves (i).
Let \(\varepsilon\in\big{(}0,\varphi(x)\big{)}\) be arbitrary. If \(N<\infty\), then put \(n:=N\) and we are done, because the iterative process stopped after a finite number of steps, that is, \(\varphi(u^{\prime})+\eta(u^{\prime},u_{N})\geq\varphi(u_{N})\) for each \(u^{\prime}\in X\). Further assume that \(N=\infty\). The first inequality in (7) implies that a (bounded from below) sequence \(\big{(}\varphi(u_{k})\big{)}\) is strictly decreasing, so it converges. Find an index \(n=n(\varepsilon)>2/\varepsilon\) such that \(0<\varphi(u_{n})-\varphi(u_{n+1})<\varepsilon/2\). Assume, on the contrary, that there are \(k\geq n\) and \(u^{\prime}\in X\) such that
\[\varphi(u^{\prime})+\eta(u^{\prime},u_{k})\leq\varphi(u_{k})-\varepsilon.\]
If \(k=n\), then \(\varphi(u^{\prime})+\eta(u^{\prime},u_{n})<\varphi(u_{n})\) trivially. If \(k>n\), then \((\mathcal{A}_{2})\), the last displayed estimate, and (8), with \(j:=k\) and \(k:=n\), imply that
\[\varphi(u^{\prime})+\eta(u^{\prime},u_{n}) \leq \varphi(u^{\prime})+\eta(u^{\prime},u_{k})+\eta(u_{k},u_{n})\] \[< \varphi(u_{k})-\varepsilon+\varphi(u_{n})-\varphi(u_{k})=\varphi (u_{n})-\varepsilon\qquad\big{(}<\varphi(u_{n})\big{)}.\]
Hence \(\alpha_{n}\leq\varphi(u^{\prime})\). The latter inequality in (7) and the choice of \(n\) yield that
\[\varphi(u^{\prime}) \leq \varphi(u^{\prime})+\eta(u^{\prime},u_{n})<\varphi(u_{n})- \varepsilon<\varphi(u_{n+1})-\varepsilon/2<\varphi(u_{n+1})-1/n\] \[< \alpha_{n}\leq\varphi(u^{\prime}),\]
a contradiction.
As an easy corollary we get:
**Corollary 2.6**.: _Let \(X\) be a non-empty set such that there is a function \(\eta:\ X\times X\to[0,\infty]\) satisfying \((\mathcal{A}_{2})\). Consider an \(\varepsilon>0\), a point \(x\in X\), and a function \(\varphi:X\to[0,\infty]\) such that \(\eta(x,x)<\infty\) and \(\varepsilon<\varphi(x)<\infty\). Then there exists a point \(u\in X\) such that \(\varphi(u)+\eta(u,x)\leq\varphi(x)+\eta(x,x)\) and_
\[\varphi(u^{\prime})+\eta(u^{\prime},u)>\varphi(u)-\varepsilon\quad\text{for every}\quad u^{\prime}\in X. \tag{9}\]
Proof.: Use Theorem 2.5 to find an \(N\in\mathbb{N}\cup\{\infty\}\), points \((u_{k})_{k=1}^{N}\) with \(u_{1}=x\), and an index \(n=n(\varepsilon)\) such that the conclusions of both (i) and (ii) hold true. If \(N=1\) then \(u:=u_{1}=x\) does the job. If \(N>1\), then put \(u:=u_{m}\), where \(m:=\max\{2,n\}\). Then (ii), with \(k:=m\), implies (9). By (i), with \(k:=1\) and \(j:=m\), we infer that \(\varphi(u)+\eta(u,x)<\varphi(x)\leq\varphi(x)+\eta(x,x)\).
The next statement slightly improves a weak form of the EVP from [12], covering results in [13, 15].
**Theorem 2.7**.: _Let \(X\) be a non-empty set such that there is a function \(\eta:\ X\times X\to[0,\infty]\) satisfying \((\mathcal{A}_{2})\) and \((\mathcal{A}_{4})\). Consider a point \(x\in X\) and a function \(\varphi:X\to[0,\infty]\) such that \(\eta(x,x)<\infty\) and \(0<\varphi(x)<\infty\). Assume that for each \(u\in X\) and each \((u_{k})_{k\in\mathbb{N}}\) in \(X\) such that \((\varphi(u_{k}))_{k\in\mathbb{N}}\) is strictly decreasing and \(\eta(u,u_{k})\to 0\) as \(k\to\infty\) we have that_
\[\varphi(u)\leq\lim_{k\to\infty}\varphi(u_{k})\qquad\qquad(\text{yes, the limit exists}). \tag{10}\]
_Then there exists a point \(u\in X\) such that \(\varphi(u)+\eta(u,x)\leq\varphi(x)+\eta(x,x)\) and_
\[\varphi(u^{\prime})+\eta(u^{\prime},u)\geq\varphi(u)\quad\text{for every}\quad u^{\prime}\in X. \tag{11}\]
Proof.: Using Theorem 2.5, we find an \(N\in\mathbb{N}\cup\{\infty\}\) and points \((u_{k})_{k=1}^{N}\) such that \(u_{1}=x\) and (i) holds true.
First, assume that \(N=\infty\). In view of (i), the sequence \(\big{(}\varphi(u_{k})\big{)}\) is strictly decreasing (and bounded from below); hence it converges. As \(\big{(}\varphi(u_{k})\big{)}\) is a Cauchy sequence, combining (i) and \((\mathcal{A}_{4})\), we find a \(u\in X\) such that \(\lim_{k\to\infty}\eta(u,u_{k})=0\). As \(u_{1}=x\), employing (10), Remark 2.2, and (i) with \(k:=1\), we get
\[\varphi(u)+\eta(u,x) \leq \lim_{j\to\infty}\varphi(u_{j})+\liminf_{j\to\infty}\eta(u_{j},x)\] \[= \liminf_{j\to\infty}\big{(}\varphi(u_{j})+\eta(u_{j},x)\big{)} \leq\varphi(x)\leq\varphi(x)+\eta(x,x).\]
To prove the latter inequality, let \(u^{\prime}\in X\) be arbitrary. Pick any \(\varepsilon>0\). Find an index \(n=n(\varepsilon)\) such that the conclusion in (ii) holds true. Then
\[\varphi(u^{\prime})+\eta(u^{\prime},u) = \lim_{k\to\infty}\big{(}\varphi(u^{\prime})+\eta(u^{\prime},u)+ \eta(u,u_{k})\big{)}\] \[\stackrel{{(\mathcal{A}_{2})}}{{\geq}} \liminf_{k\to\infty}\big{(}\varphi(u^{\prime})+\eta(u^{\prime},u_{ k})\big{)}\stackrel{{(ii)}}{{\geq}}\lim_{k\to\infty}\varphi(u_{k})- \varepsilon\stackrel{{(\ref{eq:10})}}{{\geq}}\varphi(u)-\varepsilon.\]
As \(\varepsilon>0\) is arbitrary, letting \(\varepsilon\downarrow 0\), we get that \(\varphi(u^{\prime})+\eta(u^{\prime},u)\geq\varphi(u)\).
Second, assume that \(N<\infty\). Put \(u:=u_{N}\). If \(N>1\) then (i), with \(k:=1\) and \(j:=N\), implies that \(\varphi(u)+\eta(u,x)<\varphi(x)\leq\varphi(x)+\eta(x,x)\). If \(N=1\) then the first desired inequality holds trivially because \(u=u_{1}=x\). By (ii), we have \(\varphi(u^{\prime})+\eta(u^{\prime},u)>\varphi(u)-\varepsilon\) for each \(\varepsilon\) small enough, which yields the second inequality as in case \(N=\infty\).
**Remark 2.8**.: Of course, Theorem 1.1 follows from the last result. More generally, let \(X\) be a non-empty set and let \(\eta:X\times X\to[0,\infty]\) satisfy \((\mathcal{A}_{1})\) and \((\mathcal{A}_{2})\). Assume that \(\varphi:X\to[0,\infty]\) is a \(\tau_{\eta}\)-lower semi-continuous function. Then the assumption containing (10), called strict-decreasing-lower semi-continuity in [18], holds true; see [18, Example 2.9] illustrating that this notion is weaker than the usual lower semi-continuity in the setting from Example 2.4.
Note that the last theorem is equivalent to the most popular form of the EVP involving two constants and three conclusions (and the preceding results can be modified accordingly).
**Theorem 2.9**.: _Let \(X\) be a non-empty set such that there is a function \(\eta:\ X\times X\to[0,\infty]\) satisfying \((\mathcal{A}_{2})\) and \((\mathcal{A}_{4})\). Consider \(\delta>0\) and \(r>0\), a point \(x\in X\), and a function \(\varphi:X\to(-\infty,\infty]\) such that \(\eta(x,x)<\infty\) and \(-\infty<\inf\varphi(X)<\varphi(x)\leq\inf\varphi(X)+\delta\). Assume that for each \(u\in X\) and each \((u_{k})_{k\in\mathbb{N}}\) in \(X\) such that \((\varphi(u_{k}))_{k\in\mathbb{N}}\) is strictly decreasing and \(\eta(u,u_{k})\to 0\) as \(k\to\infty\) inequality (10) holds true. Then there is a point \(u\in X\) such that_
1. \(\varphi(u)+\frac{\delta}{r}\,\eta(u,x)\leq\varphi(x)+\frac{\delta}{r}\,\eta(x,x)\)_;_
2. \(\varphi(u^{\prime})+\frac{\delta}{r}\,\eta(u^{\prime},u)\geq\varphi(u)\) _for every_ \(u^{\prime}\in X\)_; and_
3. \(\eta(u,x)\leq r+\eta(x,x)\)_._
Proof.: Applying Theorem 2.7, with \(\varphi:=\varphi-\inf\varphi(X)\) and \(\eta:=\frac{\delta}{r}\eta\), and adding \(\inf\varphi(X)\), we find a point \(u\in X\) such that \(\varphi(u)+\frac{\delta}{r}\,\eta(u,x)\leq\varphi(x)+\frac{\delta}{r}\,\eta(x,x)\) and \(\varphi(u^{\prime})+\frac{\delta}{r}\,\eta(u^{\prime},u)\geq\varphi(u)\) for every \(u^{\prime}\in X\). Hence both (a) and (b) hold true. By (a), we get that \(\delta\left(\eta(u,x)-\eta(x,x)\right)\leq r\big{(}\varphi(x)-\varphi(u)\big{)} \leq r\big{(}\varphi(x)-\inf\varphi(X)\big{)}\leq r\delta\), that is, \(\eta(u,x)\leq r+\eta(x,x)\).
To conclude this section, let us comment on several existing results in the literature in connection with Theorems 2.7 and 2.9.
**Remark 2.10**.: Choosing \(\eta:X\times X\to[0,\infty]\) verifying \((\mathcal{A}_{1})-(\mathcal{A}_{4})\), we get weak forms of Theorem 2.4 and Corollary 2.8 in [12]; hence, using the latter statement, of [5, Theorem 2.1] as well. In particular, we get weak forms of [15, Corollary 3.2], where \(\eta:=T_{L}\) from Example 2.3; and of [13, Theorem 2.4] if \(\eta\) is a (finitely-valued) quasi-semimetric (instead of a quasi-metric generating \(\mathrm{T}_{1}\)-topology used therein).
**Remark 2.11**.: Let \(\zeta\) and \(\eta\) be as in Example 2.4. Then \(\zeta\) satisfies \((\mathcal{A}_{2})\) and if, in addition, \(\zeta=\zeta^{*}\) and \(X\) is \(0\)-complete, then Theorem 2.9, with \(\eta:=\zeta\), yields
[18, Theorem 4.1]. Indeed, since \(\zeta(u,x)=\zeta(x,u)\), from (a) and (6) we get that \(\varphi(u)\leq\varphi(x)+\frac{\delta}{r}(\zeta(x,x)-\zeta(u,x))\leq\varphi(x)\) which is (ii) therein. Note that we do not need all the properties of the partial metric from [18, Definition 2.1].
Further, \(\eta\) verifies \((\mathcal{A}_{1})\) and \((\mathcal{A}_{2})\) by (6). If \((\mathcal{A}_{4})\) holds then Theorem 2.9 yields a point \(u\in X\) such that \(0\leq\zeta(u,x)-\zeta(u,u)\leq r\); that \(\varphi(u)\leq\varphi(u)+\frac{\delta}{r}(\zeta(u,x)-\zeta(u,u))=\varphi(u)+ \frac{\delta}{r}\,\eta(u,x)\leq\varphi(x)\); and that \(\varphi(u^{\prime})+\frac{\delta}{r}\,\zeta(u^{\prime},u)\geq\varphi(u^{ \prime})+\frac{\delta}{r}\,\eta(u^{\prime},u)\geq\varphi(u)\) for every \(u^{\prime}\in X\). We obtain a stronger conclusion than [18, Theorem 4.1]; but the assumption \((\mathcal{A}_{4})\) seems to be stronger than \(0\)-completeness.
## 3. Almost regularity, subregularity, and semiregularity
We start with approximate versions of the abstract properties from [20, Definitions 2.21, 2.22, and 2.23], which are equivalent to each other in the sense of Proposition 3.2 below; and such that each property covers three substantially different concepts defined later.
**Definition 3.1**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a \(\mu>0\), non-empty sets \(U\subset X\) and \(V\subset Y\), a function \(\gamma:X\to[0,\infty]\) defined and not identically zero on \(U\), and a set-valued mapping \(G:X\rightrightarrows Y\). The mapping \(G\) is said to have for \(\mu\) and \(\gamma\) on \(U\times V\) property_
* _provided that_ \(B_{Y}(y,\mu t)\cap V\subset\overline{G\big{(}B_{X}(x,t)\big{)}}\) _for each_ \(x\in U\)_, each_ \(y\in G(x)\)_, and each_ \(t\in(0,\gamma(x))\)_;_
* _provided that_ \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y, \varepsilon)\big{)}\big{)}\leq\mu\operatorname{dist}(y,G(x))\) _for each_ \((x,y)\in U\times V\) _such that_ \(\mu\operatorname{dist}(y,G(x))<\gamma(x)\)_;_
* _provided that_ \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y ^{\prime},\varepsilon)\big{)}\big{)}\leq\mu\,\varrho(y,y^{\prime})\) _for each_ \(y\in Y\)_, each_ \(y^{\prime}\in V\)_, and each_ \(x\in G^{-1}(y)\cap U\) _such that_ \(\mu\,\varrho(y,y^{\prime})<\gamma(x)\)_._
Now we present an analogue of [20, Theorem 2.25].
**Proposition 3.2**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a \(c>0\), non-empty sets \(U\subset X\) and \(V\subset Y\), a function \(\gamma:X\to(0,\infty]\) defined and not identically zero on \(U\), and a set-valued mapping \(G:X\rightrightarrows Y\). Then \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\) if and only if \(G\) has property \((\mathcal{R})\) on \(U\times V\) for \(1/c\) and \(\gamma\) if and only if \(G\) has property \((\mathcal{L}^{-1})\) on \(U\times V\) for \(1/c\) and \(\gamma\)._
Proof.: Put \(\mu:=1/c\). Assume that \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\). Fix arbitrary \(x\in U\) and \(y\in V\) with \(\mu\operatorname{dist}(y,G(x))<\gamma(x)\). Pick an arbitrary \(\varepsilon>0\) such that \(t:=\mu(\operatorname{dist}(y,G(x))+\varepsilon)\in(0,\gamma(x))\). Find a \(v\in G(x)\) such that \(\varrho(y,v)<\operatorname{dist}(y,G(x))+\varepsilon\). Then \(y\in B_{Y}(v,ct)\cap V\). By property \((\mathcal{O})\), there are \(w\in Y\) and \(u\in B_{X}(x,t)\) such that \(\varrho(y,w)<\varepsilon\) and \(w\in G(u)\). As \(u\in G^{-1}(w)\subset G^{-1}\big{(}B_{Y}(y,\varepsilon)\big{)}\), we have
\[\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y,\varepsilon)\big{)}\big{)} \leq d(x,u)<t=\mu(\operatorname{dist}(y,G(x))+\varepsilon)\qquad\big{(}< \gamma(x)\big{)}.\]
The quantity on the left-hand side of the inequality above is increasing and bounded from above by \(\gamma(x)\) when \(\varepsilon\downarrow 0\). Letting \(\varepsilon\downarrow 0\), and remembering that \(x\in U\) and \(y\in V\) with \(\mu\operatorname{dist}(y,G(x))<\gamma(x)\) are arbitrary, we conclude that \(G\) has property \((\mathcal{R})\) on \(U\times V\) for \(\mu\) and \(\gamma\).
Assume that \(G\) has property \((\mathcal{R})\) on \(U\times V\) for \(\mu\) and \(\gamma\). Fix arbitrary \(y\in Y\), \(y^{\prime}\in V\), and \(x\in G^{-1}(y)\cap U\) with \(\mu\,\varrho(y,y^{\prime})<\gamma(x)\). Then \(\mu\operatorname{dist}(y^{\prime},G(x))\leq\mu\,\varrho(y^{\prime},y)<\gamma (x)\). So \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y^{ \prime},\varepsilon)\big{)}\big{)}\leq\mu\operatorname{dist}(y^{\prime},G(x)) \leq\mu\varrho(y^{\prime},y)\). Hence \(G\) has property \((\mathcal{L}^{-1})\) on \(U\times V\) for \(\mu\) and \(\gamma\).
Assume that \(G\) has property \((\mathcal{L}^{-1})\) on \(U\times V\) for \(\mu\) and \(\gamma\). Let \(x\in U\), \(y\in G(x)\), and \(t\in(0,\gamma(x))\) be arbitrary. Pick any \(v\in B_{Y}\left(y,ct\right)\cap V\), if there is such. Then \(\mu\varrho(v,y)=\varrho(v,y)/c<t<\gamma(x)\). Using property \((\mathcal{L}^{-1})\), for each \(\varepsilon>0\), small enough, we have \(\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(v,\varepsilon)\big{)}\big{)}<t\); so there is a \(u_{\varepsilon}\in X\) such that \(G(u_{\varepsilon})\cap B_{Y}(v,\varepsilon)\neq\emptyset\) and \(d(x,u_{\varepsilon})<t\); and therefore
\[\operatorname{dist}\big{(}v,G(B_{X}(x,t))\big{)}\leq\operatorname{dist}(v,G(u_ {\varepsilon}))<\varepsilon.\]
Consequently, \(v\in\overline{G\big{(}B_{X}(x,t)\big{)}}\). Since \(v\in B_{Y}\left(y,ct\right)\cap V\) as well as \(x\in U\), \(y\in G(x)\), and \(t\in(0,\gamma(x))\) are arbitrary, we conclude that \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\). The loop is closed.
It is an old and well-known, although nowadays rarely used, fact that any regularity property of a set-valued mapping can be deduced from the same property of a particular single-valued mapping; namely, the restriction of the canonical projection from \(X\times Y\) into \(Y\) to the graph of the set-valued mapping, see e.g. [20, Proposition 2.33]. The same is true for our approximate version:
**Remark 3.3**.: Let all the assumptions of Proposition 3.2 be satisfied and let \(\alpha\in(0,1/c)\) be arbitrary. Define a (compatible) metric \(\omega\) on \(X\times Y\) by
\[\omega\big{(}(u,v),(x,y)\big{)}:=\max\big{\{}d(u,x),\alpha\varrho(v,y)\big{\}},\quad(u,v),(x,y)\in X\times Y.\]
_Then \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\) if and only if a mapping \(g:X\times Y\to Y\) defined by \(g(x,y)=y\), \((x,y)\in\operatorname{gph}G\), has property \((\mathcal{O})\) on \(\big{(}(U\times Y)\cap\operatorname{gph}G\big{)}\times V\) for \(c\) and \(\widetilde{\gamma}(x,y):=\gamma(x)\), \((x,y)\in X\times Y\)._
Assume that \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\). Fix arbitrary \((x,y)\in(U\times Y)\cap\operatorname{gph}G\) and \(t\in(0,\widetilde{\gamma}(x,y))\). So \(x\in U\), \(g(x,y)=y\in G(x)\), and \(t<\gamma(x)\). Fix any \(v\in B_{Y}(y,ct)\cap V\). Then \(v\in\overline{G\big{(}B_{X}(x,t)\big{)}}\). Let \(\varepsilon\in(0,t(1-\alpha c)/\alpha)\) be arbitrary. There are \(w\in Y\) and \(u\in B_{X}(x,t)\) such that \(\varrho(v,w)<\varepsilon\) and \(w\in G(u)\). Then \(\varrho(v,g(u,w))=\varrho(v,w)<\varepsilon\). As \(d(x,u)<t\) and \(\varrho(y,w)\leq\varrho(y,v)+\varrho(v,w)<ct+\varepsilon<t/\alpha\), we have \(\omega\big{(}(x,y),(u,w)\big{)}<t\). Since \(\varepsilon\) can be arbitrarily small, we get \(v\in\overline{g\big{(}B_{X\times Y}^{\omega}((x,y),t)\big{)}}\); for any \(v\in B_{Y}(y,ct)\cap V\).
On the other hand, assume that \(g\) has property \((\mathcal{O})\) on \(\big{(}(U\times Y)\cap\operatorname{gph}G\big{)}\times V\) for \(c\) and \(\widetilde{\gamma}\). Fix arbitrary \(x\in U\), \(y\in G(x)\), and \(t\in(0,\gamma(x))\). Pick any \(w\in B_{Y}(y,ct)\cap V=B_{Y}(g(x,y),ct)\cap V\). Given an \(\varepsilon>0\), there is a \((u,v)\in X\times Y\), with
\(v\in G(u)\), such that \(d(u,x)<t\), \(\alpha\varrho(v,y)<t\), and \(\varepsilon>\varrho(w,g(u,v))=\varrho(w,v)\); thus \(\operatorname{dist}\big{(}w,G(B_{X}(x,t))\big{)}\leq\operatorname{dist}(w,G(u)) \leq\varrho(w,v)<\varepsilon\). So \(w\in\overline{G(B_{X}(x,t))}\).
Letting \(U\) and \(V\) be the neighborhoods of a fixed reference point in the graph of \(G\), we get the notion of _almost regularity around_ this point:
**Definition 3.4**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a mapping \(G:X\rightrightarrows Y\) and a point \((\bar{x},\bar{y})\in\operatorname{gph}G\). Then \(G\) is said to_
1. be _almost open with a constant \(c>0\) around \((\bar{x},\bar{y})\)_ if there is a \(\gamma>0\) such that (12) \[B_{Y}(y,ct)\cap B_{Y}(\bar{y},\gamma)\subset\overline{G\big{(}B_{X}(x,t)\big{)}}\] for each \(x\in B_{X}(\bar{x},\gamma)\), each \(y\in G(x)\), and each \(t\in(0,\gamma)\);
2. _be_ almost metrically regular with a constant__\(\kappa>0\) around__\((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ (13) \[\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y,\varepsilon)\big{)}\big{)}\leq\kappa\,\operatorname{dist}(y,G(x))\] _for each_ \(x\in B_{X}(\bar{x},\gamma)\) _and each_ \(y\in B_{Y}(\bar{y},\gamma)\) _with_ \(\kappa\operatorname{dist}(y,G(x))<\gamma\)_;_
3. _have an_ almost pseudo-Lipschitz inverse with a constant__\(\ell>0\) _around_ \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ (14) \[\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y} (y^{\prime},\varepsilon)\big{)}\big{)}\leq\ell\,\varrho(y,y^{\prime})\] _for each_ \(y\in Y\)_, each_ \(y^{\prime}\in B_{Y}(\bar{y},\gamma)\)_, and each_ \(x\in G^{-1}(y)\cap B_{X}(\bar{x},\gamma)\) _such that_ \(\ell\,\varrho(y,y^{\prime})<\gamma\)_._
_The modulus of almost openness of \(G\) around \((\bar{x},\bar{y})\), denoted by \(\overline{\operatorname{sur}}\,G(\bar{x},\bar{y})\), is the supremum of the set of all \(c>0\) such that \(G\) is almost open with the constant \(c\) around \((\bar{x},\bar{y})\). The modulus of almost metric regularity of \(G\) around \((\bar{x},\bar{y})\), denoted by \(\overline{\operatorname{reg}}\,G(\bar{x},\bar{y})\), is the infimum of the set of all \(\kappa>0\) such that \(G\) is almost metrically regular with the constant \(\kappa\) around \((\bar{x},\bar{y})\). The approximate Lipschitz modulus of \(G^{-1}\) around \((\bar{y},\bar{x})\), denoted by \(\overline{\operatorname{lip}}\,G^{-1}(\bar{y},\bar{x})\), is the infimum of the set of all \(\ell>0\) such that \(G\) has an almost pseudo-Lipschitz inverse with the constant \(\ell\) around \((\bar{x},\bar{y})\)._
**Remark 3.5**.: Fix any \(x\in X\) and \(y\in Y\) with \(\operatorname{dist}(y,G(x))<\infty\). _Inequality_ (13) _holds true if and only if for each \(\varepsilon>0\) there is a sequence \((y_{k})_{k\in\mathbb{N}}\) converging to \(y\) such that_
\[\operatorname{dist}\big{(}x,G^{-1}(y_{k})\big{)}\leq\kappa\,\big{(} \operatorname{dist}(y,G(x))+\varepsilon\big{)}\quad\text{for each}\quad k\in \mathbb{N}. \tag{15}\]
Assume that (13) holds true. Let \(\varepsilon>0\) be arbitrary. For each \(k\in\mathbb{N}\) we have
\[\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y,\varepsilon/k) \big{)}\big{)} \leq \lim_{k\to\infty}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y} (y,\varepsilon/k)\big{)}\big{)}\leq\kappa\operatorname{dist}(y,G(x))\] \[< \kappa\,\big{(}\operatorname{dist}(y,G(x))+\varepsilon\big{)}.\]
Thus there is a sequence \((x_{k})_{k\in\mathbb{N}}\) such that for each \(k\in\mathbb{N}\) we have \(d(x,x_{k})<\kappa\left(\,\mathrm{dist}(y,G(x))+\varepsilon\right)\) and \(G(x_{k})\cap B_{Y}(y,\varepsilon/k)\neq\emptyset\), containing a point \(y_{k}\), say. Hence \(y_{k}\to y\) as \(k\to\infty\) and
\[\mathrm{dist}\left(x,G^{-1}(y_{k})\right)\leq d(x,x_{k})<\kappa\left(\, \mathrm{dist}(y,G(x))+\varepsilon\right)\quad\text{for each}\quad k\in\mathbb{N}.\]
On the other hand, pick any \(\varepsilon>0\). Find \((y_{k})_{k\in\mathbb{N}}\) satisfying (15) and converging to \(y\). Let \(k\in\mathbb{N}\) be such that \(y_{k}\in B_{Y}(y,\varepsilon)\). Then \(\mathrm{dist}\left(x,G^{-1}\big{(}B_{Y}(y,\varepsilon)\big{)}\right)\leq \mathrm{dist}\left(x,G^{-1}(y_{k})\right)\leq\kappa\left(\,\mathrm{dist}(y,G(x ))+\varepsilon\right)\). Letting \(\varepsilon\downarrow 0\), we get (13).
**Remark 3.6**.: Fix any \(x\in X\) and \(y\), \(y^{\prime}\in Y\). _Inequality (14) holds true if and only if for each \(\varepsilon>0\) there is a sequence \((y_{k})_{k\in\mathbb{N}}\) converging to \(y^{\prime}\) such that_
\[\mathrm{dist}\left(x,G^{-1}(y_{k})\right)\leq\ell\left(\varrho(y,y^{\prime})+ \varepsilon\right)\quad\text{for each}\quad k\in\mathbb{N}. \tag{16}\]
Assume that (14) holds true. Let \(\varepsilon>0\) be arbitrary. For each \(k\in\mathbb{N}\) we have
\[\mathrm{dist}\left(x,G^{-1}\big{(}B_{Y}(y^{\prime},\varepsilon/k) \big{)}\right) \leq \lim_{k\to\infty}\mathrm{dist}\left(x,G^{-1}\big{(}B_{Y}(y^{ \prime},\varepsilon/k)\big{)}\right)\leq\ell\,\varrho(y,y^{\prime})\] \[< \ell\left(\varrho(y,y^{\prime})+\varepsilon\right).\]
Thus there is a sequence \((x_{k})_{k\in\mathbb{N}}\) such that for each \(k\in\mathbb{N}\) we have \(d(x,x_{k})<\ell\left(\varrho(y,y^{\prime})+\varepsilon\right)\) and \(G(x_{k})\cap B_{Y}(y^{\prime},\varepsilon/k)\neq\emptyset\), containing a point \(y_{k}\), say. Hence \(y_{k}\to y^{\prime}\) as \(k\to\infty\) and
\[\mathrm{dist}\left(x,G^{-1}(y_{k})\right)\leq d(x,x_{k})<\ell\left(\varrho(y,y ^{\prime})+\varepsilon\right)\quad\text{for each}\quad k\in\mathbb{N}.\]
On the other hand, pick any \(\varepsilon>0\). Find \((y_{k})_{k\in\mathbb{N}}\) satisfying (16) and converging to \(y^{\prime}\). Let \(k\in\mathbb{N}\) be such that \(y_{k}\in B_{Y}(y^{\prime},\varepsilon)\). Then \(\mathrm{dist}\left(x,G^{-1}\big{(}B_{Y}(y^{\prime},\varepsilon)\big{)}\right) \leq\mathrm{dist}\left(x,G^{-1}(y_{k})\right)\leq\ell\left(\varrho(y,y^{ \prime})+\varepsilon\right)\). Letting \(\varepsilon\downarrow 0\), we get (14).
As in the case of regularity we have the following equivalence:
**Proposition 3.7**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a mapping \(G:X\rightrightarrows Y\) and a point \((\bar{x},\bar{y})\in\mathrm{gph}\,G\). Then \(\overline{\mathrm{sur}}\,G(\bar{x},\bar{y})\cdot\overline{\mathrm{reg}}\,G( \bar{x},\bar{y})=1\) and \(\overline{\mathrm{reg}}\,G(\bar{x},\bar{y})=\overline{\mathrm{lip}}\,G^{-1}( \bar{y},\bar{x})\)._
Proof.: For each \(c\in\left(0,\overline{\mathrm{sur}}\,G(\bar{x},\bar{y})\right)\), if there is any, Proposition 3.2 implies that \(1=c\cdot\frac{1}{c}\geq c\cdot\overline{\mathrm{reg}}\,G(\bar{x},\bar{y})\); letting \(c\uparrow\overline{\mathrm{sur}}\,G(\bar{x},\bar{y})\), we get \(1\geq\overline{\mathrm{sur}}\,G(\bar{x},\bar{y})\cdot\overline{\mathrm{reg}} \,G(\bar{x},\bar{y})\). By the convention \(0\cdot\infty=1\), the last inequality also holds when \(\overline{\mathrm{sur}}\,G(\bar{x},\bar{y})=0\). Similarly, for each \(\kappa>\overline{\mathrm{reg}}\,G(\bar{x},\bar{y})\), if there is any, Proposition 3.2 implies that \(1=\frac{1}{\kappa}\cdot\kappa\leq\overline{\mathrm{sur}}\,G(\bar{x},\bar{y}) \cdot\kappa\); letting \(\kappa\downarrow\overline{\mathrm{reg}}\,G(\bar{x},\bar{y})\), we get \(1\leq\overline{\mathrm{sur}}\,G(\bar{x},\bar{y})\cdot\overline{\mathrm{reg}}\,G (\bar{x},\bar{y})\). By our convention, the last inequality also holds when \(\overline{\mathrm{reg}}\,G(\bar{x},\bar{y})=\infty\). We proved the first relation; the latter one is obvious by Proposition 3.2.
Taking \(U\) to be a neighborhood of \(\bar{x}\) and \(V\) to be the singleton \(\{\bar{y}\}\), we get the notion of _almost subregularity_:
**Definition 3.8**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a mapping \(G:X\rightrightarrows Y\) and a point \((\bar{x},\bar{y})\in\mathrm{gph}\,G\). Then \(G\) is said to_
1. _be_ almost pseudo-open with a constant \(c>0\) at \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ \(\bar{y}\in\overline{G\left(B_{X}(x,t)\right)}\) _for each_ \(x\in B_{X}(\bar{x},\gamma)\) _and each_ \(t\in(0,\gamma)\) _with_ \(G(x)\cap B_{Y}(\bar{y},ct)\neq\emptyset\)_;_
2. _be_ almost metrically subregular with a constant \(\kappa>0\) at \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}( \bar{y},\varepsilon)\big{)}\big{)}\leq\kappa\,\operatorname{dist}(\bar{y},G(x))\) _for each_ \(x\in B_{X}(\bar{x},\gamma)\) _with_ \(\kappa\,\operatorname{dist}(\bar{y},G(x))<\gamma\)_;_
3. _have an_ almost calm inverse with a constant \(\ell>0\) at \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}( \bar{y},\varepsilon)\big{)}\big{)}\leq\ell\,\varrho(\bar{y},y)\) _for each_ \(y\in Y\) _and each_ \(x\in G^{-1}(y)\cap B_{X}(\bar{x},\gamma)\) _with_ \(\ell\,\varrho(\bar{y},y)<\gamma\)_._
_The modulus of almost pseudo-openness of \(G\) at \((\bar{x},\bar{y})\), denoted by \(\overline{\operatorname{open}}\,G(\bar{x},\bar{y})\), is the supremum of the set of all \(c>0\) such that \(G\) is almost pseudo-open with the constant \(c\) at \((\bar{x},\bar{y})\). The modulus of almost metric subregularity of \(G\) at \((\bar{x},\bar{y})\), denoted by \(\overline{\operatorname{subreg}}\,G(\bar{x},\bar{y})\), is the infimum of the set of all \(\kappa>0\) such that \(G\) is almost metrically subregular with the constant \(\kappa\) at \((\bar{x},\bar{y})\). The modulus of almost calmness of \(G^{-1}\) at \((\bar{y},\bar{x})\), denoted by \(\overline{\operatorname{calm}}\,G^{-1}(\bar{y},\bar{x})\), is the infimum of the set of all \(\ell>0\) such that \(G\) has an almost calm inverse with the constant \(\ell\) at \((\bar{x},\bar{y})\)._
As in the case of subregularity, using Proposition 3.2 and the same steps from the proof of Proposition 3.7, we get:
**Proposition 3.9**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a mapping \(G:X\rightrightarrows Y\) and a point \((\bar{x},\bar{y})\in\operatorname{gph}G\). Then \(\overline{\operatorname{open}}\,G(\bar{x},\bar{y})\cdot\overline{\operatorname{subreg}}\,G(\bar{x},\bar{y})=1\) and \(\overline{\operatorname{subreg}}\,G(\bar{x},\bar{y})=\overline{\operatorname{calm}}\,G^{-1}(\bar{y},\bar{x})\)._
On the other hand, if \(U\) is the singleton \(\{\bar{x}\}\) and \(V\) is a neighborhood of \(\bar{y}\), we get the notion of _almost semiregularity_; see [10] for a historical overview of the semiregularity property together with a list of different names used in the literature.
**Definition 3.10**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a mapping \(G:X\rightrightarrows Y\) and a point \((\bar{x},\bar{y})\in\operatorname{gph}G\). Then \(G\) is said to_
1. _be_ almost open with a constant_ \(c>0\) _at_ \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ \(B_{Y}(y,ct)\cap B_{Y}(\bar{y},\gamma)\subset\overline{G\big{(}B_{X}(\bar{x},t )\big{)}}\) _for each_ \(y\in G(\bar{x})\) _and each_ \(t\in(0,\gamma)\)_;_
2. _be_ almost metrically semiregular with a constant_ \(\kappa>0\) _at_ \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}\bar{x},G^{-1}\big{(}B_{Y}(y, \varepsilon)\big{)}\big{)}\leq\kappa\,\operatorname{dist}(y,G(\bar{x}))\) _for each_ \(y\in B_{Y}(\bar{y},\gamma)\) _with_ \(\kappa\,\operatorname{dist}(y,G(\bar{x}))<\gamma\)_;_
3. _have an_ almost inner-calm inverse with a constant \(\ell>0\) at \((\bar{x},\bar{y})\) _if there is a_ \(\gamma>0\) _such that_ \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}\bar{x},G^{-1}\big{(}B_{Y}(y^{\prime},\varepsilon)\big{)}\big{)}\leq\ell\,\varrho(y,y^{\prime})\) _for each_ \(y\in G(\bar{x})\) _and each_ \(y^{\prime}\in B_{Y}(\bar{y},\gamma)\) _with_ \(\ell\,\varrho(y,y^{\prime})<\gamma\)_._
_The modulus of almost openness of \(G\) at \((\bar{x},\bar{y})\), denoted by \(\overline{\operatorname{lopen}}\,G(\bar{x},\bar{y})\), is the supremum of the set of all \(c>0\) such that \(G\) is almost open with the constant \(c\)
_at \((\bar{x},\bar{y})\). The modulus of almost metric semiregularity of \(G\) at \((\bar{x},\bar{y})\), denoted by \(\overline{\operatorname{semireg}}\,G(\bar{x},\bar{y})\), is the infimum of the set of all \(\kappa>0\) such that \(G\) is almost metrically semiregular with the constant \(\kappa\) at \((\bar{x},\bar{y})\). The modulus of almost inner-calmness of \(G^{-1}\) at \((\bar{y},\bar{x})\), denoted by \(\overline{\operatorname{incalm}}\,G^{-1}(\bar{y},\bar{x})\), is the infimum of the set of all \(\ell>0\) such that \(G\) has an almost inner-calm inverse with the constant \(\ell\) at \((\bar{x},\bar{y})\)._
As in the case of semiregularity, using Proposition 3.2 and the same steps from the proof of Proposition 3.7, we get:
**Proposition 3.11**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a mapping \(G:X\rightrightarrows Y\) and a point \((\bar{x},\bar{y})\in\operatorname{gph}G\). Then \(\overline{\operatorname{lopen}}\,G(\bar{x},\bar{y})\cdot\overline{ \operatorname{semireg}}\,G(\bar{x},\bar{y})=1\) and \(\overline{\operatorname{semireg}}\,G(\bar{x},\bar{y})=\overline{ \operatorname{incalm}}\,G^{-1}(\bar{y},\bar{x})\)._
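A simple example, recorded only for orientation and not used later, separates the two pointwise notions. Let \(X=Y:=\mathbb{R}\), let \(g\equiv 0\), and take \((\bar{x},\bar{y}):=(0,0)\). Since \(g^{-1}\big(B_{Y}(0,\varepsilon)\big)=\mathbb{R}\) for every \(\varepsilon>0\), we get \(\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big(x,g^{-1}\big(B_{Y}(0,\varepsilon)\big)\big)=0\) for every \(x\in\mathbb{R}\), so \(g\) is almost metrically subregular at \((0,0)\) with every constant \(\kappa>0\) and \(\overline{\operatorname{subreg}}\,g(0,0)=0\). On the other hand, for \(y\neq 0\) and \(\varepsilon<|y|\) the set \(g^{-1}\big(B_{Y}(y,\varepsilon)\big)\) is empty, so \(g\) is not almost metrically semiregular at \((0,0)\) with any constant, that is, \(\overline{\operatorname{semireg}}\,g(0,0)=\infty\); in particular, \(g\) is not almost regular around \((0,0)\) either.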
**Remark 3.12**.: For a single-valued mapping \(g:X\to Y\), we omit the point \(\bar{y}=g(\bar{x})\) in all the preceding definitions; and if any of the moduli is independent of the reference point, for example, in the case of a linear operator acting in normed spaces, we omit \(\bar{x}\) as well; that is, we write \(\overline{\operatorname{sur}}\,g(\bar{x})\), \(\overline{\operatorname{reg}}\,g(\bar{x})\), etc.; or even \(\overline{\operatorname{sur}}\,g\), \(\overline{\operatorname{reg}}\,g\), etc.
**Remark 3.13**.: The value of \(\overline{\operatorname{sur}}\,G(\bar{x},\bar{y})\) remains the same if one replaces (12) by \(B_{Y}[y,ct]\cap B_{Y}(\bar{y},\gamma)\subset\overline{G\big{(}B_{X}[x,t]\big{)}}.\) Of course, the definitions of \(\overline{\operatorname{open}}\,G(\bar{x},\bar{y})\) and \(\overline{\operatorname{lopen}}\,G(\bar{x},\bar{y})\) can be modified similarly. The use of the open balls provides the following advantage: _Given a \(c>0\), if a mapping \(G:X\rightrightarrows Y\) has property \((\mathcal{O})\) on \(U\times V\) for \(\gamma\) and each \(c^{\prime}\in(0,c)\), then \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(\gamma\) and \(c\)._
## 4. Ioffe-type criteria
In this section, we focus first on the almost regularity of a single-valued mapping. This allows us to display the main ideas clearly, avoiding the seemingly more general proofs for set-valued mappings that are popular nowadays.
**Proposition 4.1**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a \(c>0\), non-empty sets \(U\subset X\) and \(V\subset Y\), a function \(\gamma:X\to[0,\infty]\) being defined and not identically zero on \(U\), and a mapping \(g:X\to Y\) defined on the whole \(X\). Assume that for each \(y\in V\) there is a \(\lambda=\lambda(y)\in(0,1)\) such that for each \(u\in X\) satisfying_
\[0<\varrho(g(u),y)\leq\varrho(g(x),y)-c\,d(u,x)\quad\text{and}\quad\varrho(g(x),y)<c\gamma(x) \tag{17}\]
_for some \(x\in U\), there is a point \(u^{\prime}\in X\) such that_
\[\varrho(g(u^{\prime}),y)+c\,d(u,u^{\prime})\leq\lambda\,\varrho(g(u),y). \tag{18}\]
_Then \(g\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\)._
Proof.: Let \(x\in U\) and \(t\in(0,\gamma(x))\) be arbitrary. Pick any \(y\in B_{Y}(g(x),ct)\cap V\). Find a \(\lambda\in(0,1)\) such that the assumption containing (18) holds true. Let \(\varepsilon>0\) be arbitrary. We are about to find a \(u\in B_{X}(x,t)\) such that \(\varrho(g(u),y)\leq\varepsilon\). If \(\varrho(g(x),y)\leq\varepsilon\), then \(u:=x\) satisfies the conclusion. Assume that this is not the case. As \((1-\lambda)\varepsilon<\varepsilon<\varrho(g(x),y)<ct<\infty\), Corollary 2.6, with \(\varphi:=\varrho(g(\cdot),y)\), \(\eta:=cd\), and \(\varepsilon:=(1-\lambda)\varepsilon\), yields a point \(u\in X\) such that \(\varrho(g(u),y)+c\,d(u,x)\leq\varrho(g(x),y)\) and
\[\varrho(g(u^{\prime}),y)+c\,d(u^{\prime},u)>\varrho(g(u),y)-(1-\lambda) \varepsilon\quad\text{for every}\quad u^{\prime}\in X. \tag{19}\]
The first inequality implies that \(c\,d(u,x)\leq\varrho(g(x),y)<ct\) (\(<c\gamma(x)\)). Thus \(u\in B_{X}(x,t)\). Assume that \(\varrho(g(u),y)>\varepsilon\). Find a \(u^{\prime}\in X\) satisfying (18). Then
\[c\,d(u,u^{\prime}) \stackrel{{\eqref{eq:18}}}{{\leq}} \lambda\,\varrho(g(u),y)-\varrho(g(u^{\prime}),y)<\varrho(g(u),y) -(1-\lambda)\varepsilon-\varrho(g(u^{\prime}),y)\] \[\stackrel{{\eqref{eq:19}}}{{<}} \varrho(g(u^{\prime}),y)+c\,d(u^{\prime},u)-\varrho(g(u^{\prime }),y)=c\,d(u^{\prime},u),\]
a contradiction. Consequently, \(\varrho(g(u),y)\leq\varepsilon\). Hence \(\operatorname{dist}\big{(}y,g\big{(}B_{X}(x,t)\big{)}\big{)}\leq\varrho(y,g(u ))\leq\varepsilon\). Letting \(\varepsilon\downarrow 0\), we conclude that \(y\in\overline{g\big{(}B_{X}(x,t)\big{)}}\). Thus \(B_{Y}(g(x),ct)\cap V\subset\overline{g\big{(}B_{X}(x,t)\big{)}}\).
**Remark 4.2**.: We do not need any additional assumption on \(\gamma\) except for the ones ensuring that property (\(\mathcal{O}\)) is meaningful. Usually, the constraint (17) is stated in a weaker form. This excludes obtaining applicable conditions for (almost) semiregularity and, moreover, we need additional properties of \(\gamma\). Assume that \(\gamma\) is Lipschitz continuous on \(\widetilde{U}:=\cup_{x\in U}B_{X}(x,\gamma(x))\) with the constant \(1\). Given \(u\in X\), \(x\in U\), and \(y\in V\) satisfying (17), we have
\[\varrho(g(u),y)\leq\varrho(g(x),y)-c\,d(u,x)<c\,(\gamma(x)-d(u,x))\leq c\gamma (u);\]
so, in particular, \(d(u,x)<\gamma(x)\). Thus the assumption containing (18) can be replaced by a less precise and stronger one: _Assume that there is a \(\lambda\in(0,1)\) such that for each \(u\in\widetilde{U}\) and each \(y\in V\), with \(0<\varrho(g(u),y)<c\gamma(u)\), there is a point \(u^{\prime}\in X\) satisfying (18)._ If \(U\) is open and \(\gamma(x):=\operatorname{dist}(x,X\setminus U)\), \(x\in X\), then \(\gamma\) is positive on \(U\) and we arrive at _Milyutin almost regularity_.
An approximate version of Proposition 1.2 reads as:
**Corollary 4.3**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider \(c>0\) and \(r>0\), a point \(\bar{x}\in X\), and a mapping \(g:X\to Y\) defined on the whole \(X\). Assume that there is a \(\lambda\in(0,1)\) such that for each \(u\in B_{X}(\bar{x},r)\) and each \(y\in B_{Y}(g(\bar{x}),cr)\), with \(0<\varrho(g(u),y)<cr\), there is a point \(u^{\prime}\in X\) satisfying (18). Then \(g\) has property (\(\mathcal{O}\)) on \(B_{X}(\bar{x},r)\times B_{Y}(g(\bar{x}),cr)\) for \(c\) and \(\gamma(x):=r-d(x,\bar{x})\), \(x\in B_{X}(\bar{x},r)\). In particular, \(g\) is almost regular around \(\bar{x}\) with the constant \(c\)._
Proof.: Let \(U:=B_{X}(\bar{x},r)\) and \(V:=B_{Y}(g(\bar{x}),cr)\). Then \(\widetilde{U}:=\cup_{x\in U}B_{X}(x,\gamma(x))=B_{X}(\bar{x},r)=U\) and \(\gamma(u)\leq r\) for each \(u\in U\). In view of Remark 4.2, Proposition 4.1 implies the first conclusion. Let \(\delta:=\min\{r/2,cr\}\). For each \(x\in B_{X}(\bar{x},\delta)\) and each \(t\in(0,\delta)\) we have \(t<r/2<r-d(x,\bar{x})=\gamma(x)\) and, therefore,
\[B_{Y}(g(x),ct)\cap B_{Y}(g(\bar{x}),\delta)\subset B_{Y}(g(x),ct)\cap B_{Y}(g( \bar{x}),cr)\subset\overline{g\big{(}B_{X}(x,t)\big{)}}.\]
**Remark 4.4**.: Clearly, (18) implies (3). Hence if, in addition, the space \(X\) is complete and \(g\) is continuous then we get full regularity, cf. Proposition 1.2. In [20], (3) with a non-strict inequality is used under the additional assumption that the "better" point \(u^{\prime}\) is distinct from \(u\). Note that (3) yields this automatically.
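As a simple test of Corollary 4.3, and purely for illustration, consider the inclusion \(g:\mathbb{Q}\to\mathbb{R}\), \(g(x):=x\), a point \(\bar{x}\in\mathbb{Q}\), a constant \(c\in(0,1)\), and any \(r>0\). Fix \(\lambda\in(c,1)\). Given \(u\in B_{X}(\bar{x},r)\) and \(y\in B_{Y}(g(\bar{x}),cr)\) with \(|u-y|>0\), the density of \(\mathbb{Q}\) in \(\mathbb{R}\) provides a \(u^{\prime}\in\mathbb{Q}\) with \(|u^{\prime}-y|\leq\tfrac{\lambda-c}{2}\,|u-y|\); then \(|u-u^{\prime}|\leq\big(1+\tfrac{\lambda-c}{2}\big)|u-y|\) and \[|u^{\prime}-y|+c\,|u-u^{\prime}|\leq\Big(\tfrac{(\lambda-c)(1+c)}{2}+c\Big)|u-y|\leq\lambda\,|u-y|\] because \(c\leq 1\); hence (18) holds and \(g\) is almost regular around \(\bar{x}\) with the constant \(c\). Since \(\mathbb{Q}\) is not complete, Remark 4.4 does not upgrade this to full regularity, and indeed \(g\big(B_{X}(x,t)\big)\) contains no ball of \(\mathbb{R}\).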
A statement for set-valued mappings, _equivalent_ to Proposition 4.1, reads as:
**Proposition 4.5**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider \(c>0\) and \(\alpha\in(0,1/c)\), non-empty sets \(U\subset X\) and \(V\subset Y\), a function \(\gamma:X\to[0,\infty]\) being defined and not identically zero on \(U\), and a set-valued mapping \(G:X\rightrightarrows Y\). Assume that for each \(y\in V\) there is a \(\lambda=\lambda(y)\in(0,1)\) such that for each \((u,v)\in\operatorname{gph}G\) satisfying_
\[0<\varrho(v,y)\leq\varrho(z,y)-c\,\max\{d(u,x),\alpha\varrho(v,z)\}\quad\text{ and}\quad\varrho(z,y)<c\gamma(x)\]
_for some \(x\in U\) and \(z\in G(x)\), there is a pair \((u^{\prime},v^{\prime})\in\operatorname{gph}G\) such that_
\[\varrho(v^{\prime},y)+c\,\max\{d(u,u^{\prime}),\alpha\varrho(v,v^{\prime})\} \leq\lambda\,\varrho(v,y). \tag{20}\]
_Then \(G\) has property \((\mathcal{O})\) on \(U\times V\) for \(c\) and \(\gamma\)._
Proof.: Let \(\omega\), \(g\), and \(\widetilde{\gamma}\) be as in Remark 3.3. In view of this remark, it suffices to observe that all the assumptions of Proposition 4.1, with \((X,d):=(\operatorname{gph}G,\omega)\), \(U:=(U\times Y)\cap\operatorname{gph}G\), \(\gamma:=\widetilde{\gamma}\), are satisfied; which is obvious.
A statement for set-valued mappings, _equivalent_ to Corollary 4.3, reads as:
**Corollary 4.6**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider \(c>0\), \(r>0\), and \(\alpha\in(0,1/c)\), a point \((\bar{x},\bar{y})\in X\times Y\), and a set-valued mapping \(G:X\rightrightarrows Y\) with \(\bar{y}\in G(\bar{x})\). Assume that there is a \(\lambda\in(0,1)\) such that for each \(u\in B_{X}(\bar{x},r)\), each \(v\in G(u)\cap B_{Y}(\bar{y},r/\alpha)\), and each \(y\in B_{Y}(\bar{y},cr)\), with \(0<\varrho(v,y)<cr\), there is a pair \((u^{\prime},v^{\prime})\in\operatorname{gph}G\) satisfying (20). Then \(G\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},r)\times B_{Y}(\bar{y},cr)\) for \(c\) and \(\gamma(x):=r-d(x,\bar{x})\), \(x\in B_{X}(\bar{x},r)\). In particular, \(G\) is almost regular around \((\bar{x},\bar{y})\) with the constant \(c\)._
Another frequent choice is \(\gamma\equiv r\), for some \(r>0\). As in the case of regularity, if, in addition, \(U\times V\) is a neighborhood of the reference point, then this third parameter \(r\) can be neglected in exchange for a possibly smaller neighborhood, cf. [20, Exercise 2.12]. We illustrate this on another form of property \((\mathcal{R})\), equivalent to property \((\mathcal{O})\), which has become popular in recent years:
**Remark 4.7**.: Given a point \((\bar{x},\bar{y})\in X\times Y\) and a mapping \(G:X\rightrightarrows Y\) with \(\bar{y}\in G(\bar{x})\), assume that there are positive constants \(a\), \(b\), \(c\), and \(r\) such that
\[c\,\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1}\big{(}B_{Y}(y,\varepsilon)\big{)}\big{)}\leq\operatorname{dist}(y,G(x)) \tag{21}\]
for each \(x\in B_{X}(\bar{x},a)\) and each \(y\in B_{Y}(\bar{y},b)\) with \(\operatorname{dist}(y,G(x))<cr\). Let \(\beta:=\min\{a,b,cr/(1+c)\}\). Then (21) holds for each \((x,y)\in B_{X}(\bar{x},\beta)\times B_{Y}(\bar{y},\beta)\). Indeed, fix any such \((x,y)\). Pick an arbitrary \(v\in G(x)\) (if there is any). Assume that \(\varrho(y,v)\geq cr\). As \(\operatorname{dist}(y,G(\bar{x}))\leq\varrho(y,\bar{y})<\beta<cr\) and \(\beta\leq b\), (21) with \(x:=\bar{x}\), implies that
\[c\,\lim_{\varepsilon\downarrow 0}\operatorname{dist}\big{(}x,G^{-1} \big{(}B_{Y}(y,\varepsilon)\big{)}\big{)} \leq c\,d(x,\bar{x})+c\,\lim_{\varepsilon\downarrow 0}\operatorname{ dist}\big{(}\bar{x},G^{-1}\big{(}B_{Y}(y,\varepsilon)\big{)}\big{)}\] \[\leq c\,d(x,\bar{x})+\varrho(y,\bar{y})<c\beta+\beta\leq cr\leq \varrho(y,v).\]
If \(\varrho(y,v)<cr\), the same inequality follows from (21) directly. As \(v\in G(x)\) is arbitrary, we are done.
## 5. Perturbation stability
Applying Corollary 4.6, we get a local approximate Lyusternik-Graves theorem, cf. [14, Theorem 5E.1], in not necessarily complete spaces.
**Theorem 5.1**.: _Let \((X,d)\) be a metric space and \((Y,\varrho)\) be a linear metric space with a translation-invariant metric. Consider a point \((\bar{x},\bar{z})\in X\times Y\), a set-valued mapping \(F:X\rightrightarrows Y\) with \(\bar{z}\in F(\bar{x})\), and a single-valued mapping \(h:X\to Y\) defined and Lipschitz continuous around \(\bar{x}\). Then_
\[\overline{\operatorname{sur}}(F+h)(\bar{x},\bar{z}+h(\bar{x}))\geq\overline{ \operatorname{sur}}\,F(\bar{x},\bar{z})-\operatorname{lip}h(\bar{x}). \tag{22}\]
Proof.: Assume that \(\overline{\operatorname{sur}}\,F(\bar{x},\bar{z})>\operatorname{lip}h(\bar{x})\) because otherwise there is nothing to prove. Fix constants \(c\), \(c^{\prime}\), and \(\ell\) such that
\[\operatorname{lip}h(\bar{x})<\ell<c^{\prime}<c<\overline{\operatorname{sur}} \,F(\bar{x},\bar{z}).\]
Find a \(\gamma>0\) such that for each \(u\in B_{X}(\bar{x},\gamma)\), each \(w\in F(u)\), and each \(t\in(0,\gamma)\) we have
\[B_{Y}[w,ct]\cap B_{Y}(\bar{z},\gamma)\subset\overline{F(B_{X}[u,t])};\qquad \qquad\text{(see Remark \ref{lem:1}).} \tag{23}\]
Pick an \(r\in\big{(}0,\min\{\gamma,\gamma/(3c)\}\big{)}\) such that
\[\varrho(h(u),h(u^{\prime}))\leq\ell\,d(u,u^{\prime})\quad\text{for all}\quad u,u^{\prime}\in B_{X}(\bar{x},2r). \tag{24}\]
Let \(\alpha:=1/(2c)\) and \(\lambda:=(c+c^{\prime})/(2c)\). Then \(\lambda\in(0,1)\) and \((c^{\prime}-\ell)\alpha<1\). Let \(G:=F+h\). Pick arbitrary \(u\in B_{X}(\bar{x},r)\), \(v\in G(u)\cap B_{Y}(\bar{z}+h(\bar{x}),2cr)\), and \(y\in B_{Y}(\bar{z}+h(\bar{x}),(c^{\prime}-\ell)r)\) with \(0<\varrho(v,y)<(c^{\prime}-\ell)r\). We shall find a pair \((u^{\prime},v^{\prime})\in\operatorname{gph}G\) such that
\[\varrho(v^{\prime},y)+(c^{\prime}-\ell)\max\{d(u,u^{\prime}),\alpha\varrho(v,v ^{\prime})\}<\lambda\varrho(v,y). \tag{25}\]
Clearly, \(u\in B_{X}(\bar{x},\gamma)\). Let \(w:=v-h(u)\in F(u)\) and \(t:=\varrho(v,y)/c\). Note that \(0<t<(c^{\prime}-\ell)r/c<r<\gamma\). Further, using (24), we get that
\[\varrho(w,\bar{z}) = \varrho(v-h(u)+h(\bar{x}),\bar{z}+h(\bar{x}))\leq\varrho(v-h(u)+h (\bar{x}),v)+\varrho(v,\bar{z}+h(\bar{x}))\] \[= \varrho(h(\bar{x}),h(u))+\varrho(v,\bar{z}+h(\bar{x}))\leq\varrho (h(\bar{x}),h(u))+\varrho(v,y)+\varrho(y,\bar{z}+h(\bar{x}))\] \[< \ell r+(c^{\prime}-\ell)r+(c^{\prime}-\ell)r<2c^{\prime}r.\]
Thus \(y-h(u)\in B_{Y}[w,ct]\cap B_{Y}(\bar{z},\gamma)\) because \(\varrho(y-h(u),w)=\varrho(y,v)=ct\) and
\[\varrho(y-h(u),\bar{z})\leq\varrho(y-h(u),w)+\varrho(w,\bar{z})<ct+2c^{\prime }r<3cr<\gamma.\]
By (23), there are \(u^{\prime}\in B_{X}[u,t]\) and \(w^{\prime}\in F(u^{\prime})\) such that \(\varrho(y-h(u),w^{\prime})<(c-c^{\prime})t/2\). Then \(d(u^{\prime},\bar{x})\leq d(u^{\prime},u)+d(u,\bar{x})<t+r<2r\). By (24), we have \(\varrho(h(u),h(u^{\prime}))\leq\ell t\). Let \(v^{\prime}:=w^{\prime}+h(u^{\prime})\in G(u^{\prime})\). Then
\[\varrho(v^{\prime},y) = \varrho(w^{\prime}+h(u^{\prime}),y)\leq\varrho(w^{\prime}+h(u^{ \prime}),w^{\prime}+h(u))+\varrho(w^{\prime}+h(u),y) \tag{26}\] \[= \varrho(h(u),h(u^{\prime}))+\varrho(y-h(u),w^{\prime})<\ell t+(c- c^{\prime})t/2\] (27) \[= (c+c^{\prime})t/2-(c^{\prime}-\ell)t=\lambda\,\varrho(v,y)-(c^{ \prime}-\ell)t.\]
Inequality in (26) reveals that
\[\varrho(v,v^{\prime}) \leq \varrho(v,y)+\varrho(y,v^{\prime})<ct+\ell t+(c-c^{\prime})t/2=(3 c/2-c^{\prime}/2+\ell)t\] \[< (3c/2+c^{\prime}/2)t<2ct=t/\alpha.\]
Hence \(\alpha\varrho(v,v^{\prime})<t\); and as \(d(u,u^{\prime})\leq t\), (27) implies (25). Corollary 4.6, with \(c:=c^{\prime}-\ell\) and \(\bar{y}:=\bar{z}+h(\bar{x})\), yields that \(\overline{\operatorname{sur}}G(\bar{x},\bar{z}+h(\bar{x}))\geq c^{\prime}-\ell\). Letting \(c^{\prime}\uparrow\overline{\operatorname{sur}}F(\bar{x},\bar{z})\) and \(\ell\downarrow\operatorname{lip}h(\bar{x})\) we finish the proof.
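For illustration (this example is not needed in the sequel), let \(F:\mathbb{Q}\to\mathbb{R}\) be the inclusion \(F(x):=x\); by the density of \(\mathbb{Q}\) in \(\mathbb{R}\) we have \(\overline{F\big(B_{X}(x,t)\big)}=[x-t,x+t]\) for every \(x\in\mathbb{Q}\) and \(t>0\), so \(\overline{\operatorname{sur}}\,F(\bar{x},\bar{x})\geq 1\) for each \(\bar{x}\in\mathbb{Q}\). Let \(h(x):=\tfrac{1}{2}\sin x\), so that \(\operatorname{lip}h(\bar{x})\leq\tfrac{1}{2}\). Then (22) yields \[\overline{\operatorname{sur}}(F+h)(\bar{x},\bar{x}+h(\bar{x}))\geq 1-\tfrac{1}{2}=\tfrac{1}{2},\] although the image of \(F+h\) is countable and thus contains no ball of \(\mathbb{R}\); so only the approximate property can hold here.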
Theorem 5.1 implies the following Graves-type result:
**Theorem 5.2**.: _Let \((X,\|\cdot\|)\) and \((Y,\|\cdot\|)\) be normed spaces. Consider a point \(\bar{x}\in X\) and mappings \(f\), \(g:X\to Y\) both defined around \(\bar{x}\) and such that \(\operatorname{lip}(f-g)(\bar{x})=0\). Then \(\overline{\operatorname{sur}}f(\bar{x})=\overline{\operatorname{sur}}g(\bar{x})\)._
Proof.: Theorem 5.1, with \(F:=f\) and \(h:=g-f\); and \(F:=g\) and \(h:=f-g\), respectively, implies that \(\overline{\operatorname{sur}}g(\bar{x})\geq\overline{\operatorname{sur}}f(\bar {x})-\operatorname{lip}(g-f)(\bar{x})=\overline{\operatorname{sur}}f(\bar{x})\); and \(\overline{\operatorname{sur}}f(\bar{x})\geq\overline{\operatorname{sur}}g(\bar {x})-\operatorname{lip}(f-g)(\bar{x})=\overline{\operatorname{sur}}g(\bar{x})\).
Now, we consider set-valued perturbations as in [11, Theorems 3.1 and 3.10]. In contrast to [11], we employ neither the epigraphical multifunction nor the distance to the graph of the sum in the proof.
**Theorem 5.3**.: _Let \((X,d)\) be a metric space and \((Y,\varrho)\) be a linear metric space with a translation-invariant metric. Consider \(c\), \(\ell\in(0,\infty)\) such that \(\ell<c\) and \(a\), \(b\), \(r\), \(\delta\in(0,\infty]\), a point \((\bar{x},\bar{z},\bar{w})\in X\times Y\times Y\), and set-valued mappings \(F\), \(H:X\rightrightarrows Y\) with \(\bar{z}\in F(\bar{x})\) and \(\bar{w}\in H(\bar{x})\). Assume that_
* \(B_{Y}[z,ct]\cap B_{Y}(\bar{z},c(a+2r)+b+\delta)\subset\overline{F(B_{X}[u,t])}\) _for each_ \(u\in B_{X}(\bar{x},a+r)\)_, each_ \(z\in F(u)\cap B_{Y}(\bar{z},c(a+r)+b+\delta)\) _and each_ \(t\in(0,r)\)
* \(H(u)\cap B_{Y}(\bar{w},(a+r)\ell+\delta)\subset H(u^{\prime})+B_{Y}[0,\ell\,d(u,u^ {\prime})]\) _for each_ \(u\)_,_ \(u^{\prime}\in B_{X}(\bar{x},a+2r)\)_;_
* _for each_ \(u\in B_{X}(\bar{x},a+r)\) _and each_ \(v\in(F+H)(u)\cap B_{Y}(\bar{z}+\bar{w},b+cr)\) _there is a_ \(w\in H(u)\cap B_{Y}(\bar{w},(a+r)\ell+\delta)\) _such that_ \(v\in F(u)+w\)_._
_Then \(F+H\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},a)\times B_{Y}(\bar{z}+\bar{w},b)\) for \(c-\ell\) and \(\gamma\equiv r\)._
Proof.: Pick an arbitrary \(c^{\prime}\in(\ell,c)\). Let \(\alpha:=1/(2c)\) and \(\lambda:=(c+c^{\prime})/(2c)\). Then \(\lambda\in(0,1)\) and \((c^{\prime}-\ell)\alpha<1\). Let \(G:=F+H\). Pick arbitrary \(u\in B_{X}(\bar{x},a+r)\), \(v\in G(u)\), and \(y\in B_{Y}(\bar{z}+\bar{w},b)\) with \(0<\varrho(v,y)<(c^{\prime}-\ell)r\). We shall find a pair \((u^{\prime},v^{\prime})\in\operatorname{gph}G\) such that (25) holds true.
As \(\varrho(v,\bar{z}+\bar{w})\leq\varrho(v,y)+\varrho(y,\bar{z}+\bar{w})<(c^{ \prime}-\ell)r+b\), using (C), there is a \(w\in H(u)\cap B_{Y}(\bar{w},(a+r)\ell+\delta)\) such that \(v\in F(u)+w\). Let \(z:=v-w\in F(u)\) and \(t:=\varrho(v,y)/c\). Note that \(0<t<(c^{\prime}-\ell)r/c\leq r\) and
\[\varrho(z,\bar{z}) = \varrho(v-w,\bar{z})\leq\varrho(v-w,v-\bar{w})+\varrho(v-\bar{w},\bar{z})=\varrho(w,\bar{w})+\varrho(v,\bar{z}+\bar{w})\] \[< (a+r)\ell+\delta+(c^{\prime}-\ell)r+b\leq c(a+r)+b+\delta.\]
Thus \(y-w\in B_{Y}[z,ct]\cap B_{Y}(\bar{z},c(a+2r)+b+\delta)\) because \(\varrho(y-w,z)=\varrho(y,v)=ct\) and
\[\varrho(y-w,\bar{z})\leq\varrho(y-w,z)+\varrho(z,\bar{z})<ct+c(a+r)+b+\delta \leq c(a+2r)+b+\delta.\]
By (A), there is a \(u^{\prime}\in B_{X}[u,t]\) and \(z^{\prime}\in F(u^{\prime})\) such that \(\varrho(y-w,z^{\prime})<(c-c^{\prime})t/2\). So \(d(u^{\prime},\bar{x})\leq d(u^{\prime},u)+d(u,\bar{x})<t+a+r\leq a+2r\). As \(w\in H(u)\cap B_{Y}(\bar{w},(a+r)\ell+\delta)\), using (B), we find a \(w^{\prime}\in H(u^{\prime})\) such that \(\varrho(w,w^{\prime})\leq\ell\,d(u,u^{\prime})\leq\ell t\). Let \(v^{\prime}:=z^{\prime}+w^{\prime}\in G(u^{\prime})\). Then
\[\varrho(v^{\prime},y) = \varrho(z^{\prime}+w^{\prime},y)\leq\varrho(z^{\prime}+w^{\prime },z^{\prime}+w)+\varrho(z^{\prime}+w,y) \tag{28}\] \[= \varrho(w,w^{\prime})+\varrho(y-w,z^{\prime})<\ell t+(c-c^{\prime })t/2=(c+c^{\prime})t/2-(c^{\prime}-\ell)t\] (29) \[= \lambda\,\varrho(v,y)-(c^{\prime}-\ell)t.\]
Inequality in (28) reveals that
\[\varrho(v,v^{\prime}) \leq \varrho(v,y)+\varrho(y,v^{\prime})<ct+\ell t+(c-c^{\prime})t/2=(3c /2-c^{\prime}/2+\ell)t\] \[< (3c/2+c^{\prime}/2)t<2ct=t/\alpha.\]
Hence \(\alpha\varrho(v,v^{\prime})<t\); and as \(d(u,u^{\prime})\leq t\), (29) implies (25).
Proposition 4.5, with \(c:=c^{\prime}-\ell\), \(U:=B_{X}(\bar{x},a)\), \(V:=B_{Y}(\bar{z}+\bar{w},b)\), and \(\gamma\equiv r\), implies the conclusion, cf. Remark 4.2 and Remark 3.13.
The above statement covers approximate versions of several results. First, we consider the global Lyusternik-Graves theorem, see [14, Theorem 5I.2], [11, Corollaries 3.2 and 3.11], or [23, Corollary 3.4]:
**Corollary 5.4**.: _Let \((X,d)\) be a metric space and \((Y,\varrho)\) be a linear metric space with a translation-invariant metric. Consider \(c\), \(\ell\in(0,\infty)\) such that \(\ell<c\), an \(r\in(0,\infty]\), and set-valued mappings \(F\), \(H:X\rightrightarrows Y\). Assume that \(F\) has
property \((\mathcal{O})\) on \(X\times Y\) for \(c\) and \(\gamma\equiv r\) and that \(H\) is Hausdorff-Lipschitz on \(X\) with the constant \(\ell\). Then the mapping \(F+H\) has property \((\mathcal{O})\) on \(X\times Y\) for \(c-\ell\) and \(\gamma\equiv r\)._
Proof.: Fix any \((\bar{x},\bar{v})\in\operatorname{gph}\left(F+H\right)\) and any \(a\in(0,\infty)\). Let \(b:=\infty\) and \(\delta:=\operatorname{diam}\,H(\bar{x})\). Find \(\bar{z}\in F(\bar{x})\) and \(\bar{w}\in H(\bar{x})\) such that \(\bar{v}=\bar{z}+\bar{w}\). Clearly, assumption (B) holds true. Given arbitrary \(u\in X\), \(z\in F(u)\), and \(t\in(0,r)\), we have \(B_{Y}(z,ct)\subset\overline{F(B_{X}(u,t))}\subset\overline{F(B_{X}[u,t])}\); hence \(B_{Y}[z,ct]\subset\overline{F(B_{X}[u,t])}\). Therefore (A) holds true. To show (C), fix any \(u\in B_{X}(\bar{x},a+r)\) and \(v\in(F+H)(u)\). There is a \(w\in H(u)\) such that \(v\in F(u)+w\). Find \(\bar{w}^{\prime}\in H(\bar{x})\) such that \(\varrho(w,\bar{w}^{\prime})\leq\ell d(u,\bar{x})<\ell(a+r)\). Then
\[\varrho(w,\bar{w})\leq\varrho(w,\bar{w}^{\prime})+\varrho(\bar{w}^{\prime}, \bar{w})<\ell(a+r)+\delta; \tag{30}\]
therefore (C) holds. Theorem 5.3 implies that \(F+H\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},a)\times Y\) for \(c-\ell\) and \(\gamma\equiv r\); where \(a>0\) is arbitrary.
**Remark 5.5**.: Let the assumptions of Corollary 5.4, with \(r=\infty\), be satisfied. Let \(\kappa:=1/c\). Then, in view of Proposition 3.2 and Remark 3.5, for each \(x\in X\), each \(y\in Y\), and each \(\varepsilon>0\) there is a sequence \((y_{k})_{k\in\mathbb{N}}\) converging to \(y\) such that
\[\operatorname{dist}\left(x,(F+H)^{-1}(y_{k})\right)\leq\frac{\kappa}{1-\kappa \ell}\left(\operatorname{dist}(y,(F+H)(x))+\varepsilon\right)\quad\text{for each}\quad k\in\mathbb{N}.\]
Hence we get the conclusion in [2, Theorem 13], where \(F\) is assumed to be globally regular (instead of _almost_ globally regular).
Second, we derive approximate analogues of [11, Corollaries 3.4 and 3.13] going back to [1, Theorem 3.2]:
**Corollary 5.6**.: _Let \((X,d)\) be a metric space and \((Y,\varrho)\) be a linear metric space with a translation-invariant metric. Consider positive \(a\), \(b\), \(c\), and \(\ell\) such that \(\ell<c\), a point \((\bar{x},\bar{z},\bar{w})\in X\times Y\times Y\), and set-valued mappings \(F\), \(H:X\rightrightarrows Y\) with \(\bar{z}\in F(\bar{x})\) and \(\bar{w}\in H(\bar{x})\). Assume that \(F\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},a)\times B_{Y}(\bar{z},b)\) for \(c\) and \(\gamma\equiv\infty\), that \(H\) is Hausdorff-Lipschitz on \(B_{X}(\bar{x},a)\) with the constant \(\ell\), and that \(\operatorname{diam}\,H(\bar{x})<b\). Then for each \(\beta>0\) such that_
\[\beta(3+2/(c-\ell))<a\text{ and }\beta(3c+2c/(c-\ell)+1)+\operatorname{diam}\,H( \bar{x})<b, \tag{31}\]
_the mapping \(F+H\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},\beta)\times B_{Y}(\bar{z}+\bar{w},\beta)\) for \(c-\ell\) and \(\gamma\equiv\infty\)._
Proof.: Let \(\beta\) satisfy (31). Put \(\delta:=\operatorname{diam}\,H(\bar{x})\) and \(r:=(1+c-\ell)\beta/(c-\ell)\). We check assumptions (A), (B), and (C) of Theorem 5.3 with \((a,b):=(\beta,\beta)\). Assumption (B) holds true because \(\beta+2r=\beta(3+2/(c-\ell))<a\). Given arbitrary \(u\in B_{X}(\bar{x},a)\), \(z\in F(u)\), and \(t\in(0,r)\), we have \(B_{Y}(z,ct)\cap B_{Y}(\bar{z},b)\subset\overline{F(B_{X}(u,t))}\subset\overline {F(B_{X}[u,t])}\); thus \(B_{Y}[z,ct]\cap B_{Y}(\bar{z},b)\subset\overline{F(B_{X}[u,t])}\). Since \(\beta+r=\beta(2+1/(c-\ell))<a\) and \(c(\beta+2r)+\beta+\delta=\beta(3c+2c/(c-\ell)+1)+\delta<b\), we
conclude that (A) is satisfied. To show that (C) holds, fix any \(u\in B_{X}(\bar{x},\beta+r)\) and \(v\in(F+H)(u)\). There is a \(w\in H(u)\) such that \(v\in F(u)+w\). As \(H\) is Hausdorff-Lipschitz on \(B_{X}(\bar{x},a)\) and \(\beta+r<a\), there is a \(\bar{w}^{\prime}\in H(\bar{x})\) such that \(\varrho(w,\bar{w}^{\prime})\leq\ell d(u,\bar{x})<\ell(\beta+r)\). Hence (30) with \(a:=\beta\) holds; therefore so does (C). Theorem 5.3 implies that \(F+H\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},\beta)\times B_{Y}(\bar{z}+\bar{w},\beta)\) for \(c-\ell\) and \(\gamma\equiv r=(1+c-\ell)\beta/(c-\ell)\). Noting that \(\min\{\beta,\beta,r(c-\ell)/(1+c-\ell)\}=\beta\), Remark 4.7 and Proposition 3.2 reveal that \(F+H\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},\beta)\times B_{Y}(\bar{z}+\bar{w},\beta)\) for \(c-\ell\) and \(\gamma\equiv\infty\).
Instead of the condition on the diameter one can assume the so-called local sum-stability introduced by M. Durea and R. Strugariu [16].
**Definition 5.7**.: _Let \((X,d)\) be a metric space and \((Y,\varrho)\) be a linear metric space. Consider a point \((\bar{x},\bar{z},\bar{w})\in X\times Y\times Y\) and set-valued mappings \(F\), \(H:X\rightrightarrows Y\) such that \(\bar{z}\in F(\bar{x})\) and \(\bar{w}\in H(\bar{x}).\) We say that the pair \((F,H)\) is sum-stable around \((\bar{x},\bar{z},\bar{w})\) if for each \(\xi>0\) there exists a \(\beta>0\) such that, for each \(u\in B_{X}(\bar{x},\beta)\) and each \(v\in(F+H)(u)\cap B_{Y}(\bar{z}+\bar{w},\beta)\), there exist \(z\in F(u)\cap B_{Y}(\bar{z},\xi)\) and \(w\in H(u)\cap B_{Y}(\bar{w},\xi)\) such that \(v=z+w\)._
Using the above notion, we obtain an approximate version of [11, Corollaries 3.7 and 3.15] going back to [24, Theorem 8] and generalizing Theorem 5.1, because if a mapping \(h:X\to Y\) is single-valued and Lipschitz continuous around (even calm at) \(\bar{x}\), then the pair \((F,h)\) is sum-stable around \((\bar{x},\bar{z},h(\bar{x}))\) for any \(F:X\rightrightarrows Y\) with \(\bar{z}\in F(\bar{x})\).
**Corollary 5.8**.: _Let \((X,d)\) be a metric space and \((Y,\varrho)\) be a linear metric space with a translation-invariant metric. Consider a point \((\bar{x},\bar{z},\bar{w})\in X\times Y\times Y\) and set-valued mappings \(F\), \(H:X\rightrightarrows Y\) with \(\bar{z}\in F(\bar{x})\) and \(\bar{w}\in H(\bar{x})\). Assume that \(\operatorname{lip}H(\bar{x},\bar{w})<\infty\) and that the pair \((F,H)\) is sum-stable around \((\bar{x},\bar{z},\bar{w})\). Then_
\[\overline{\operatorname{sur}}(F+H)(\bar{x},\bar{z}+\bar{w})\geq\overline{ \operatorname{sur}}\,F(\bar{x},\bar{z})-\operatorname{lip}H(\bar{x},\bar{w}). \tag{32}\]
Proof.: Assume that \(\overline{\operatorname{sur}}\,F(\bar{x},\bar{z})>\operatorname{lip}H(\bar{x},\bar{w})\) because otherwise there is nothing to prove. Fix constants \(c\) and \(\ell\) such that \(\operatorname{lip}H(\bar{x},\bar{w})<\ell<c<\overline{\operatorname{sur}}\,F( \bar{x},\bar{z})\). Find a \(\beta>0\) such that \(F\) has property \((\mathcal{O})\) on \(B_{X}(\bar{x},\beta)\times B_{Y}(\bar{z},\beta)\) for \(c\) and \(\gamma\equiv\beta\) and that \(H\) has Aubin property on \(B_{X}(\bar{x},\beta)\times B_{Y}(\bar{w},\beta)\) with the constant \(\ell\). Let \(\delta:=\beta/2\). Find a \(\beta^{\prime}\in(0,\beta/2)\) such that for each \(u\in B_{X}(\bar{x},\beta^{\prime})\) and each \(v\in(F+H)(u)\cap B_{Y}(\bar{z}+\bar{w},\beta^{\prime})\), there exist \(z\in F(u)\cap B_{Y}(\bar{z},\delta)\) and \(w\in H(u)\cap B_{Y}(\bar{w},\delta)\) such that \(v=z+w\). Let \(r:=\beta^{\prime}/(3c+2)\).
We check assumptions (A), (B), and (C) of Theorem 5.3 with \((a,b):=(r,r)\). Assumption (C) holds true because \(r+r<\beta^{\prime}\), \(r+cr<\beta^{\prime}\), and \((r+r)\ell+\delta>\delta\). Since \((r+r)\ell+\delta<2rc+\delta<\beta^{\prime}+\delta<\beta\) and \(r+2r<3\beta^{\prime}/2<\beta\), assumption (B) is verified. Given arbitrary \(u\in B_{X}(\bar{x},\beta)\), \(z\in F(u)\), and \(t\in(0,\beta)\), we have \(B_{Y}(z,ct)\cap B_{Y}(\bar{z},\beta)\subset\overline{F(B_{X}(u,t))}\subset \overline{F(B_{X}[u,t])}\); thus \(B_{Y}[z,ct]\cap B_{Y}(\bar{z},\beta)\subset\overline{F(B_{X}[u,t])}\).
\(\overline{F(B_{X}[u,t])}\). Since \(r+r<\beta^{\prime}<\beta\) and \(c(r+2r)+r+\delta<\beta^{\prime}+\delta<\beta\), we conclude that (A) is satisfied.
Theorem 5.3 implies that \(\overline{\operatorname{sur}}\,(F+H)(\bar{x},\bar{z}+\bar{w})\geq c-\ell\). Letting \(c\uparrow\overline{\operatorname{sur}}\,F(\bar{x},\bar{z})\) and \(\ell\downarrow\operatorname{lip}H(\bar{x},\bar{w})\) we finish the proof.
## 6. Linear and Affine Mappings
A topological vector space \(X\), that is, a vector space over \(\mathbb{R}\) equipped with a Hausdorff topology such that the mappings \((u,x)\longmapsto u+x\) and \((\lambda,x)\longmapsto\lambda x\) are continuous on the spaces \(X\times X\) and \(\mathbb{R}\times X\) with respect to their product topologies, is said to be a _locally convex space_ if every neighborhood of the origin contains a convex neighborhood of the origin. Every neighborhood of the origin is an absorbing set. A locally convex space \(X\) is said to be _barreled_ if each closed convex circled absorbing subset of \(X\) is a neighborhood of the origin in \(X\). In every locally convex space there is a neighborhood basis of the origin consisting of closed convex circled (hence symmetric) sets. Every Frechet space, i.e., a complete metrizable locally convex space, and every locally convex space that is a Baire space, i.e., a space in which countable unions of closed sets with empty interior also have empty interior, is barreled.
A linear operator \(A\) acting between topological vector spaces \(X\) and \(Y\) is said to be _almost open at \(x\in X\)_ if for each neighborhood \(U\) of \(x\) in \(X\) the set \(\overline{A(U)}\) is a neighborhood of \(Ax\) in \(Y\). In metrizable topological vector spaces, \(A\) is almost open around each point if and only if \(A\) is almost open at each point if and only if \(A\) is almost open at the origin. Moreover, if \(A\) acts between normed spaces then
\[\overline{\operatorname{sur}}A=\sup\big{\{}c>0:\ \overline{A(B_{X})}\supset cB_{Y }\big{\}}.\]
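As an illustration only, let \(X:=c_{00}\), the space of finitely supported real sequences, and \(Y:=c_{0}\), the space of real sequences converging to zero, both equipped with the supremum norm, and let \(A:X\to Y\) be the inclusion. Truncating an arbitrary \(y\in B_{Y}\) produces elements of \(A(B_{X})\) converging to \(y\), so \(\overline{A(B_{X})}\supset B_{Y}\) and \(\overline{\operatorname{sur}}A\geq 1\); alternatively, \(\|Ax\|=\|x\|\) for every \(x\in X\) and \(\overline{A(X)}=Y\), so Proposition 6.2 below applies with \(\alpha=1\). Nevertheless, \(A\) is not open at any point because \(A(X)=c_{00}\) is a proper dense subspace of \(c_{0}\) and thus has empty interior; here the domain \(X\) is not complete.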
The almost openness modulus is a Lipschitz continuous function on the space of bounded linear operators \(\mathcal{BL}(X,Y)\) equipped with the operator norm; this is a quantitative version of [19, Theorem 1.2].
**Corollary 6.1**.: _Let \((X,\|\cdot\|)\) and \((Y,\|\cdot\|)\) be normed spaces. The function \(\mathcal{BL}(X,Y)\ni A\longmapsto\overline{\operatorname{sur}}A\) is Lipschitz continuous with the constant \(1\). In particular, the almost open linear operators form an open subset of \(\mathcal{BL}(X,Y)\)._
Proof.: Let \(A\), \(B\in\mathcal{BL}(X,Y)\) be arbitrary. As \(\operatorname{lip}(A-B)=\|A-B\|\), Theorem 5.1 implies that
\[\overline{\operatorname{sur}}B\geq\overline{\operatorname{sur}}A-\|A-B\|\quad \text{and}\quad\overline{\operatorname{sur}}A\geq\overline{\operatorname{sur}} B-\|A-B\|.\]
Since \(\overline{\operatorname{sur}}A\leq\|A\|<\infty\) and \(\overline{\operatorname{sur}}B\leq\|B\|<\infty\), a combination of the displayed formulas above yields that
\[-\|A-B\|\leq\overline{\operatorname{sur}}A-\overline{\operatorname{sur}}B\leq \overline{\operatorname{sur}}A-\overline{\operatorname{sur}}A+\|A-B\|=\|A-B\|.\]
In particular, given an \(A\in\mathcal{BL}(X,Y)\) with \(\overline{\operatorname{sur}}A>0\), we have \(\overline{\operatorname{sur}}B>0\) for each \(B\in\mathcal{BL}(X,Y)\) with \(\|A-B\|<\overline{\operatorname{sur}}A\).
It is elementary that if an \(A\in\mathcal{BL}(X,Y)\) is almost open then \(\overline{A(X)}=Y\). On the other hand, a sufficient condition for almost openness of a linear bounded mapping is contained in [19, Theorem 1.3].
**Proposition 6.2**.: _Let \((X,\|\cdot\|)\) and \((Y,\|\cdot\|)\) be normed spaces. Consider an \(A\in\mathcal{BL}(X,Y)\) such that \(\overline{A(X)}=Y\) and that there is an \(\alpha>0\) such that \(\alpha\|x\|\leq\|Ax\|\) for each \(x\in X\). Then \(\overline{\mathrm{sur}}A\geq\alpha\)._
Proof.: We show that \(\overline{A(B_{X})}\supset\alpha B_{Y}\). Let a non-zero \(y\in\alpha B_{Y}\) be arbitrary. There is a sequence \((x_{k})\) in \(X\) such that \(\|y-Ax_{k}\|\to 0\) as \(k\to\infty\). Assume without any loss of generality that \(0<\|Ax_{k}\|<2\alpha\) for each \(k\in\mathbb{N}\). Let \(u_{k}:=\frac{\|y\|}{\|Ax_{k}\|}x_{k}\), \(k\in\mathbb{N}\). For each \(k\in\mathbb{N}\), we have \(\alpha\|u_{k}\|\leq\|Au_{k}\|=\frac{\|y\|}{\|Ax_{k}\|}\|Ax_{k}\|=\|y\|\leq\alpha\), thus \(u_{k}\in B_{X}\); and \(\|y-Au_{k}\|\leq\|y-Ax_{k}\|+\|Ax_{k}-Au_{k}\|=\|y-Ax_{k}\|+\big{|}\|Ax_{k}\| -\|y\|\big{|}\ \leq 2\|y-Ax_{k}\|\to 0\) as \(k\to\infty\).
Any \(A\in\mathcal{BL}(X,Y)\) satisfying the latter assumption in Proposition 6.2 is said to be _bounded below_ in [19] and by [19, Theorem 1.4]: _If \(A\in\mathcal{BL}(X,Y)\) is in the topological boundary of the set of all linear almost open mappings then \(A\) is not bounded below._
The common proof of the classical Banach open mapping theorem, e.g. in Frechet spaces, is divided into two steps. First, the almost openness of the linear surjective mapping under consideration is established. Second, one shows that the almost openness implies the openness provided that this mapping is continuous. A variant of the open mapping theorem for generally non-metrizable locally convex spaces goes back to Ptak [26]. Let us focus on the first step and formulate a simple statement:
**Proposition 6.3**.: _Let \(X\) and \(Y\) be locally convex spaces and \(Y\) be barreled. Then each linear mapping from \(X\) onto \(Y\) is almost open._
Proof.: Pick an arbitrary linear mapping \(A:X\to Y\) such that \(A(X)=Y\). Let \(U\) be a neighborhood of \(0\) in \(X\). There exists a convex circled neighborhood \(U^{\prime}\) of \(0\) such that \(U^{\prime}\subset U\). Then \(V:=\overline{A(U^{\prime})}\) is a closed convex circled set. Being a neighborhood of \(0\), the set \(U^{\prime}\) is absorbing in \(X\), hence \(A(U^{\prime})\) is absorbing in \(A(X)=Y\); and therefore so is \(V\). Since \(Y\) is barreled, the set \(V\) is a neighborhood of \(0\) in \(Y\), hence so is \(\overline{A(U)}\).
Now, we focus on affine mappings.
**Definition 6.4**.: _Let \(X\) and \(Y\) be vector spaces. A mapping \(A:X\to Y\) is said to be affine if the domain of \(A\) is convex and \(A(\lambda u+(1-\lambda)x)=\lambda A(u)+(1-\lambda)A(x)\) for each \(u\), \(x\in\operatorname{dom}A\) and each \(\lambda\in[0,1]\)._
A restriction of a linear mapping on a convex subset \(K\) of \(X\) is affine. On the other hand, any affine mapping \(A\) can be extended to an affine mapping \(\widetilde{A}\) defined
on the whole space \(X\) such that the restriction of \(\widetilde{A}\) to \(\operatorname{dom}A\) is \(A\) and the mapping \(\widetilde{A}-\widetilde{A}(0)\) is linear. The next result generalizes the first step of the proof of the Banach open mapping theorem by following a standard pattern which combines the Baire category theorem [21, Theorem 5.1.1] and the _line segment principle_ [21, Proposition 6.2.2]: _Let \(K\) be an open convex subset of a Hausdorff topological vector space and let \(\lambda\in(0,1]\). Then \(\lambda\,K+(1-\lambda)\overline{K}\subset K\)._
**Proposition 6.5**.: _Let \(X\) be a locally convex space and \(Y\) be a complete metrizable topological vector space. Consider a point \(x\in X\), an affine mapping \(A:X\to Y\) with \(x\in\operatorname{dom}A\) and \(A(x)\in\operatorname{core}A(X)\), and a sequence \((K_{p})\) of convex subsets of \(\operatorname{dom}A\) such that_
\[K_{\infty}:=\operatorname{dom}A=\bigcup_{p\in\mathbb{N}}\,K_{p}\quad\text{and} \quad x\in K_{p}\subset K_{p+1}\quad\text{for each}\quad p\in\mathbb{N}. \tag{33}\]
_Let \(\alpha\in(0,1]\) and let \(U\) be a neighborhood of \(x\) in \(x+\alpha(K_{\infty}-x)\). Then there exists a \(q\in\mathbb{N}\) such that the set \(\overline{A\big{(}U\cap(x+\alpha(K_{p}-x))\big{)}}\) is a neighborhood of \(A(x)\) in \(Y\) for each \(p\in\{q,q+1,\dots\}\). If, in addition, for such an index \(p\) the set \(U\cap K_{p}\) is bounded, then for each neighborhood \(W\) of \(x\) in \(x+\alpha(K_{p}-x)\) the set \(\overline{A(W)}\) is a neighborhood of \(A(x)\) in \(Y\)._
Proof.: Without any loss of generality assume that \(x=0\) and \(A(0)=0\). Let \(U\) and \(\alpha\) be as in the premise. Let \(K:=\alpha K_{\infty}\). Since \(0\in\operatorname{core}A(K_{\infty})\) and \(A(K)=\alpha A(K_{\infty})\), we have \(0\in\operatorname{core}A(K)\). Find a convex neighborhood \(V\) of \(0\) in \(X\) such that \(V\cap K\subset U\). Let \(V_{p}:=V\cap\alpha K_{p}\), \(p\in\mathbb{N}\).
Fix an arbitrary \(y\in Y\). As \(0\in\operatorname{core}A(K)\), we have \(\bigcup_{\lambda>0}\lambda A(K)=Y\). Find an \(i\in\mathbb{N}\) such that \(y=iA(x)\) for some \(x\in K\). Since \(V\) is a convex neighborhood of \(0\) in \(X\) and \(K\) is the union of convex sets \(\alpha K_{p}\) (all containing \(0\)), there exist positive integers \(j\) and \(l\) such that \(x/j\in V\cap\alpha K_{l}=:V_{l}\). Thus
\[y=iA(x)=ij\left(\tfrac{1}{j}A(x)+\big{(}1-\tfrac{1}{j}\big{)}A(0)\right)=ijA \big{(}\tfrac{1}{j}x\big{)}\in ijA(V_{l}).\]
Since \(y\in Y\) is arbitrary, we get that
\[\bigcup_{k=1}^{\infty}\bigcup_{p=1}^{\infty}k\,A(V_{p})=Y,\quad\text{where} \quad V_{p}:=V\cap\alpha K_{p},\ p\in\mathbb{N}. \tag{34}\]
Baire's category theorem says that there is a \(q\in\mathbb{N}\) such that the set \(M_{q}:=\overline{A(V_{q})}\) has an interior point, say \(y\). As \(V_{p}\subset V_{p+1}\) for each \(p\in\mathbb{N}\), (34) reveals that, taking larger \(q\) if necessary, we may assume that \(-y\in kM_{q}\) for some \(k\in\mathbb{N}\). The set \(V_{q}\) is convex, thus so are \(A(V_{q})\) and \(M_{q}\). Since \(-y/k\in M_{q}\) and \(y\in\operatorname{int}M_{q}\), the line segment principle says that
\[0\in\operatorname{int}M_{q}\subset M_{q}=\overline{A(V\cap\alpha K_{q})}= \overline{A(V\cap K\cap\alpha K_{q})}\subset\overline{A(U\cap\alpha K_{q})}.\]
As \(\alpha K_{q}\subset\alpha K_{q+1}\subset\dots\subset\alpha K_{\infty}\), the proof of the first part is finished.
Fix any \(p\in\{q,q+1,\dots\}\) such that \(U_{p}:=U\cap K_{p}\) is a bounded set. Let \(W\) be an arbitrary neighborhood of \(0\) in \(\alpha K_{p}\). Find a convex neighborhood \(\Omega\) of \(0\) in \(X\) such that \(\Omega\cap\alpha K_{p}\subset W\). Then there exists a \(\lambda>0\) such that \(U_{p}\subset\lambda\Omega\). As \(\Omega\) is convex, we may assume that \(\lambda\geq 1\). Then \(\alpha K_{p}\subset\lambda\alpha K_{p}\) because \(0\in K_{p}\), thus
\[U\cap\alpha K_{p}=U\cap K_{p}\cap\alpha K_{p}=U_{p}\cap\alpha K_{p}\subset \lambda\Omega\cap\lambda\alpha K_{p}=\lambda(\Omega\cap\alpha K_{p}).\]
For \(\gamma:=1/\lambda\), the set \(\gamma M_{q}\) is a neighborhood of \(0\) in \(Y\) and
\[\gamma M_{q}\subset\gamma\overline{A(U\cap\alpha K_{p})}\subset\overline{A \left(\gamma(U\cap\alpha K_{p})\right)}\subset\overline{A(\Omega\cap\alpha K_{ p})}\subset\overline{A(W)}.\]
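As a sanity check (this specialization contains nothing new), let \(A\) be a linear mapping from a locally convex space \(X\) onto a complete metrizable topological vector space \(Y\). Taking \(\operatorname{dom}A=K_{p}:=X\) for every \(p\in\mathbb{N}\), \(x:=0\), and \(\alpha:=1\), we have \(A(0)=0\in\operatorname{core}A(X)=Y\) and \(x+\alpha(K_{\infty}-x)=X\); so Proposition 6.5 shows that \(\overline{A(U)}\) is a neighborhood of the origin in \(Y\) for every neighborhood \(U\) of the origin in \(X\), that is, \(A\) is almost open at the origin and hence, by translation, at every point. This recovers, without any continuity assumption on \(A\), the first step in the proof of the open mapping theorem discussed above.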
## 7. Concluding remarks and future work
\(\mathbf{R}_{1}\)**.** Assumption \((\mathcal{A}_{3})\) seems to be crucial in order to get full forms of Theorems 2.7 and 2.9 as well as of the corresponding corollaries described in Remark 2.10. This consideration is left for an interested reader (if any).
\(\mathbf{R}_{2}\)**.** Definition 3.1 and Proposition 3.2 can be generalized in three basic ways. First, we can allow \(\mu\) to be a continuous non-negative strictly increasing function on \([0,\infty)\) such that \(\lim_{t\downarrow 0}\mu(t)=\mu(0)=0\) and \(\lim_{t\to\infty}\mu(t)=\infty\) instead of \(\mu(t):=\mu t\). Second, one can consider a non-empty set \(W\subset X\times Y\) instead of \(U\times V\). These modifications are obvious, cf. [20, Definition 2.92 and Theorem 2.95] and [28, Proposition 2.8], respectively. Roughly speaking, one adds brackets around the quantities following \(\mu\) and works with \(\mu^{-1}\) instead of \(1/c\) in the first case. In the latter, one replaces \(U\) and \(V\) by either the projections of \(W\) onto \(X\) and \(Y\), respectively, defined by \(W_{X}:=\{x\in X:\ (x,y)\in W\text{ for some }y\in Y\}\) and \(W_{Y}:=\{y\in Y:\ (x,y)\in W\text{ for some }x\in X\}\); or the fibers \(W_{.,y}:=\{x\in X:\ (x,y)\in W\}\) for \(y\in W_{Y}\) and \(W_{x,\cdot}:=\{y\in Y:\ (x,y)\in W\}\) for \(x\in W_{X}\), see also Proposition 7.1 below. Third, one can consider directional versions of the properties as in [12, 8]. In the case of openness, one uses "balls" \(B_{X}^{\eta}(x,r)\), where \(\eta\) satisfies \((\mathcal{A}_{1})-(\mathcal{A}_{3})\), instead of the usual ones; while for the remaining two equivalent properties one employs the so-called minimal time function with respect to a set, see [8, Definition 1 and Proposition 3].
\(\mathbf{R}_{3}\)**.** Similarly to Corollaries 4.3 and 4.6, taking \(U\times V\) as \(U\times\{\bar{y}\}\) and \(\{\bar{x}\}\times V\), respectively, we obtain (obvious) sufficient conditions for almost subregularity and semiregularity at a point. More importantly, we can replace \(U\times V\) by a general non-empty set \(W\subset X\times Y\) to obtain approximate versions of statements in [11], e.g. Proposition 2.6 therein. Moreover, using Theorem 2.5, one can merge regularity and almost regularity together, e.g. Proposition 1.2 and Corollary 4.3. Such a generalized Proposition 4.1 reads as:
**Proposition 7.1**.: _Let \((X,d)\) and \((Y,\varrho)\) be metric spaces. Consider a \(c>0\), a non-empty set \(W\subset X\times Y\), a function \(\gamma:X\to[0,\infty]\) being defined and not identically zero on \(W_{X}\), and a mapping \(g:X\to Y\) defined on the whole
\(X\). Assume that for each \(y\in W_{Y}\) there is a \(\lambda=\lambda(y)\in(0,1)\) such that for each \(u\in X\) satisfying (17) for some \(x\in W_{\cdot,y}\), there is a point \(u^{\prime}\in X\) such that (18) holds true. Then for each \(x\in W_{X}\), each \(t\in(0,\gamma(x))\), and each \(y\in B_{Y}(g(x),ct)\cap W_{x,\cdot}\) there is a Cauchy sequence \((u_{k})_{k\in\mathbb{N}}\) in \(B_{X}\big{[}x,\varrho(g(x),y)/c\big{]}\) such that \(\lim_{k\to\infty}\varrho(g(u_{k}),y)=0\)._
_In particular, \(B_{Y}(g(x),ct)\cap W_{x,\cdot}\subset\overline{g\big{(}B_{X}(x,t)\big{)}}\) for each \(x\in W_{X}\) and each \(t\in(0,\gamma(x))\). If, in addition, \(X\) is complete and \(g\) is continuous, then \(B_{Y}(g(x),ct)\cap W_{x,\cdot}\subset g\big{(}B_{X}(x,t)\big{)}\) for each \(x\in W_{X}\) and each \(t\in(0,\gamma(x))\)._
Proof.: Let \(x\in W_{X}\), \(t\in(0,\gamma(x))\), and \(y\in B_{Y}(g(x),ct)\cap W_{x,\cdot}\subset W_{Y}\) be arbitrary.
If \(y=g(x)\), then put \(u_{k}:=x\), \(k\in\mathbb{N}\), and we are done. Assume that this is not the case. Let \(\varphi:=\varrho(g(\cdot),y)\) and \(\eta:=cd\). Apply Theorem 2.5, to find an \(N\in\mathbb{N}\cup\{\infty\}\) and \((u_{k})_{k=1}^{N}\), with \(u_{1}=x\), satisfying (i) and (ii) therein. If \(N<\infty\), then put \(u_{k}:=u_{N}\), \(k\in\{N+1,N+2,\dots\}\). Let \(\beta_{k}:=\varrho(g(u_{k}),y)\), \(k\in\mathbb{N}\). By (i), we get that
\[(0\leq)\quad c\,d(u_{j},u_{k})\leq\beta_{k}-\beta_{j}\quad\text{for each}\quad j,k\in \mathbb{N}\quad\text{with}\quad k\leq j. \tag{35}\]
As \(u_{1}=x\), the sequence \((u_{k})_{k\in\mathbb{N}}\) lies in \(B_{X}\big{[}x,\varrho(g(x),y)/c\big{]}\) because \(\beta_{1}=\varrho(g(x),y)\). Also the sequence \((\beta_{k})_{k\in\mathbb{N}}\) is decreasing (and bounded from below); hence it converges, to a \(\beta\in\big{[}0,\varrho(g(x),y)\big{]}\). Employing (35) again, we infer that \((u_{k})_{k\in\mathbb{N}}\) is a Cauchy sequence because \((\beta_{k})_{k\in\mathbb{N}}\) is such. Assume that \(\beta>0\). Find a \(\lambda\in(0,1)\) such that the assumption containing (17) and (18) holds true. By (ii) in Theorem 2.5 with \(\varepsilon:=(1-\lambda)\beta\), there is an index \(j\in\mathbb{N}\) such that
\[\varrho(g(u^{\prime}),y)+c\,d(u^{\prime},u_{j})>\varrho(g(u_{j}),y)-(1-\lambda )\beta\quad\text{for each}\quad u^{\prime}\in X. \tag{36}\]
Since \((\beta_{k})_{k\in\mathbb{N}}\) is decreasing, (35), with \(k=1\), implies that
\[\beta\leq\beta_{j}=\varrho(g(u_{j}),y)\leq\varrho(g(x),y)-c\,d(u_{j},x).\]
As \(\varrho(g(x),y)<ct<c\gamma(x)\) and \(x\in W_{\cdot,y}\) because \((x,y)\in W\), there is a \(u^{\prime}\in X\) such that
\[\varrho(g(u^{\prime}),y)+c\,d(u_{j},u^{\prime})\leq\lambda\,\varrho(g(u_{j}), y). \tag{37}\]
Since \(\beta\leq\varrho(g(u_{j}),y)\), we get that
\[c\,d(u_{j},u^{\prime}) \stackrel{{\eqref{eq:2}}}{{\leq}} \lambda\varrho(g(u_{j}),y)-\varrho(g(u^{\prime}),y)\leq\varrho(g( u_{j}),y)-(1-\lambda)\beta-\varrho(g(u^{\prime}),y)\] \[\stackrel{{\eqref{eq:2}}}{{<}} \varrho(g(u^{\prime}),y)+c\,d(u^{\prime},u_{j})-\varrho(g(u^{ \prime}),y)=c\,d(u^{\prime},u_{j}),\]
a contradiction. Hence \(\beta=0\), which proves the first conclusion. As \(\varrho(g(x),y)<ct\), the (whole) sequence \((u_{k})_{k\in\mathbb{N}}\) lies in \(B_{X}(x,t)\). Consequently, \(y\in\overline{g\big{(}B_{X}(x,t)\big{)}}\).
Assume, in addition, that \(X\) is complete and \(g\) is continuous. Then \((u_{k})_{k\in\mathbb{N}}\) converges, to a \(u\in B_{X}\big{[}x,\varrho(g(x),y)/c\big{]}\subset B_{X}(x,t)\) and \(0=\lim_{k\to\infty}\varrho(g(u_{k}),y)=\varrho(g(u),y)\). Consequently, \(y\in g\big{(}B_{X}(x,t)\big{)}\)
Obviously, one can also obtain versions of [11, Proposition 2.7 and Corollary 2.8]. Finally, as described in \(\mathbf{R}_{2}\) one can consider directional versions of the properties to get approximate versions of the Ioffe-type criteria in [12, 8].
\(\mathbf{R}_{4}\)**.** Employing an _equivalent_ set-valued version of Proposition 7.1, Theorem 5.1 can be generalized to cover [11, Theorem 3.8], implying [28, Theorems 4.2 and 4.3], as well as an approximate version of it in incomplete spaces for regularity on a fixed set \(W\subset X\times Y\). When the paper was ready for submission we learned about [27, Theorems 5.1(a) and 5.2(a)], being precisely statements of this type for Milyutin regularity. By our approach, we get almost regularity of \(F+h\) under the (weaker) assumption that \(F\) is Milyutin _almost_ regular on \(U\times V\) instead of Milyutin regular on this set.
Second, assumption (B) in Theorem 5.3 can be augmented as
\[\begin{array}{ll}(\widetilde{B})&H(u)\cap B_{Y}(\bar{w},(a+r)\ell+\delta) \subset\overline{H(u^{\prime})+B_{Y}[0,\ell\,d(u,u^{\prime})]}\text{ for each }u,\,u^{\prime}\in B_{X}(\bar{x},a+2r).\end{array}\]
Indeed, in the proof, one finds \(z^{\prime}\in F(u^{\prime})\) such that \(\varrho(y-w,z^{\prime})<(c-c^{\prime})t/4\) and then, using \((\widetilde{B})\), a \(w^{\prime}\in H(u^{\prime})\) such that \(\varrho(w,w^{\prime})\leq\ell\,d(u,u^{\prime})+(c-c^{\prime})t/4\).
Finally, one can investigate directional versions as well as the almost regularity of compositions similarly to [8].
**Acknowledgement.** The author is grateful to Abderrahim Hantoute, University of Alicante, and to Tomas Roubal, The Institute of Information Theory and Automation (UTIA) in Prague, for a detailed discussion on numerous preliminary versions of the manuscript, insightful suggestions improving the presentation, and last but not least for their support and friendship. Proposition 6.5 originates from an unpublished joint work with Jiri Reif, who was the author's Ph.D. supervisor, initiated his interest in open mapping theorems, and unfortunately passed away before the corresponding thesis was defended.
|
2310.13816 | On topological properties of compact attractors on Hausdorff spaces | We characterize when a compact, invariant, asymptotically stable attractor on
a locally compact Hausdorff space is a strong deformation retract of its domain
of attraction. | Wouter Jongeneel | 2023-10-20T21:06:23Z | http://arxiv.org/abs/2310.13816v2 | # On topological properties of compact attractors on Hausdorff spaces
###### Abstract
We characterize when a compact, invariant, asymptotically stable attractor on a locally compact Hausdorff space is a strong deformation retract of its domain of attraction.
## I Introduction
The purpose of this note is to improve our understanding of topological properties of compact asymptotically stable attractors and their respective domain of attraction. Here, we will almost exclusively appeal to topological tools pioneered by Borsuk [1]. In particular, we will elaborate on the _retraction theoretic_ work by Moulay & Bhat [2], which itself is a generalization of the seminal works [3, Thm. 21], [4] and [5, Thm. 4.1].
After Poincare and Lyapunov (Liapunov), the modern qualitative study of attractors was largely propelled through the monographs by Birkhoff [6] and Nemytskii & Stepanov [7], with influential follow-up works by Auslander, Bhatia & Siebert [8], Wilson [9], Hahn [10], Bhatia & Szego [11], Conley [12], Milnor [13] and many others, _e.g._, see [14, Ch. 1].
Lately, attractors have been extensively studied through the lens of _shape theory_, _e.g._, see [15, 16, 17] and [18, Prop. 1], with the seminal work of Gunther & Segal showing that a finite-dimensional compact subset of a manifold can be an attractor if and only if it has the shape of a finite polyhedron [19].
The interest in understanding topological properties of attractors and their respective domain of attraction stems from the simple observation that if a certain dynamical system does not exist, then certainly there is no feedback law resulting in a closed-loop dynamical system with precisely those dynamics.
Indeed, this type of study often provides _necessary_ conditions of the form that an attractor must be _equivalent_ in some sense to its domain of attraction. With that in mind, one seeks a notion of equivalence that is weak enough to cover many dynamical systems, yet also strong enough to obtain insights, _e.g._, obstructions. Hence, although shape equivalence is more widely applicable [20] and in that sense more fundamental, we focus on _homotopy equivalence_ with the aim of recovering stronger necessary conditions.
In the same spirit, by further restricting the problem class, one could even look for stronger notions of equivalence as recently done in [21]. There, in the context of a vector-field guided path-following problem, homotopy equivalences have been strengthened to topological equivalences (homeomorphisms).
Although we focus on continuous dynamical systems, one can link work of this form to families of differential inclusions [22]. Indeed, further partial generalizations of [2] to nonsmooth dynamical system are presented in [23].
_Notation and technical preliminaries_: The _identity map_ on a space \(X\) is denoted by \(\mathrm{id}_{X}\), that is, \(\mathrm{id}_{X}:x\mapsto x\ \forall x\in X\). The (embedded) \(n\)-sphere is the set \(\mathbb{S}^{n}:=\{x\in\mathbb{R}^{n+1}:\langle x,x\rangle=1\}\), with the (closed) \(n\)-disk being \(\mathbb{D}^{n}:=\{x\in\mathbb{R}^{n}:\langle x,x\rangle\leq 1\}\). The topological boundary of a space \(X\) is denoted by \(\partial X\), _e.g._, \(\partial\mathbb{D}^{n+1}=\mathbb{S}^{n}\). We use \(\simeq_{h}\) to denote homotopy (equivalence), see Section. II-B.
A topological space \(X\) is said to be a _locally compact Hausdorff_ space when: (i) for any \(x\in X\) there is a compact set \(K\subseteq X\) containing an open neighbourhood \(U\) of \(x\) and; (ii) for any \(x_{1},x_{2}\in X\) there are open neighbourhoods \(U_{1}\ni x_{1}\) and \(U_{2}\ni x_{2}\) such that \(U_{1}\cap U_{2}=\emptyset\), _e.g._, see [24, p. 31, 104]. Examples of locally compact Hausdorff spaces are: \(\mathbb{R}^{n}\), topological manifolds, the Hilbert cube, any discrete space and so forth. In particular, any compact Hausdorff space is locally compact. Regarding counterexamples, a space \(X\) with the trivial topology \(\tau=\{X,\emptyset\}\) is not Hausdorff and any infinite-dimensional Hilbert space is a Hausdorff topological vector space, yet, it fails to be locally compact, see also [25, Thm. 29.1].
## II Continuous dynamical systems
In this note we study continuous (global) _semi-dynamical systems_ compromised of the triple \(\Sigma:=(\mathsf{M},\varphi,\mathbb{R}_{\geq 0})\). Here, \(\mathsf{M}\) is a locally compact Hausdorff space and \(\varphi:\mathbb{R}_{\geq 0}\times\mathsf{M}\to\mathsf{M}\) is a (global) semi-flow, that is, a continuous map that satisfies for any \(x\in\mathsf{M}\):
1. \(\varphi(0,x)=x\) (_identity_ axiom); and
2. \(\varphi(s,\varphi(t,x))=\varphi(t+s,x)\ \forall s,t\in\mathbb{R}_{\geq 0}:=\{t\in \mathbb{R}:t\geq 0\}\) (_semi-group_ axiom).
We say that a point \(x\in\mathsf{M}\) is a _start point_ (under \(\Sigma\)) if \(\forall(t,y)\in\mathbb{R}_{>0}\times\mathsf{M}\) we have that \(\varphi^{t}(y)\neq x\). Differently put, \(x\in\mathsf{M}\) is a starting point when a flow starting from \(x\) cannot be extended backwards, see [26, Ex. 5.14] for an example. To avoid confusion, the evaluation of an integral curve at \(0\) is sometimes called a "_starting point_" [27, p. 206], which is not what we are talking about here. Then, to eliminate the existence of start points we appeal to [26, Prop. 1.7], for instance, we can consider semi-flows generated by a smooth vector field. Concretely, let \(F\in\Gamma^{\infty}(T\mathsf{M})\) be a smooth vector field on a smooth manifold \(\mathsf{M}\). It is well-known that
under these conditions, for each \(p\in\mathsf{M}\) there is an \(\varepsilon>0\) such that \(\gamma:(-\varepsilon,\varepsilon)\to\mathsf{M}\) is an integral curve of \(F\) with \(\gamma(0)=p\) [27, Prop. 9.2], that is, in terms of the (local) flow \(\varphi:(-\varepsilon,\varepsilon)\times\mathsf{M}\to\mathsf{M}\) we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\varphi^{t}(p)\bigg{|}_{t=s}=F\left(\varphi^{s}( p)\right),\quad s\in(-\varepsilon,\varepsilon).\]
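For concreteness, the following added example (ours, not taken from the cited references) satisfies all of the above: the linear vector field \(F(x)=-x\) on \(\mathsf{M}=\mathbb{R}^{n}\) generates the global semi-flow
\[\varphi(t,x)=e^{-t}x,\qquad(t,x)\in\mathbb{R}_{\geq 0}\times\mathbb{R}^{n},\]
which satisfies the identity and semi-group axioms, has no start points (each \(\varphi^{t}\) is surjective since \(x=\varphi^{t}(e^{t}x)\)), and renders \(A=\{0\}\) a compact, invariant, globally asymptotically stable attractor in the sense of Definition II.2 below, with \(\mathcal{D}_{\Sigma}(A)=\mathbb{R}^{n}\).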
Hence, with the previous observation in mind we will assume the following throughout the remainder of the note.
**Assumption II.1** (Start points).: _The set of start points under the semi-dynamical system \(\Sigma\) is empty._
### _Stability_
We will exclusively focus on a subclass of semi-dynamical systems with practically relevant stability properties.
**Definition II.2** (Attractor).: _Given some global semi-dynamical system \(\Sigma=(\mathsf{M},\varphi,\mathbb{R}_{\geq 0})\), then, a compact set \(A\subseteq\mathsf{M}\) is said to be an invariant, local asymptotically stable, attractor when_
1. \(\varphi(\mathbb{R}_{\geq 0},A)=A\) _(invariance); and_
2. _for any open neighbourhood_ \(U_{\varepsilon}\subseteq\mathsf{M}\) _of_ \(A\) _there is an open neighbourhood_ \(U_{\delta}\subseteq U_{\varepsilon}\subseteq\mathsf{M}\) _of_ \(A\) _such that_ \(\varphi(\mathbb{R}_{\geq 0},U_{\delta})\subseteq U_{\varepsilon}\) _(Lyapunov stability), plus, there is an open neighbourhood_ \(W\subseteq\mathsf{M}\) _of_ \(A\) _such that all semi-flows initialized in_ \(W\) _converge to_ \(A\) _(local attractivity), that is, for any_ \(p\in W\) _and any open neighbourhood_ \(V\subseteq\mathsf{M}\) _of_ \(A\) _there is a_ \(T\geq 0\) _such that_ \(\varphi^{t}(p)\in V\) \(\forall t\geq T\)_._
The combination of Lyapunov stability and local attractivity is referred to as _local asymptotic stability_. When the neighbourhood \(W\) in Item (ii) can be chosen to be all of \(\mathsf{M}\) we speak of _global_ asymptotic stability. Local asymptotic stability is also captured by the existence of an open neighbourhood \(U\subseteq\mathsf{M}\) of \(A\) such that \(\cap_{t\geq 0}\varphi^{t}(U)=A\) [28, Lem. 1.6].
One can find several definitions of _"attractors"_ in the literature, see for instance [11, Def. V.1.5], [13] and [20, Sec. 2.2].
**Definition II.3** (Domain of attraction).: _Let the compact set \(A\subseteq\mathsf{M}\) be an invariant, local asymptotically stable attractor under the semi-dynamical system \(\Sigma=(\mathsf{M},\varphi,\mathbb{R}_{\geq 0})\), then, its domain of attraction is_
\[\mathcal{D}_{\Sigma}(A)=\{p\in\mathsf{M}:\text{for any open neighbourhood }U\subseteq\mathsf{M}\text{ of }A\text{ there is a }T\geq 0\text{ such that }\varphi^{t}(p)\in U\ \forall t\geq T\}.\]
Definition II.3 can be equivalently written in terms of convergent subsequences. Topological properties of attractors \(A\subseteq\mathsf{M}\) and their respective domain of attraction \(\mathcal{D}_{\Sigma}(A)\subseteq\mathsf{M}\) are an active topic of study since the 1960s [11, 14].
To elaborate on the introduction, the interest stems from the observation that the (numerical) analysis or synthesis, _e.g._, via feedback control, of dynamical systems \(\Sigma\) can be involved, while topological properties of the pair \((\mathcal{D}_{\Sigma}(A),A)\) might be readily available. Here, topological knowledge of \(\mathcal{D}_{\Sigma}(A)\) is frequently used to study if some _"desirable"_ domain of attraction is admissible. For instance, one can show that no point \(p\in\mathbb{S}^{1}\) can be a global asymptotically stable attractor under any \(\Sigma=(\mathbb{S}^{1},\varphi,\mathbb{R}_{\geq 0})\), _e.g._, see Theorem II.6 below. The intuition being that for this to be true the circle \(\mathbb{S}^{1}\) needs to be torn apart, which is obstructed by demanding \(\varphi\) to be continuous, see also [14, Fig. 1.1]. Again, we emphasize that conclusions of this form emerge without involved analysis of any particular system \(\Sigma\).
### _Retraction theory_
The previous example can be understood through \(\mathbb{S}^{1}\) not being _contractible_, that is, \(\mathbb{S}^{1}\) is not _homotopy equivalent_ to a point \(p\). Formally, two topological spaces \(X\) and \(Y\) are said to have the same _homotopy type_ when they are homotopy equivalent1, that is, there are continuous maps \(f:X\to Y\) and \(g:Y\to X\) such that \(f\circ g\simeq_{h}\mathrm{id}_{Y}\) and \(g\circ f\simeq_{h}\mathrm{id}_{X}\). In some sense, this notion is a more general version of a deformation retract--which we recall below, and is most naturally understood through invariance in differential- [29] and algebraic topology [30]. As alluded to, now, we recall that \(A\subseteq X\) is a _retract_ of \(X\) when there is a map \(r:X\to A\) such that \(r\circ\iota_{A}=\mathrm{id}_{A}\). The set \(A\) is said to be a _deformation retract_ of \(X\) when \(A\) is a retract and additionally \(\iota_{A}\circ r\simeq_{h}\mathrm{id}_{X}\), implying that \(X\) is homotopy equivalent to \(A\). When, additionally, the homotopy is _stationary relative to \(A\)_, we speak of a _strong deformation retract_.
Footnote 1: More abstractly, homotopies are isomorphisms in the homotopy category of topological spaces.
**Remark II.4** (Deformation retracts).: _The literature does not agree on what a "deformation retract" is. For instance, Hatcher calls strong deformation retracts simply deformation retracts and speaks of "deformation retracts in the weak sense" where we would speak of simply a deformation retract [30, Ch. 0]. This should be contrasted with for instance the 1965 text of Hu [31, Sec. 1.11]._
Next, a set \(A\subseteq X\) is said to be a _weak deformation retract_ of \(X\) when every open neighbourhood \(U\supseteq A\) contains a strong deformation retract \(V\supseteq A\) of \(X\).
In this note we will elaborate on the following result due to Moulay & Bhat. In particular, we aim to understand when \(\mathcal{D}_{\Sigma}(A)\) strongly deformation retracts onto \(A\) and not just to a subset of a neighbourhood around \(A\).
**Theorem II.5** ([2, Thm. 5]).: _Suppose that the compact set \(A\subseteq\mathsf{M}\) is an invariant, local asymptotically stable attractor under the semi-dynamical system \(\Sigma=(\mathsf{M},\varphi,\mathbb{R}_{\geq 0})\), then, \(A\) is a weak deformation retract of \(\mathcal{D}_{\Sigma}(A)\)._
It is well-known that when \(\mathsf{M}\) is a smooth manifold and \(A\) is an embedded submanifold of \(\mathsf{M}\), then, \(A\) is a strong neighbourhood deformation retract of \(\mathsf{M}\) and thus \(A\) is homotopy equivalent to \(\mathcal{D}_{\Sigma}(A)\) [2, Prop. 10], see also [32]. Our aim is to provide further, especially weaker, conditions for this to be true.
Indeed, Theorem II.5 can be seen as a generalization of the following well-known result due to Sontag.
**Theorem II.6** ([3, Thm. 21]).: _Suppose that the point \(A:=\{p\}\subseteq\mathsf{M}\) is an invariant, global asymptotically stable attractor under the semi-dynamical system \(\Sigma=(\mathsf{M},\varphi,\mathbb{R}_{\geq 0})\), then, \(\mathcal{D}_{\Sigma}(A)\) is contractible._
We aim to strengthen this generalization. We do this by appealing to neighbourhood retracts. A set \(A\subseteq X\) is a _neighbourhood retract_ of \(X\) when there is an open neighbourhood \(U\subseteq X\) of \(A\) such that \(A\) is retract of \(U\). This definition extends naturally to (strong) deformation retracts. Now indeed, if for instance \(\mathsf{M}\) weakly deformation retracts onto \(A\subseteq\mathsf{M}\) while \(A\) is a neighbourhood deformation retract of \(\mathsf{M}\), then \(\mathsf{M}\) deformation retracts onto \(A\) and thus \(A\simeq_{h}\mathsf{M}\)[2, Thm. 4].
In this note we focus on homotopy equivalence, a similar but weaker notion is that of _shape equivalence_, understood as being for Cech (co)homology what homotopy theory is for singular (co)homology. Indeed, only for sufficiently "_nice_" spaces, singular cohomology and Cech cohomology agree. For a concise introduction to shape theory, in the sense of Fox [33], we refer the reader to [20, Sec. 3]. The crux is to work with open neighbourhoods of a set and not solely with the set itself. For instance, the Warsaw circle \(\mathbb{W}^{1}\), as studied below in Example III.8 is not homotopy equivalent to the circle \(\mathbb{S}^{1}\) but the two spaces are shape equivalent.
## III Cofibrations
It follows from Theorem II.5 that for \(A\) to be homotopy equivalent to \(\mathcal{D}_{\Sigma}(A)\), it suffices for \(A\) to be a neighbourhood deformation retract of \(\mathsf{M}\). We will appeal to _cofibrations_ to capture this property.
The theory of cofibrations emerges from the so-called "_extension problem_", that is, understanding when a continuous map \(f:A\subseteq X\to Y\) can be extended from \(A\) to all of \(X\). A typical counterexample is any map \(f:\mathbb{S}^{n}\to\mathbb{S}^{n}\) of nonzero degree, that is, when \(\deg(f)\neq 0\), \(f\) cannot be extended from the \(n\)-sphere \(\mathbb{S}^{n}\simeq\partial\mathbb{D}^{n+1}\) to the \((n+1)\)-disk \(\mathbb{D}^{n+1}\) [29, Sec. 2.4].
Now, cofibrations tell us, loosely speaking, whether maps that can be extended lead to homotopies that can be extended. To define this, we need the following. Let \(X\) be a topological space and \(A\subseteq X\), then, a pair \((X,A)\) has the _homotopy extension property_ (HEP) when, for any \(Y\), the diagram
(1)
can always be completed (the _dotted arrow_1 can be found) to be commutative. Differently put, given a homotopy \(H:A\times[0,1]\to Y\) and some map \(g:X\to Y\) such that \(H(\cdot,0)=g|_{A}\), one needs to be able to extend the homotopy from \(A\) to \(X\). Pick \(Y=(A\times[0,1])\cup(X\times\{0\})\), then we see that \((X,A)\) having the HEP implies that \((A\times[0,1])\cup(X\times\{0\})\) is a retract of \(X\times[0,1]\). On the other hand, one can show that the existence of such a retract implies that \((X,A)\) has the HEP, that is, these two notions are equivalent, see Theorem III.2 below. See also [34, p. 13] for a stronger result.
Footnote 1: Our notion of cofibration is aligned with the so-called _Hurewicz cofibrations_. We will not discuss _Serre cofibrations_, which are most easily understood through _fibrations_, a notion dual to that of cofibrations, _e.g._, see [35, Ch. 7].
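To spell out the first implication noted above (HEP \(\Rightarrow\) retract; an added remark): apply the HEP with the target \(Y=(A\times[0,1])\cup(X\times\{0\})\) and the maps
\[g:X\to Y,\;x\mapsto(x,0),\qquad H:A\times[0,1]\to Y,\;(a,t)\mapsto(a,t),\]
which agree at \(t=0\); the resulting extension \(G:X\times[0,1]\to Y\) satisfies \(G(x,0)=(x,0)\) and \(G(a,t)=(a,t)\), hence restricts to the identity on \(Y\) and is precisely the desired retraction.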
Then, a continuous map \(i:A\to X\) is said to be a _cofibration_2 if the following commutative diagram
Footnote 2: We focus on pairs \((X,A)\) such that \(A\subseteq X\), this inclusion is, however, not required for a cofibration to be well-defined. Nevertheless, to make sense of (1) one should work with \((A\times[0,1])\cup_{i}X\) instead \((A\times[0,1])\cup(X\times\{0\})\), that is, with the so-called “_mapping cylinder_” as further discussed below.
(2) [commutative diagram, analogous to (1), expressing the homotopy extension property for the map \(i:A\to X\)]
can be completed for any \(Y\). Loosely speaking, the map \(i\) is a cofibration when it has the HEP.
Next, we need a slight variation of the aforementioned notions of retraction, that of a _neighbourhood deformation retract pair_ (NDR pair).
**Definition III.1** (NDR pair).: _A pair \((X,A)\) is said to be a NDR pair if:_
1. _there is a continuous map_ \(u:X\to[0,1]\) _such that_ \(A=u^{-1}(0)\)_; and_
2. _there is a homotopy_ \(H:X\times[0,1]\to X\) _such that_ \(H(\cdot,0)=\mathrm{id}_{X}\)_,_ \(H(x,s)=x\ \forall x\in A\)_,_ \(\forall s\in[0,1]\) _and_ \(H(x,1)\in A\) _if_ \(u(x)<1\)_._
See that if \(u(x)<1\) for all \(x\in X\), then, \(A\) is a strong deformation retract of \(X\). In general, however, we cannot assume \(u\) to be of this form. See that for \((X,A)\) to be a NDR pair, \(A\) must be closed. Now, a useful result is the following.
**Theorem III.2** ([35, Ch. 6]).: _Let \(A\) be closed in \(X\), then, the following are equivalent:_
1. _the inclusion_ \(\iota_{A}:A\hookrightarrow X\) _is a cofibration;_
2. \((A\times[0,1])\cup(X\times\{0\})\) _is a retract of_ \(X\times[0,1]\)_;_
3. \((X,A)\) _is a NDR pair._
To be somewhat self-contained, we provide intuition regarding Item (ii) and Item (iii). In particular, we highlight continuity.
Proof (sketch).: Suppose we have the retract \(r:X\times[0,1]\to(A\times[0,1])\cup(X\times\{0\})\). Define the projections \(\pi_{1}:X\times[0,1]\to X\) and \(\pi_{2}:X\times[0,1]\to[0,1]\). Next, define the map \(u:X\to[0,1]\) via
\[u(x)=\sup_{\tau\in[0,1]}\{\tau-\pi_{2}(r(x,\tau))\}, \tag{3}\]
where the supremum is attained since \([0,1]\) is compact and \(\pi_{2}\) and \(r\) are continuous. Now define the homotopy \(H:X\times[0,1]\to X\) by \(H(x,s)=\pi_{1}(r(x,s))\). Indeed, one can readily check that the pair \((u,H)\) satisfies the properties required for \((X,A)\) to be a NDR pair. Note that since the retract \(r\) is continuous, we have that \(u\) is not identically \(0\) when \(X\setminus A\neq\emptyset\). The map \(u\) constructed through (3) is in fact continuous since \([0,1]\) is compact and both \(\pi_{2}\) and \(r\) are continuous, _e.g._, one can appeal to the simplest setting of Berge's maximum theorem [36].
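To spell out the least obvious of these checks (an added remark, using that \(A\) is closed):
\[x\in A\;\Longrightarrow\;r(x,\tau)=(x,\tau)\ \forall\tau\;\Longrightarrow\;u(x)=0,\]
\[u(x)=0\;\Longrightarrow\;\pi_{2}(r(x,\tau))\geq\tau>0\;\Longrightarrow\;r(x,\tau)\in A\times(0,1]\ \forall\tau\in(0,1],\]
and letting \(\tau\to 0\) yields \(x=\pi_{1}(r(x,0))=\lim_{\tau\to 0}\pi_{1}(r(x,\tau))\in\overline{A}=A\), so indeed \(A=u^{-1}(0)\).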
Note that in general, \((A\times[0,1])\cup(X\times\{0\})\) will be equivalent to the _mapping cylinder_ under the inclusion map \(\iota_{A}:A\hookrightarrow X\), that is, \(M\iota_{A}=((A\times[0,1])\cup X)/\sim\) with \((a,0)\sim\iota_{A}(a)\) for all \(a\in A\), also denoted by \((A\times[0,1])\cup_{\iota_{A}}X\). Equivalence can possibly fail when the product and quotient topologies under consideration do not match.
For illustrative purposes, we end this section with the collection of a powerful result. Omitting the details, we recall that \(X\) is a _CW complex_ when \(X\) can be constructed via iteratively "_glueing_" \(n\)-cells, being topological disks \(\mathbb{D}^{n}\), along their boundary to an \((n-1)\)-dimensional CW complex, with a \(0\)-dimensional CW complex being simply a set of discrete points. For instance, the circle \(\mathbb{S}^{1}\) can be constructed from a single point and the interval. Then, a set \(A\subseteq X\) is a _subcomplex_ of the CW complex \(X\) when it is closed and a union of cells of \(X\). For more on CW complexes, we refer to [30, Ch. 0] and [24, Ch. 5].
**Proposition III.3** (CW complexes [30, Prop. 0.16]).: _Let \(X\) be a CW complex and \(A\subseteq X\) a subcomplex, then, the inclusion map \(\iota_{A}:A\hookrightarrow X\) is a cofibration._
Proposition III.3 hinges on \(\iota_{\mathbb{S}^{n-1}}:\mathbb{S}^{n-1}\hookrightarrow\mathbb{D}^{n}\) being a cofibration, which can be shown by verifying that \((\mathbb{D}^{n},\mathbb{S}^{n-1})\) is a NDR pair; however, using a strategy of more general use, one can show there is a (strong deformation) retract from \(\mathbb{D}^{n}\times[0,1]\) onto \((\partial\mathbb{D}^{n}\times[0,1])\cup(\mathbb{D}^{n}\times\{0\})\) [30, p. 15], _e.g._, fix a point \((0,c)\in\mathbb{D}^{n}\times\mathbb{R}_{\geq 2}\) and project every point of \(\mathbb{D}^{n}\times[0,1]\) radially from \((0,c)\) onto \((\partial\mathbb{D}^{n}\times[0,1])\cup(\mathbb{D}^{n}\times\{0\})\); the line segment between a point and its projection then provides for the homotopy.
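For completeness, one explicit choice of this retraction (an added formula; we take \(c=2\)) is
\[r(x,t)=\begin{cases}\left(\frac{2x}{2-t},\,0\right)&\text{if }2\|x\|\leq 2-t,\\ \left(\frac{x}{\|x\|},\,2-\frac{2-t}{\|x\|}\right)&\text{if }2\|x\|\geq 2-t,\end{cases}\qquad(x,t)\in\mathbb{D}^{n}\times[0,1],\]
obtained by projecting each \((x,t)\) along the ray from \((0,2)\) through \((x,t)\); the two branches agree when \(2\|x\|=2-t\), the map \(r\) fixes \((\partial\mathbb{D}^{n}\times[0,1])\cup(\mathbb{D}^{n}\times\{0\})\) pointwise, and by convexity the straight-line homotopy between \(\mathrm{id}\) and \(r\) stays in \(\mathbb{D}^{n}\times[0,1]\), providing the strong deformation retraction.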
CW complexes are fairly general, yet, properties that obstruct \(X\) admitting a CW decomposition are for instance: (i) \(X\) failing to be locally contractible [30, Prop. A4]; and (ii) \(X\) failing to adhere to Whitehead's theorem, _cf._ Example III.8.
### _Main result_
Now, we have collected all ingredients to prove the following.
**Lemma III.4** (\(\Leftarrow\)Cofibration).: _Let \(A\subseteq\mathsf{M}\) be a compact, invariant, asymptotically stable, attractor with domain of attraction \(\mathcal{D}_{\Sigma}(A)\). If \(\iota_{A}:A\hookrightarrow\mathcal{D}_{\Sigma}(A)\) is a cofibration, then, \(A\) is a strong deformation retract of \(\mathcal{D}_{\Sigma}(A)\)._
Proof.: We know from Theorem II.5 that \(A\) is a weak deformation retract of \(\mathcal{D}_{\Sigma}(A)\). Since \(\iota_{A}:A\hookrightarrow\mathcal{D}_{\Sigma}(A)\) is a cofibration, we also know from Theorem III.2 that \((\mathcal{D}_{\Sigma}(A),A)\) is a NDR pair. Then, recall Definition III.1 and recall the proof of Theorem III.2. Now, let \(W:=u^{-1}([0,1))\supset A\), which is open since \(u:\mathcal{D}_{\Sigma}(A)\rightarrow[0,1]\) is continuous, and consider the map \(H|_{W\times[0,1]}\). It is imperative to remark that this map does _not_ provide a strong deformation retract from \(W\) onto \(A\) in general. The reason why we cannot conclude on the existence of such a map is that we cannot guarantee that throughout the homotopy we have \(H(x,s)\in W\) for any \((x,s)\in W\times[0,1]\). Indeed, we have a map \(H|_{W\times[0,1]}:W\times[0,1]\rightarrow\mathcal{D}_{\Sigma}(A)\supseteq W\), the codomain cannot be assumed to be \(W\). Precisely this detail was already known to Strom _cf._[34, Thm. 2], see also [37, p. 432]. Nevertheless, since \(A\) is a weak deformation retract of \(\mathcal{D}_{\Sigma}(A)\) we know that \(W\) contains a set \(V\supseteq A\) such that \(\mathcal{D}_{\Sigma}(A)\) strongly deformation retracts onto \(V\), that is, there is map \(\bar{H}:\mathcal{D}_{\Sigma}(A)\times[0,1]\rightarrow\mathcal{D}_{\Sigma}(A)\) such that \(\bar{H}(x,0)=x\ \forall x\in\mathcal{D}_{\Sigma}(A)\), \(\bar{H}(x,1)\in V\ \forall x\in\mathcal{D}_{\Sigma}(A)\) and \(\bar{H}(x,s)=x\ \forall(x,s)\in V\times[0,1]\). Hence, the continuous map \(\widetilde{H}:\mathcal{D}_{\Sigma}(A)\times[0,1]\rightarrow\mathcal{D}_{\Sigma}(A)\) defined by
\[\widetilde{H}(x,s)=\begin{cases}\bar{H}(x,2s)&s\in[0,\tfrac{1}{2}]\\ H\left(\bar{H}(x,1),2s-1\right)&s\in(\tfrac{1}{2},1]\end{cases}\]
is a homotopy and provides for the strong deformation retract of \(\mathcal{D}_{\Sigma}(A)\) onto \(A\).
To continue, we need a converse result.
**Lemma III.5** (\(\Rightarrow\)Cofibration).: _Suppose that \(\mathsf{M}\) is a locally compact Hausdorff space, that \(\Sigma\) satisfies Assumption II.1 and let \(A\subseteq\mathsf{M}\) be a compact, invariant, asymptotically stable, attractor with domain of attraction \(\mathcal{D}_{\Sigma}(A)\). If \(A\) is a strong deformation retract of \(\mathcal{D}_{\Sigma}(A)\), then, \(\iota_{A}:A\hookrightarrow\mathcal{D}_{\Sigma}(A)\) is a cofibration._
Proof.: We will appeal to the characterization of a cofibration as given by Theorem III.2. As \(A\) is a strong deformation retract of \(\mathcal{D}_{\Sigma}(A)\) by assumption, then, to conclude on \((\mathcal{D}_{\Sigma}(A),A)\) being a NDR pair, we need to construct a continuous map \(u:\mathcal{D}_{\Sigma}(A)\rightarrow[0,1]\) with \(u^{-1}(0)=A\). As \(A\) is a compact, invariant, asymptotically stable attractor, \(\mathsf{M}\) is a locally compact Hausdorff space and \(\Sigma\) satisfies Assumption II.1, there is a Lyapunov function of precisely this form [26, Thm. 10.6].
**Theorem III.6** (Cofibrations).: _Suppose that \(\mathsf{M}\) is a locally compact Hausdorff space, that \(\Sigma\) satisfies Assumption II.1 and let \(A\subseteq\mathsf{M}\) be a compact, invariant, asymptotically stable, attractor with domain of attraction \(\mathcal{D}_{\Sigma}(A)\). Then, \(A\) is a strong deformation retract of \(\mathcal{D}_{\Sigma}(A)\) if and only if the inclusion \(\iota_{A}:A\hookrightarrow\mathcal{D}_{\Sigma}(A)\) is a cofibration._
Proof.: The result follows directly by combining Lemma III.4 and Lemma III.5.
### _Examples_
Regarding ramifications of Theorem III.6, we start with a sanity check. We know that for a linear ODE \(\dot{x}=Fx\) with \(F\in\mathbb{R}^{n\times n}\) a Hurwitz matrix, \(A=\{0\}\) and \(\mathcal{D}_{\Sigma}(A)=\mathbb{R}^{n}\). Hence, we remark that: (i) \(\iota_{\{0\}}:\{0\}\hookrightarrow\mathbb{R}^{n}\) is a cofibration, _e.g._, since \((\mathbb{R}^{n},\{0\})\) is a NDR pair under
the map \(x\mapsto u(x):=1-e^{-\langle x,x\rangle}\); and (ii) \(\mathbb{R}^{n}\) strongly deformation retracts onto \(0\in\mathbb{R}^{n}\) via the map \(\mathbb{R}^{n}\times[0,1]\ni(x,s)\mapsto H(x,s):=(1-s)\cdot x\), matching the convention \(H(\cdot,0)=\mathrm{id}\) of Definition III.1.
Cofibrations that are not strong deformation retracts are abundant. We start with a well-known example.
**Example III.7** (Spheres and disks).: _It can be shown that the inclusion \(\iota_{\mathbb{S}^{n}}:\mathbb{S}^{n}\hookrightarrow\mathbb{D}^{n+1}\) is a cofibration, e.g., see the remark on CW complexes above. However, \(\mathbb{D}^{n+1}\) cannot strongly deformation retract onto \(\mathbb{S}^{n}\) since \(\mathbb{S}^{n}\) and \(\mathbb{D}^{n+1}\) are not even homotopy equivalent, e.g., \(\chi(\mathbb{S}^{n})\neq\chi(\mathbb{D}^{n+1})\). Hence, \(\mathbb{S}^{n}\) cannot be a global, asymptotically stable, attractor under any semi-dynamical system \(\Sigma=(\mathbb{D}^{n+1},\varphi,\mathbb{R}_{\geq 0})\)._
We proceed with an example where we obtain topological insights through dynamical systems knowledge.
**Example III.8** (The Warsaw circle).: _Let \(\mathbb{W}^{1}:=\{(0,x_{2})\in\mathbb{R}^{2}:x_{2}\in[-1,1]\}\cup\{(x_{1},\sin(x_{1}^{-1}))\in\mathbb{R}^{2}:x_{1}\in(0,\pi^{-1})\}\cup\{\text{arc from }(0,-1)\text{ to }(\pi^{-1},0)\}\) denote the so-called "Warsaw circle". The set \(\mathbb{W}^{1}\) is compact, but not a manifold since \(\mathbb{W}^{1}\) is not locally connected. Hastings showed4 that \(\mathbb{W}^{1}\) can be rendered a compact, invariant, locally asymptotically stable attractor with an annular neighbourhood \(A\subset\mathbb{R}^{2}\) as \(\mathcal{D}_{\Sigma}(\mathbb{W}^{1})\)[38]. Although the circle \(\mathbb{S}^{1}\subset\mathbb{R}^{2}\) and \(\mathbb{W}^{1}\subset\mathbb{R}^{2}\) are shape equivalent, they are not homotopy equivalent since \(\pi_{1}(\mathbb{W}^{1})\simeq 0\) while \(\pi_{1}(\mathbb{S}^{1})\simeq\mathbb{Z}\) and the fundamental group \(\pi_{1}(\cdot)\) is homotopy invariant [24, Thm. 7.40]. As such, \(\mathbb{W}^{1}\) cannot be a strong deformation retract of any annulus \(A\subset\mathbb{R}^{2}\) it embeds in. Then, according to Theorem III.6, \(\iota_{\mathbb{W}^{1}}:\mathbb{W}^{1}\hookrightarrow A\) cannot be a cofibration._
Footnote 4: Although a substantial part of the proof is left to the reader.
We recall that for inclusion maps \(\iota_{A}:A\hookrightarrow X\) to not be a cofibration, the pair \((X,A)\) cannot be too regular, _e.g._, by Proposition III.3 \((X,A)\) cannot be a CW pair. Indeed, by the Whitehead theorem [30, Thm. 4.5], the Warsaw circle \(\mathbb{W}^{1}\) is not homotopy equivalent to a CW complex. This could also be concluded by observing that CW complexes must be locally path-connected.
Next, we provide an example inspired by an example from [39, p. 78-79]. Here we gain dynamical insights via topological knowledge. Before doing so, we recall the difference between the _box_ and _product_ topology. Let \(X_{\alpha}\) be a topological space indexed by \(\alpha\in A\), then, if we endow a topological space of the form \(X=\prod_{\alpha\in A}X_{\alpha}\) with the product topology, basic open sets are of the form \(\prod_{\alpha\in A}U_{\alpha}\) with \(U_{\alpha}\) open in \(X_{\alpha}\) and all but finitely many \(U_{\alpha}=X_{\alpha}\). The box topology, on the other hand, does not require the last constraint to hold and basic open sets are simply of the form \(\prod_{\alpha\in A}U_{\alpha}\) with \(U_{\alpha}\) open in \(X_{\alpha}\). When \(A\) is finite, these topologies are equivalent, however, the box topology is finer than the product topology in general.
**Example III.9** (The Tychonoff cube).: _Let \(\Omega>\aleph_{0}\), then, define the Tychonoff cube as \([0,1]^{\Omega}\), that is, as a uncountably infinite product of the unit interval. Here we endow \([0,1]\) with the standard topology and \([0,1]^{\Omega}\) with the product topology. As such, \([0,1]^{\Omega}\) is a compact Hausdorff space by Tychonoff's theorem [25, Thm. 37.3] and the fact that any product of Hausdorff spaces is Hausdorff [25, Thm. 19.4]. Exploiting compactness, \(\{0\}^{\Omega}\in[0,1]^{\Omega}\) can be shown to be a strong deformation retract of \([0,1]^{\Omega}\). Importantly, one cannot simply use the map \([0,1]^{\Omega}\times[0,1]\ni(x,s)\mapsto H(x,s):=s\cdot x\), which would be continuous in the box topology, but not in the product topology. Despite the strong deformation retraction, \(\iota_{0}:\{0\}^{\Omega}\hookrightarrow[0,1]^{\Omega}\) is not a cofibration since otherwise, by Definition III.1 and Theorem III.2, there must be a continuous map \(u:[0,1]^{\Omega}\to[0,1]\) such that \(\{0\}^{\Omega}=u^{-1}(0)\). However, it can be shown that such a map fails to exist due to \(\Omega\) being uncountable [39, p. 78-79]. Hence, Theorem III.6 implies that \(\{0\}^{\Omega}\in[0,1]^{\Omega}\) cannot be an asymptotically stable attractor, for any continuous--with respect to the product topology on \([0,1]^{\Omega}\) --semi-dynamical system \(\Sigma=([0,1]^{\Omega},\varphi,\mathbb{R}_{\geq 0})\). Note that if \(\Omega\) would be finite, then, the map \(u\) does exist and can be chosen to be \(u:(x_{1},\ldots,x_{\Omega})\mapsto\max_{i=1,\ldots,\Omega}\{x_{i}\}\)._
Note that Example III.9 is essentially saying that despite seemingly convenient properties of \(\Sigma=([0,1]^{\Omega},\varphi,\mathbb{R}_{\geq 0})\), a Lyapunov function fails to exist for \(\{0\}^{\Omega}\in[0,1]^{\Omega}\). Concurrently, this example shows that even a strong notion of homotopy equivalence can be insufficient to conclude on the existence of an asymptotically stable attractor.
Although, in general, a metrizable space merely needs to have a countably locally finite (\(\sigma\)-locally finite) basis [25, Thm. 40.3], compact metric spaces must be second countable. Hence, \([0,1]^{\Omega>\aleph_{0}}\) is not metrizable since \(\Omega>\aleph_{0}\) obstructs second countability. Similarly, one can consider the topology of pointwise convergence. Regardless, Example III.9 illustrates where to look for counterexamples. Indeed, as \([0,1]^{\Omega>\aleph_{0}}\) is not a normed space and in particular not a Hilbert space, it does not fit into common analysis frameworks, _e.g._, [40].
It turns out that Theorem III.6 covers known results in case \(A\) is an embedded submanifold of \(\mathsf{M}\). We will assume all our manifolds under consideration to be \(C^{\infty}\)-smooth and second countable. In that case, let \(A\subseteq\mathsf{M}\) be a closed embedded submanifold, then, one appeals to the existence of a _tubular neighbourhood_ [27, Thm. 6.24] to show that \((\mathcal{D}_{\Sigma}(A),A)\) comprises a NDR pair. Hence, using the following proposition, [2, Prop. 10] follows as a corollary to Theorem III.6, see also [32].
**Proposition III.10** (Submanifolds).: _Let \(A\) be a compact embedded submanifold of \(\mathsf{M}\), then, \(\iota_{A}:A\hookrightarrow\mathsf{M}\) is a cofibration._
Our last example pertains to compositions, indicating that Theorem III.6 can be applied to subsystems.
**Example III.11** (Compositions).: _Cofibrations are closed under composition. Let \(i_{1}:A\to B\) and \(i_{2}:B\to C\) be cofibrations, then, \(i:=i_{2}\circ i_{1}:A\to C\) is a cofibration. To
see this, consider the diagram_
(4)
_For any triple \((Y,\alpha_{1},\alpha_{2})\) there is some appropriate map \(\beta_{1}\) completing (2) for the cofibration \(i_{1}:A\to B\), but for any triple \((Y,\beta_{1},\beta_{2})\) the diagram (4) can be completed since \(i_{2}:B\to C\) is a cofibration._
## IV Future work
Several directions of future work are: (i) discrete systems of the form \(\Sigma=(\mathsf{M},\varphi,\mathbb{Z}_{\geq 0})\); (ii) exploiting duality (fibrations); (iii) extensions to other notions of stability; (iv) developing computational tools (homology); (v) relaxing the invariance assumption, which is possible; and (vi) addressing stochastic systems in a meaningful way.
### _Closed attractors_
Several results hold when the compactness assumption on \(A\subseteq\mathsf{M}\) is relaxed to \(A\) being merely _closed_, _e.g._, see [26, 41], this generalization is future work. We do emphasize that the generalization is not trivial. Consider for instance [32, Ex. 22] with \(\mathsf{M}=\mathbb{R}^{2}\setminus\{(1,0)\}\) and \(A=\mathbb{S}^{1}\setminus\{(1,0)\}\subset\mathsf{M}\). There, the authors construct a vector field on \(\mathsf{M}\) such that \(A\) is a global asymptotically stable attractor, with \(\mathcal{D}_{\Sigma}(A)=\mathsf{M}\setminus\{(0,0)\}\). So, although \(A\) is an attractor and \(\iota_{A}:A\hookrightarrow\mathcal{D}_{\Sigma}(A)\) is a cofibration due to Proposition III.10, \(A\) cannot be a strong deformation retract of \(\mathcal{D}_{\Sigma}(A)\) since those sets are not homotopy equivalent. Indeed, \(A\) is _not_ compact and one cannot simply appeal to Theorem II.5. The intuition is that for attractors of this form, limits need not be attained and as such stability does not provide for a homotopy between \(A\) and \(\mathcal{D}_{\Sigma}(A)\). Formally speaking, the proof of Theorem II.5 exploits compactness of the sublevel sets of the corresponding Lyapunov function and implicitly _Cantor's intersection theorem_, which fails to be generally true for closed sets. See also [21, Counterex. 1].
|
2307.10708 | Can cosmologically-coupled mass growth of black holes solve the mass gap
problem? | Observations of elliptical galaxies suggest that black holes (BHs) might
serve as dark energy candidates, coupled to the expansion of the Universe.
According to this hypothesis, the mass of a BH could increase as the Universe
expands. BH low-mass X-ray binaries (LMXBs) in the Galactic disk were born
several gigayears ago, making the coupling effect potentially significant. In
this work, we calculate the evolution of BH binaries with a binary population
synthesis method to examine the possible influence of cosmologically-coupled
growth of BHs, if it really exists. The measured masses of the compact objects
in LMXBs show a gap around $\sim 2.5-5~{\rm M_\odot}$, separating the most
massive neutron stars from the least massive BHs. Our calculated results
indicate that considering the mass growth seems to (partially) account for the
mass gap and the formation of compact BH LMXBs, alleviating the challenges in
modeling the formation and evolution of BH LMXBs with traditional theory.
However, critical observational evidence like the detection of
intermediate-mass black hole binaries is required to test this hypothesis. | Shi-Jie Gao, Xiang-Dong Li | 2023-07-20T08:58:50Z | http://arxiv.org/abs/2307.10708v1 | # Can cosmologically-coupled mass growth of black holes solve the mass gap problem?
###### Abstract
Observations of elliptical galaxies suggest that black holes (BHs) might serve as dark energy candidates, coupled to the expansion of the Universe. According to this hypothesis, the mass of a BH could increase as the Universe expands. BH low-mass X-ray binaries (LMXBs) in the Galactic disk were born several gigayears ago, making the coupling effect potentially significant. In this work, we calculate the evolution of BH binaries with a binary population synthesis method to examine the possible influence of cosmologically-coupled growth of BHs, if it really exists. The measured masses of the compact objects in LMXBs show a gap around \(\sim 2.5-5\) M\({}_{\odot}\), separating the most massive neutron stars from the least massive BHs. Our calculated results indicate that considering the mass growth seems to (partially) account for the mass gap and the formation of compact BH LMXBs, alleviating the challenges in modeling the formation and evolution of BH LMXBs with traditional theory. However, critical observational evidence like the detection of intermediate-mass black hole binaries is required to test this hypothesis.
Stellar mass black holes (1611); Expanding universe (502); Dark energy (351); Low-mass X-ray binary stars (939)
Parnovsky, 2023; Lei et al., 2023; Andrae & El-Badry, 2023; Ghodla et al., 2023).
BH low-mass X-ray binaries (LMXBs) have low-mass companions (\(M_{2}\lesssim 2\) M\({}_{\odot}\)) as their mass donors. Dozens of BH LMXBs have been dynamically confirmed (see Table 1). These binaries, which have ages \(>10^{9}\) years (Tauris & van den Heuvel, 2006), were born in the early Universe (\(z\lesssim 2\))1. As a result, the influence of cosmologically-coupled growth, if it really exists, should have been significant. Though the formation mechanism of BH LMXBs is not fully understood, it is widely believed that they have evolved from primordial binaries consisting of a massive star and an intermediate/low-mass companion (for a review, see Li, 2015). The conventional formation pathways generally involve a phase of common envelope evolution (CEE, Paczynski, 1976; Ivanova et al., 2013) when the primary star (the BH's progenitor) evolves off the main sequence and transfers mass to the low-mass secondary star. The main issue is that the orbital energy of the low-mass star is typically insufficient to expel the massive envelope of the BH's progenitor (Portegies Zwart et al., 1997; Kalogera, 1999; Kiel & Hurley, 2006; Yungelson & Lasota, 2008). This would cause a merger of the two stars rather than leaving a close binary.
Footnote 1: There is a possibility that some of the BH LMXBs were formed recently by dynamical interactions in dense environments.
Neutron stars (NSs) and BHs are the end products of massive stars. A smooth distribution of stellar initial masses may intuitively result in a continuous mass distribution between NSs and BHs. However, there is absence or dearth of observed BHs with masses in the range of \(\sim 2.5-5\) M\({}_{\odot}\) in LMXBs (for a recent review, see Shao, 2022). The origin of this mass gap is still under debate. The proposed explanations include different growth time-scales of the instabilities driving the explosion of massive stars which affect the amount of the ejected envelope mass (Fryer et al., 2012; Belczynski et al., 2012), disruption of the binaries with low-mass BHs due to supernova-driven natal kicks (Mandel et al., 2021), or potential observational artifacts arising from systematic uncertainties in ellipsoidal fits (Kreidberg et al., 2012).
If BHs can serve as vacuum energy with equation of state described by \(p=-\rho\), \(k\) is equal to 3, while for
\begin{table}
\begin{tabular}{l l l l l l} \hline Name & Type & \(M_{\rm BH}\) [M\({}_{\odot}\)] & \(M_{2}\) [M\({}_{\odot}\)] & \(P_{\rm orb}\) [day] & References \\ \hline GRS 1915+105 & LMXB & \(12.4^{+2.0}_{-1.8}\) & \(0.52\pm 0.31\) & 33.85 & Reid et al. (2014) \\ V404 Cyg & LMXB & \(9.0^{+0.2}_{-0.6}\) & \(0.54\pm 0.05\) & 6.473 & Khargharia et al. (2010) \\ V4641 Sgr & IMXB & \(6.4\pm 0.6\) & \(2.9\pm 0.4\) & 2.8173 & MacDonald et al. (2014) \\
4U 1543–47 & IMXB & \(9.4\pm 1.0\) & \(2.7\pm 1\) & 2.7 & Shafee et al. (2006) \\ GRO J1655–40 & LMXB & \(5.31\pm 0.07\) & \(1.45\pm 0.35\) & 2.62168 & Beer \& Podsiadlowski (2002) \\ GS 1354–64 & LMXB & \(>7.6\pm 0.7\) & \(>0.988\) & 2.54451 & Casares et al. (2009) \\ GX 339–4 & LMXB & \(9^{+1.6}_{-1.2}\) & \(0.7\pm 0.4\) & 1.75 & Parker et al. (2016) \\ LMC X–3 & IMXB & \(6.98\pm 0.56\) & \(3.63\pm 0.57\) & 1.7048089 & Orosz et al. (2014) \\ XTE J1550–564 & LMXB & \(9.10\pm 0.61\) & \(0.30\pm 0.07\) & 1.5420333 & Orosz et al. (2011) \\ MAXI J1820+070 & LMXB & \(8.48^{+0.79}_{-0.72}\) & \(0.61^{+0.13}_{-0.12}\) & 0.6903 & Torres et al. (2020) \\ H1705–250 & LMXB & \(6.4\pm 1.5\) & \(0.25\pm 0.17\) & 0.5228 & Remillard et al. (1996); Harlaftis et al. (1997) \\ GRS 1124–68 & LMXB & \(11.0^{+2.1}_{-1.4}\) & \(0.89^{+0.18}_{-0.11}\) & 0.43260249 & Wu et al. (2015, 2016) \\ MAXI J1305–704 & LMXB & \(8.9^{+1.6}_{-1.0}\) & \(0.43\pm 0.16\) & 0.394 & Mata Sánchez et al. (2021) \\
4U 1957+11 & LMXB & \(\sim 4.6\) & \(\lesssim 1\) & 0.39 & Barillier et al. (2023) \\ GS 2000+251 & LMXB & \(5.5-8.5\) & \(0.16-0.47\) & 0.34 & Charles et al. (1991); Ioannou et al. (2004) \\ A0620–00 & LMXB & \(6.61\pm 0.25\) & \(0.40\pm 0.045\) & 0.3230160 & Johannsen et al. (2009); Cantrell et al. (2010) \\ GRS 1009–45 & LMXB & \(8.5\pm 1\) & \(0.54\pm 0.1\) & 0.285 & Filippenko et al. (1999); Macias et al. (2011) \\ XTE 1859+226 & LMXB & \(7.85\pm 0.46\) & \(0.55\pm 0.16\) & 0.276 & Motta et al. (2022); Yanes-Rizo et al. (2022) \\ GRO J0422+32 & LMXB & \(2.7^{+0.7}_{-0.5}\) & \(0.33^{+0.28}_{-0.20}\) & 0.2121600 & Casares et al. (2022) \\ XTE J1118+480 & LMXB & \(8.3^{+0.28}_{-0.14}\) & \(0.22\pm 0.07\) & 0.16993404 & Gonzalez Hernández et al. (2012) \\ Swift J1375–0933 & LMXB & \(10.9^{+1.7}_{-1.6}\) & \(0.42^{+0.04}_{-0.03}\) & 0.106969 & Casares et al. (2022) \\ MAXI J1659–352 & LMXB & \(3.3-7.5\) & \(0.06-0.22\) & 0.100 & Torres et al. (2021) \\ MAXI J0637–430 & LMXB & \(5.1\pm 1.6\) & \(0.25\pm 0.07\) & 0.092 & Soria et al. (2022) \\ \hline \end{tabular}
\end{table}
Table 1: Types, BH masses (\(M_{\rm BH}\)), donor masses (\(M_{2}\)) and orbital periods (\(P_{\rm orb}\)) of dynamically confirmed intermediate-mass X-ray binaries (IMXBs)/LMXBs.
NSs, \(w\sim 0.05-0.1\) and \(-k\) is \(\sim 0.15-0.3\)(Croker and Weiner, 2019). Since the cosmological shift for positive \(w\) appears as an energy loss rather than mass growth (Croker and Weiner, 2019), a mass gap could be naturally left between the most massive NSs and the less massive BHs. This cosmological coupling not only leads to mass growth of BHs, but also influences the evolution of the binary orbits and hence the observational characteristics of BH LMXBs. In this work, we perform detailed evolutionary calculations and binary population synthesis to investigate whether cosmological coupling can result in a significant mass gap in LMXBs and its impact on the evolution of BH LMXBs.
This article is organized as follows. In section 2, we introduce the physical considerations and the binary evolution model. We present our numerically calculated results and compare them with observations in section 3. Finally, we conclude and discuss our results in section 4.
## 2 Methods and calculations
The cosmological coupling introduces no preferred spatial directions. Hence, both angular momentum and eccentricity of the binary are unaffected (Croker et al., 2020). According to angular momentum conservation in Newtonian binary evolution, the mass growth leads to an extra change 2 (see also Eq. 28 in Croker et al., 2020) in the orbital separation \(A\), i.e.,
Footnote 2: This change is independent of other mechanisms like magnetic braking, gravitational wave emission and mass transfer process.
\[\frac{A(a)}{A(a_{i})}=\left[\frac{M_{\rm BH}(a_{i})}{M_{\rm BH}(a)}\right]^{2} \frac{M_{2}+M_{\rm BH}(a)}{M_{2}+M_{\rm BH}(a_{i})}. \tag{2}\]
The quadratic component dominates and always leads to decrease in the orbital separation. We apply the cosmological-constant cold dark matter (\(\Lambda\)CDM) model in our calculation and use the cosmological parameters from the latest cosmic microwave background radiation measurements (Planck Collaboration, 2020), i.e., \(H_{0}=67.7\ {\rm km\ s^{-1}\ Mpc^{-1}}\), \(\Omega_{\rm M}=0.31\), \(\Omega_{\Lambda}=0.69\), and the age of the Universe is 13.78 Gyr. We explore two cases \(k=0\) and \(k=3\), corresponding to without and with cosmologically-coupled growth of BH mass considered, respectively.
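To make the adopted prescription concrete, the sketch below (our own illustrative code, not the implementation used in MESA/BSE; the function and variable names are ours) evaluates the flat \(\Lambda\)CDM scale factor at a given cosmic time, the coupled BH mass under the power-law growth \(M_{\rm BH}(a)=M_{\rm BH}(a_{i})\,(a/a_{i})^{k}\) assumed in the cosmological-coupling scenario, and the orbital-separation ratio of Equation (2):

```python
import numpy as np

# Flat Lambda-CDM parameters quoted in the text (Planck Collaboration 2020).
H0_KM_S_MPC = 67.7
OMEGA_M, OMEGA_L = 0.31, 0.69
H0 = H0_KM_S_MPC / 977.8  # Hubble constant in Gyr^-1 (1 km/s/Mpc = 1/977.8 Gyr^-1)

def scale_factor(t_gyr):
    """Scale factor a(t) of a flat matter+Lambda universe, normalized so that a ~ 1 today."""
    x = 1.5 * np.sqrt(OMEGA_L) * H0 * np.asarray(t_gyr)
    return (OMEGA_M / OMEGA_L) ** (1.0 / 3.0) * np.sinh(x) ** (2.0 / 3.0)

def coupled_bh_mass(m_bh_i, a_i, a, k=3.0):
    """Cosmologically coupled BH mass, M_BH(a) = M_BH(a_i) * (a / a_i)**k."""
    return m_bh_i * (a / a_i) ** k

def separation_ratio(m_bh_i, m_bh, m2):
    """A(a)/A(a_i) of Equation (2): orbital change driven purely by the BH mass growth."""
    return (m_bh_i / m_bh) ** 2 * (m2 + m_bh) / (m2 + m_bh_i)

t0 = 13.78                      # age of the Universe [Gyr]
t_birth = t0 - 9.228            # cosmic time at the birth of track a (Table 2) [Gyr]
a_i, a_now = scale_factor(t_birth), scale_factor(t0)
m_bh_now = coupled_bh_mass(3.0, a_i, a_now, k=3.0)   # 3 Msun BH at birth
print(f"a_i = {a_i:.3f}  (z_i = {1 / a_i - 1:.2f}),  a(t0) = {a_now:.3f}")
print(f"coupling alone: M_BH = {m_bh_now:.1f} Msun,  A/A_i = "
      f"{separation_ratio(3.0, m_bh_now, 1.5):.3f} for a 1.5 Msun companion")
```

For the initial conditions of track a in Table 2 (a 3 M\({}_{\odot}\) BH with a 1.5 M\({}_{\odot}\) companion born 9.228 Gyr ago), these formulas alone (accretion neglected) already inflate the BH mass by roughly an order of magnitude and shrink the orbit accordingly, consistent with the contraction discussed above.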
### Example evolutionary calculation
We use the state-of-the-art stellar evolution code MESA3(version 22.11.1, Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023) to evolve binary stars composed of a BH and a zero-age main sequence donor star4. The orbital change (Equation 2) due to BH's mass growth is implemented in the code. We adopt solar metallicity for the stars (\(Z=0.02\)). To illustrate the effects of cosmologically-coupled growth, we present three example evolutionary tracks (a, b and c) for the cases of \(k=0\) (a0, b0 and c0) and \(k=3\) (a3, b3 and c3). The initial and final states of these tracks are summarized in Table 2. Tracks a and b compare the evolution of close and wide LMXBs, and tracks c demonstrates the evolution of intermediate-mass X-ray binaries (IMXBs), which are thought to be the main predecessors of LMXBs.
Footnote 3: The codes to reproduce our simulations are available at [https://doi.org/10.5281/zenodo.8167249](https://doi.org/10.5281/zenodo.8167249).
Figure 1 shows the orbital period, BH mass and companion mass as a function of the stellar age and redshift for tracks a0/a3 (left), b0/b3 (middle) and c0/c3 (right). The red and blue lines depict the results in the \(k=0\) and \(k=3\) cases, respectively. The solid and dashed lines in the lower panel represent the evolution of the BH mass and the companion mass, and the circles and crosses denote the onset and end of Roche-lobe overflow (RLOF), respectively.
In the left panel, for tracks a0, the initial orbital period is above the bifurcation period (Pylyser and Savonije, 1988), so the orbit period increases with the mass transfer to be 15.3 day. The BH mass grows to be 4.01 M\({}_{\odot}\) due to accretion, while the donor star evolves to be a 0.27 M\({}_{\odot}\) white dwarf (WD). However, for track a3, the BH's mass growth prior to the mass transfer causes the orbit to shrink. Consequently, the mass transfer starts earlier, and the binary orbit decreases with mass transfer until the mass ratio is reversed. The binary finally consists of a 48.1 M\({}_{\odot}\) BH and a 0.004 M\({}_{\odot}\) dwarf star in a 0.16 day orbit.
The middle panel shows the evolutionary tracks b0 and b3 where both binaries eventually evolve into BH+helium WD binaries, but with significantly different final orbital periods. In the case of \(k=0\), the orbit expands due to the donor star's expansion to the red giant branch. A 3.9 M\({}_{\odot}\) BH and a 0.37 M\({}_{\odot}\) helium WD are left in a 200 day orbit. In the case of \(k=3\), the binary contracts before RLOF starts due to the mass growth of BH. Then, the mass transfer causes the binary orbit to expand. Interestingly, after the detachment of the donor, a 0.29 M\({}_{\odot}\) helium WD is formed, and the orbital period decreases dramatically due to the BH's mass growth. The final mass of the BH is 106 M\({}_{\odot}\). The right panel of Figure 1 shows evolutionary tracks c0 and c3 with a 5 M\({}_{\odot}\) companion star. The evolution shows
similar features as in tracks b0 and b3, except that the companion evolves into a carbon-oxygen WD.
The last column of Table 2 presents the duration of RLOF. It is seen that the mass transfer lifetimes in the case of \(k=3\) are longer compared with in the case of \(k=0\). This is because in the former case mass transfer starts earlier and ends later. The more extended duration of mass transfer leads to increase in the number of mass-transferring binaries (see below).
### Binary population synthesis
We then use a Monte Carlo method to simulate the evolution of a large population of primordial binaries. The binary evolution code BSE (Hurley et al., 2002; Kiel & Hurley, 2006; Shao & Li, 2014) is utilized to model the formation and evolution of incipient BH LMXBs. The lookback time \(t_{\rm lb}\) for the birth of primordial binaries is assumed to follow a uniform distribution within the past 12 Gyr and a constant star formation rate of 3 \(\rm M_{\odot}~{}yr^{-1}\) is applied. The initial mass \(M_{1,0}\) of the primary is sampled from a power-law distribution \(p(M_{1,0})\propto M_{1,0}^{-2.7}\) (Kroupa et al., 1993) in the range of \(5-100~{}\rm M_{\odot}\). We assume that the mass ratio \(q=M_{2,0}/M_{1,0}\) between the secondary (\(M_{2,0}\)) and the primary masses has a uniform distribution between 0 and 1, and the initial orbital separation \(A_{0}\) follows \(p(A_{0})\propto A_{0}^{-1}\) in the range of 3 \(\rm R_{\odot}-10^{4}~{}\rm R_{\odot}\). We take the eccentricity of all the primordial binaries to be zero, as its effect on the formation and evolution of LMXBs is minor (Hurley et al., 2002). We use the energy conservation equation
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Tracks & Cases & \(t_{\rm lb}\) [Gyr] & \(z\) & \(M_{\rm BH,i}\) [\(\rm M_{\odot}\)] & \(M_{2,i}\) [\(\rm M_{\odot}\)] & \(P_{\rm orb,i}\) [day] & \(M_{\rm BH,f}\) [\(\rm M_{\odot}\)] & \(M_{2,t}\) [\(\rm M_{\odot}\)] & \(P_{\rm orb,f}\) [day] & \(\Delta t_{\rm mt}\) [year] \\ \hline a0 & \(k=0\) & 9.228 & 1.388 & 3.0 & 1.5 & 0.63 & 4.01 & 0.27 & 15.31 & \(2.70\times 10^{9}\) \\ a3 & \(k=3\) & 9.228 & 1.388 & 3.0 & 1.5 & 0.63 & 48.12 & 0.004 & 0.16 & \(8.54\times 10^{9}\) \\ \hline b0 & \(k=0\) & 10.847 & 2.230 & 3.0 & 1.5 & 10.00 & 3.89 & 0.37 & 199.65 & \(3.44\times 10^{7}\) \\ b3 & \(k=3\) & 10.847 & 2.230 & 3.0 & 1.5 & 10.00 & 106.41 & 0.29 & 0.17 & \(6.70\times 10^{8}\) \\ \hline c0 & \(k=0\) & 9.228 & 1.388 & 3.0 & 5.0 & 1.58 & 4.28 & 0.46 & 82.51 & \(1.13\times 10^{8}\) \\ c3 & \(k=3\) & 9.228 & 1.388 & 3.0 & 5.0 & 1.58 & 60.15 & 0.44 & 0.38 & \(1.45\times 10^{8}\) \\ \hline \end{tabular}
\end{table}
Table 2: The parameters of the six evolutionary tracks, including the lookback time \(t_{\rm lb}\), redshift \(z\) at the birth of the BHs, the initial/final BH masses \(M_{\rm BH,i}/M_{\rm BH,f}\), companion masses \(M_{2,i}/M_{2,t}\), orbital periods \(P_{\rm orb,i}/P_{\rm orb,f}\) and mass transfer lifetime \(\Delta t_{\rm mt}\).
Figure 1: Orbital periods (\(P_{\rm orb}\)), BH mass (\(M_{\rm BH}\)) and companion mass (\(M_{2}\)) as a function of stellar age (\(t_{\rm age}\)) and redshift (\(z\)) for tracks a0/a3 (left), b0/b3 (middle), and c0/c3 (right). The red and blue lines correspond to the \(k=0\) and \(k=3\) cases, respectively. The solid and dashed lines in the lower panel show the evolution of the BH and the companion, respectively. The circles and crosses indicate the onset and end of RLOF, respectively.
(Webbink, 1984) to treat the CEE, taking the CEE efficiency \(\alpha_{\rm CEE}=1.0\). We use the numerically calculated results by Xu and Li (2010) and Wang et al. (2016) for the binding energy parameter \(\lambda_{\rm CE}\) of the stellar envelope for a variety of massive and intermediate-mass stars.
To deal with the compact remnant masses and possible natal kicks, we adopt the rapid (Fryer et al., 2012), delayed (Fryer et al., 2012) and stochastic supernova scenarios (Mandel and Muller, 2020), which are widely used in the literature. Note that the rapid supernova mechanism predicts an inherent mass gap around \(2.5-5\) M\({}_{\odot}\). For the first two mechanisms, the natal kick \(v_{\rm k}\) of BHs are assumed to obey a Maxwellian distribution with the dispersion \(\sigma_{\rm k}=265\) km s\({}^{-1}\)(Hobbs et al., 2005) reduced by a material fallback factor \(f_{\rm fb}\)(for more details, see Shao and Li, 2021). For the stochastic supernova mechanism, we use the natal kick proposed by Mandel and Muller (2020). The minimum mass of BHs at birth is assumed to be 2.5 M\({}_{\odot}\). We construct six models R0, R3, D0, D3, S0, and S3, corresponding to different supernova prescriptions and different values of \(k\). The parameters and results of these six models are presented in Table 3.
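As an illustration of the Monte Carlo initial conditions described above (a minimal sketch of our own, not the BSE code; the helper names are ours), the primordial-binary parameters can be drawn by inverse-transform sampling:

```python
import numpy as np

rng = np.random.default_rng(2023)

def sample_power_law(n, alpha, lo, hi, rng):
    """Draw n samples from p(x) ~ x**alpha on [lo, hi] by inverse-transform sampling."""
    u = rng.uniform(size=n)
    if np.isclose(alpha, -1.0):                  # p(x) ~ 1/x (log-flat), used for A_0
        return lo * (hi / lo) ** u
    a1 = alpha + 1.0
    return (lo ** a1 + u * (hi ** a1 - lo ** a1)) ** (1.0 / a1)

def sample_primordial_binaries(n, rng=rng):
    """Initial primary masses, mass ratios, separations and birth times (priors of the text)."""
    m1 = sample_power_law(n, -2.7, 5.0, 100.0, rng)   # primary mass [Msun]
    q = rng.uniform(0.0, 1.0, size=n)                 # mass ratio q = M_2,0 / M_1,0
    a0 = sample_power_law(n, -1.0, 3.0, 1.0e4, rng)   # orbital separation [Rsun]
    t_lb = rng.uniform(0.0, 12.0, size=n)             # lookback time of birth [Gyr]
    return m1, q * m1, a0, t_lb

m1, m2, a0, t_lb = sample_primordial_binaries(100_000)
print(f"<M_1,0> = {m1.mean():.1f} Msun, median A_0 = {np.median(a0):.0f} Rsun")
```

Each sampled system would then be evolved through the supernova and CEE prescriptions above to decide whether it becomes an incipient BH I/LMXB.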
## 3 Results and Comparison with Observations
We select BH binaries with a 0.08 M\({}_{\odot}<M_{2}<6\) M\({}_{\odot}\) non-degenerate donor star and BH mass accretion rates \(>10^{-12}\) M\({}_{\odot}\) yr\({}^{-1}\) as I/LMXBs. Their predicted numbers in different BH mass range are presented in Table 3. Figure 2 shows the mass function of BHs. The left, middle and right panels depict the results of Models R0/R3, D0/D3 and S0/S3, respectively. The red and blue solid histograms demonstrate the number distribution of the BH mass in the case of \(k=0\) and \(k=3\) in the range of \(2.5-15\) M\({}_{\odot}\), respectively. The red (\(k=0\)) and blue (\(k=3\)) dashed lines show the normalized cumulative mass distribution, and the shaded region indicates the mass gap (\(2.5-5\) M\({}_{\odot}\)). In the traditional formation models (R0, D0, and S0), because the envelope of stars of mass \(>45\) M\({}_{\odot}\) is too heavy to be expelled by a low-mass companion during CEE, the masses of BHs are confined to be \(\sim 9-11\) M\({}_{\odot}\), but if considering cosmologically-coupled mass growth (Models R3, D3, and S3), the final masses of BHs can reach several hundred M\({}_{\odot}\), so we artificially cut off at 15 M\({}_{\odot}\) in the figure. We find that BHs with mass \(<15\) M\({}_{\odot}\) constitute \(\sim 65\%-94\%\) of the total BH I/LMXB population in this case (see Table 3).
While Model R0 produces no BHs with mass \(\lesssim 5\) M\({}_{\odot}\), Models D0 and S0 predict that \(\sim 90\%\) and \(\sim 24\%\) of the BHs are mass-gap BHs, respectively. In comparison, only \(\sim 33\%\) and \(\sim 7\%\) of BHs with masses \(<15\) M\({}_{\odot}\) fall within the mass gap in Models D3 and S3, respectively.
Figure 3 shows the number distribution of I/LMXBs in the donor mass - BH mass (\(M_{2}\)-\(M_{\rm BH}\)) plane. The upper and lower panels correspond to the \(k=0\) and \(k=3\) cases, respectively. The left, middle and right panels show the results of Models R0/R3, D0/D3 and S0/S3, respectively. The number of I/LMXBs in each pixel is reflected by different colors. The green crosses represent the observed I/LMXBs (see Table 1). It is seen that none of the three supernova scenarios in the case of \(k=0\) can well reproduce the observed I/LMXBs - in Model R0, the BH masses range from \(\sim 6\) to \(\sim 11\) M\({}_{\odot}\), and in Models D0 and S0, the BH masses are less than \(\sim 9\) and \(\sim 10\) M\({}_{\odot}\), respectively. The situation is improved in the \(k=3\) case, with the help of the cosmologically-coupled mass growth. Especially most of the observed BHs can be covered in Models D3 and S3. However, the relative numbers in different \(M_{2}\)-\(M_{\rm BH}\) regions still do not seem to precisely match the observations. In particular, the predicted donor masses in Models D3 and S3 are considerably larger than observations. Since low-mass stars are unlikely to survive CEE during the first mass transfer process, this implies that either abnormally high values of the CE efficiency are required or the majority of BHs may originate from failed supernovae (for discussion, see Li, 2015, and references therein).
Figure 4 shows the number distribution in the donor mass - orbital period (\(M_{2}-P_{\rm orb}\)) plane. Neither Model R0 nor R3 can reproduce the observed BH LMXBs,
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Models & Supernova mechanisms & \(k\) & \(N_{\rm tot}\) & \(N_{15}\) & \(N_{\rm mg}\) & \(N_{15}/N_{\rm tot}\) & \(N_{\rm mg}/N_{\rm tot}\) & \(N_{\rm mg}/N_{15}\) \\ \hline R0 & rapid & 0 & \(4.27\times 10^{2}\) & \(4.27\times 10^{2}\) & 0 & 100\% & 0\% & 0\% \\ R3 & rapid & 3 & \(6.95\times 10^{2}\) & \(6.53\times 10^{2}\) & 0 & 94\% & 0\% & 0\% \\ \hline D0 & delayed & 0 & \(1.28\times 10^{4}\) & \(1.28\times 10^{4}\) & \(1.16\times 10^{4}\) & 100\% & 90\% & 90\% \\ D3 & delayed & 3 & \(1.42\times 10^{4}\) & \(1.27\times 10^{4}\) & \(4.17\times 10^{3}\) & 89\% & 29\% & 33\% \\ \hline S0 & stochastic & 0 & \(2.86\times 10^{3}\) & \(2.86\times 10^{3}\) & \(6.85\times 10^{2}\) & 100\% & 24\% & 24\% \\ S3 & stochastic & 3 & \(1.06\times 10^{4}\) & \(6.83\times 10^{3}\) & \(4.48\times 10^{2}\) & 65\% & 4\% & 7\% \\ \hline \end{tabular}
\end{table}
Table 3: Binary population synthesis models and results. Here \(N_{\rm tot}\) is the predicted total number of I/LMXBs, and \(N_{15}\) and \(N_{\rm mg}\) are the numbers of I/LMXBs with BH masses below 15 M\({}_{\odot}\) and within the \(2.5-5\) M\({}_{\odot}\) mass gap, respectively.
while Models D and S seem to work better, regardless of whether \(k=0\) or 3. The cosmologically-coupled mass growth of BHs causes the orbit to shrink and RLOF to start earlier, leading to a higher birth rate of BH I/LMXBs. In addition, the duration of the mass transfer is prolonged (see Table 2 and the upper panels of Figure 1). So, more compact BH
LMXBs can be produced in Models D3 and S3 compared with Models D0 and S0. This seems to be compatible with the observed distribution.
## 4 Discussion and Conclusions
In this work, we employ both binary evolution calculation and binary population synthesis to simulate the formation and evolution of LMXBs. The goal of this work is to examine the possible effect of cosmologically-coupled mass growth of BHs, especially on the origin of the mass gap. Our main results are summarized as follows.
(1) The cosmologically-coupled mass growth of BHs seems to naturally result in the mass gap or dearth around \(2.5-5\,\mathrm{M}_{\odot}\). The percentage of mass gap BHs decreases from 90% and 24% in Models D0 and S0 to 29% and 4% in Models D3 and S3.
(2) The cosmologically-coupled mass growth of BHs always leads to orbital contraction. This can trigger RLOF in originally wide binaries, producing more compact LMXBs, which are difficult to form in the standard CEE theory, because a low-mass companion star does not have enough orbital energy to expel the envelope of the BH's progenitor star.
(3) The cosmologically-coupled mass growth extends the BH masses up to several hundred solar masses. Future observations of such intermediate-mass BH binaries will be crucial in testing the feasibility of the model.
There are some discussions in the literature on how the cosmologically-coupled mass growth of BHs can be constrained by observations. Gaia BH1 (El-Badry et al., 2023a) and BH2 (El-Badry et al., 2023b) are two detached BH binaries with BH masses of 9.62 \(\mathrm{M}_{\odot}\) and 8.9 \(\mathrm{M}_{\odot}\), respectively. The orbital periods are 186 day and 1277 day, and the eccentricities are 0.45 and 0.52 for Gaia BH1 and BH2, respectively. In the framework of isolated binary evolution, Andrae and El-Badry (2023) suggested that, if the BHs are cosmologically coupled to the expansion of the Universe, there is a \(70\%-77\%\) probability that the initial BH masses are below 2.2 \(\mathrm{M}_{\odot}\). However, the current binary orbits cannot accommodate the BH progenitors in the supergiant phase, and the low-mass optical companions are unable to eject the massive envelope during CEE. Thus, the two BHs seem unlikely to originate from isolated binary evolution (as discussed in El-Badry et al. 2023a,b). If, on the other hand, BHs are coupled to the expansion of the Universe, it is possible that these two BH binaries formed from extremely wide binaries: the initial low-mass BHs grew with the expansion of the Universe, resulting in a contraction of the orbit while conserving the eccentricity imparted by the supernova kick.

Figure 4: Similar to Figure 3 but for the donor mass-orbital period (\(M_{2}-P_{\mathrm{orb}}\)) distributions.
Rodriguez (2023) argued that if \(k=3\), two BH candidates in the globular cluster NGC 3201 (Giesers et al., 2018, 2019) must have near face-on orientations or have formed from stellar collapse with NS-like masses (\(<2.2\) M\({}_{\odot}\)). While the latter is not impossible, the cosmologically-coupled mass growth may shift low-mass BHs to be more massive and leave the mass gap. Alternatively, low-mass BHs may be formed from accretion induced collapse of NSs (e.g., Gao et al., 2022). In this situation, the cosmological coupling may not be significant.
The evolutionary tracks in Section 2 show that, after the mass transfer, the growth of the BHs, together with gravitational radiation, continues to drive the orbital contraction, and the final orbital periods can be as short as a few hours within the Hubble time. During this stage the binaries will manifest as low-frequency gravitational wave sources. The forthcoming low-frequency gravitational wave detectors (e.g., LISA, Amaro-Seoane et al., 2023) could detect these binaries and test whether the BH evolution is influenced by the expansion of the Universe. Considering the cosmologically-coupled mass growth of BHs (assuming \(k=3\)), Ghodla et al. (2023) showed that the estimated rates of gravitational-wave merger events would be three orders of magnitude higher than the observed rates. The LIGO-Virgo-KAGRA collaboration is currently conducting the O4 observing run. It is anticipated that a larger sample of BH mergers at different redshifts will provide a valuable test of the cosmological coupling of BHs.
We thank Yuan-Xing Gao, Lei Feng, Yi-Ying Wang and Lei Lei for helpful discussions. We are also grateful to an anonymous referee for useful comments and suggestions. This work was supported by the National Key Research and Development Program of China (2021YFA0718500), the Natural Science Foundation of China under grants No. 12041301 and 12121003, and Project U1838201 supported by NSFC and CAS. The computation was made by using the facilities at the High-Performance Computing Center of Collaborative Innovation Center of Advanced Microstructures (Nanjing University, [https://hpc.nju.edu.cn](https://hpc.nju.edu.cn)). Software: MESA (Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023), BSE (Hurley et al., 2002), Numpy (Harris et al., 2020), Astropy (Astropy Collaboration et al., 2013), Matplotlib (Hunter, 2007), Jupyter-lab ([https://jupyterlab.readthedocs.io](https://jupyterlab.readthedocs.io)).
|
2304.11146 | Second order add/drop filter with a single ring resonator | We show theoretically and experimentally how a flat-top second-order response
can be achieved with a self-coupled single add-drop ring resonator based on two
couplers with different splitting ratios. The resulting device is a 1x1 filter,
reflecting light back in the input waveguide at resonating wavelengths in the
passbands, and transmitting light in the output waveguide at all other
non-resonating wavelengths. Different implementations of the filter have been
designed and fabricated on a micron-scale silicon photonics platform. They are
based on compact Euler bends - either U-bends or L-bends - and Multi-Mode
Interferometers as splitters for the ring resonators. Different finesse values
have been achieved by using either 50:50 MMIs in conjunction with 85:15 MMIs or
85:15 MMIs in conjunction with 95:05 double MMIs. Unlike ordinary lowest order
directional couplers, the MMIs couple most of the power in the cross-port which
make them particularly suitable for the topology of the self-coupled ring,
which would otherwise require a waveguide crossing. Experimental results are
presented, showing good agreement with simulations. The proposed devices can
find applications as wavelength-selective reflectors for relatively broad-band
lasers or used as 2x2 add-drop filters when two exact replicas of the device
are placed on the arms of a Mach-Zehnder interferometer. | Matteo Cherchi, Fei Sun, Markku Kapulainen, Tapani Vehmas, Mikko Harjanne, Timo Aalto | 2023-04-21T17:43:38Z | http://arxiv.org/abs/2304.11146v1 | # Second order add/drop filter with a single ring resonator
###### Abstract
We show theoretically and experimentally how a flat-top second-order response can be achieved with a self-coupled single add-drop ring resonator based on two couplers with different splitting ratios. The resulting device is a 1x1 filter, reflecting light back in the input waveguide at resonating wavelengths in the passbands, and transmitting light in the output waveguide at all other non-resonating wavelengths. Different implementations of the filter have been designed and fabricated on a micron-scale silicon photonics platform. They are based on compact Euler bends - either U-bends or L-bends - and Multi-Mode Interferometers as splitters for the ring resonators. Different finesse values have been achieved by using either 50:50 MMIs in conjunction with 85:15 MMIs or 85:15 MMIs in conjunction with 95:05 double MMIs. Unlike ordinary lowest order directional couplers, the MMIs couple most of the power in the cross-port, which makes them particularly suitable for the topology of the self-coupled ring, which would otherwise require a waveguide crossing. Experimental results are presented, showing good agreement with simulations. The proposed devices can find applications as wavelength-selective reflectors for relatively broad-band lasers or be used as 2x2 add-drop filters when two exact replicas of the device are placed on the arms of a Mach-Zehnder interferometer.
Footnote *: [email protected]; phone +358 40 684 9040; fax +358 20 722 7012; [http://www.vtt.fi/siliconphotonics](http://www.vtt.fi/siliconphotonics)
## 1 Introduction
Optical filters with flat-top response are highly desirable and often mandatory, especially in telecommunication applications, where in-band ripples can cause unwanted signal distortions (e.g. in high-speed links) and/or power variations induced by wavelength drifts of the light sources (e.g. in CWDM systems with no thermal control). One of the most popular solutions for optical filtering is the ring-resonator[1, 2], which is the integrated-optics version of the Fabry-Perot interferometer. Given that a single ring resonator inherently shows the typical Lorentzian response of Fabry-Perot resonators, the typical approach to achieve flat-top response is to combine multiple ring resonators, either in series[3] or in parallel[4]. Configurations with two coupled resonators (i.e. second-order filters) have also been exploited to highlight Fano resonances[5] (often referred to as the all-optical analogue of electromagnetically induced transparency) in optical circuits[6]. More recently a few papers have shown that Fano resonances can also be obtained in a single ring resonator, by suitably coupling light from more than one port[7, 8]. Remarkably, all configurations reported to date require a waveguide crossing when implemented in a photonic integrated circuit (PIC).
In the present paper we focused on the simple configuration proposed by Wang et al.[8], with two main goals: first to demonstrate that it is possible to design a topologically equivalent structure without any crossing, and second that a suitable design of the coupling ratios allow to achieve flat-top response instead of a Fano resonance that is of little use for real applications.
The paper is organized as follows: in section 2 the novel configuration with no crossing is introduced, in section 3 the generic implementation of ring resonators in the VTT micron-scale silicon photonics platform is explained, in section 4 the design of flat-top filters is reported, in section 5 the layout and fabrication of the filters are presented, and in section 6 the experimental results are discussed. Conclusions and perspectives are finally provided.
## 2 A second-order ring resonator without crossings
A very simple second order response from a single-ring resonator has been achieved by Wang et al.[8] by connecting the two couplers of the ring through a waveguide, so to create two counter-propagating light paths in the same ring. The resonating light is back-reflected in the input port, effectively acting also as drop port, whereas non-resonating light is transmitted in the output port, acting as through port. Assuming the two couplers of the ring resonator are identical, at
exact resonance the interplay of the two coupled counter-propagating modes induces destructive interference in the drop port, resulting in a narrow peak in the middle of the resonant dip of the through port. As shown in Fig. 1, the coupling configuration requires a crossing of the output port with the connecting waveguide (blue). This was not a big deal in the original paper, where the concept was demonstrated with optical fiber loops, but it can be an issue in an integrated version in a PIC. In fact, even if waveguide crossings are possible in most waveguide platforms, crossing losses are usually non-negligible, special fabrication steps are typically required, and the footprint of the crossing can also be large. Actually, crossing losses have little impact in this case, as the crossing does not affect the ring resonator and its quality factor, but the requirement of additional etch steps, together with the relatively larger footprint, has a negative impact on the fabrication cost.
In order to avoid crossings, we propose to replace the original couplers (having coupling \(C_{1}\) and \(C_{2}\)) of the ring resonator with complementary couplers having inverse splitting ratio (i.e. \(\bar{C}_{1}=1-C_{1}\) and \(\bar{C}_{2}=1-C_{2}\)), and reconnect the waveguides in order to ensure the same coupling to all paths, as shown in Fig. 1. This way, the connecting waveguide will be embedded inside the ring, and no crossing is required anymore. In a sense, we can claim that the crossing is effectively moved inside the couplers. To better understand the topological equivalence of the two configurations, in Fig. 2 we have highlighted rings (pink) and connectors (yellow) both in the original configuration and in our upgraded version.
It may be argued that, in most practical cases, the original coupling of the couplers is small, whereas our version requires high coupling, which is not easy to achieve in most platforms with directional couplers, unless long couplers are used, with a potentially negative impact on the size of the ring resonator and/or on its losses. Another issue with our version is that the ring itself is broken into two pieces divided by the couplers, which means that transition losses in the couplers, usually negligible, can instead have a non-negligible impact on the quality factor of the resonator. Even if these arguments make a lot of sense in most waveguide platforms, they don't actually apply to the VTT platform, where ring resonators are made based on multi-mode interference (MMI) splitters as couplers, which, unlike normal directional couplers, have most of the power coupled in the cross port instead of the bar port. This will be explained in more detail in the next section.
Figure 1: Schematic representation of the resonator proposed by Wang et al. [8] (left) and proposed version with no crossings (right), based on complementary coupling ratios.
Figure 2: Direct comparison of the configuration by Wang et al. [8] (left) and the novel configuration (right), in order to highlight the ring resonator (pink) and the connecting waveguide (yellow).
## 3 Ring Resonators in VTT Platform
VTT platform is an interesting alternative silicon photonics platform, based on 3 \(\upmu\)m thick silicon on insulator (SOI) wafers. Single mode condition is ensured by suitably designed rib waveguides[9], obtained by etching 40% of the silicon layer thickness. Strip waveguides are obtained by etching the remaining silicon through a self-aligned process[10], also ensuring very low loss rib-to-strip converters[11]. Total internal reflection (TIR) in silicon can also be exploited to design and fabricate low loss mirrors for both rib and strip waveguides[12]. All these elements are schematically depicted in Fig. 3.a. A thermal oxidation step is applied to all types of waveguides, leading to propagation losses as low as 0.10 dB/cm in the rib waveguides and 0.13 dB/cm in the strip waveguides. Beside TIR mirrors, micron-scale bends are also possible in multimode strip waveguides, which effectively operate as single-mode, thanks to a recent breakthrough[13, 14] (see Fig. 3.b), which led also to a patent[15].
In principle there are two approaches to implement ring resonators in the VTT platform. Implementations based on rib-waveguides are limited by the large bending radius (about 5 mm) required and by the length of the couplers. Furthermore, rib couplers are very sensitive to etch-depth variations of the rib waveguides, which are typically around \(\pm\) 5% at wafer scale, making the Q change significantly across the wafer and from wafer to wafer. The most promising approach is instead based on strip waveguides and Euler bends, making sure that higher order modes are not excited.
Figure 4: Ring resonators based on standard MMI splitters: only two ports are accessible.
Figure 3: Exemplified schematics of the VTT silicon photonics platform. a) Rib and strip waveguides with their mode distribution, rib to strip converter, and TIR mirror; b) the Euler bend.
But a major problem with this approach is that light is very well confined inside the strip waveguide, which means that, taking also into account the lithographic limitations for minimum achievable gap sizes, directional couplers are not an option. The only viable way to couple light between waveguides of this type is to use MMI splitters. MMIs are well known for their relatively broadband operation and tolerance to fabrication errors, the only limitation being that covering all possible splitting ratios is not trivial. In this paper we tried to use as much as possible the discrete set of splitting ratios available in standard MMI splitters, resorting to double MMIs[16] only when strictly necessary, so as to minimize the number of elements in the filter layout. In principle also tapered MMIs[17] could have been used instead, to shorten the length of the resonators.
Ring resonators have been fabricated on the VTT platform based on standard MMIs that, unlike usual directional couplers, have most of the power coupled in the cross port, as depicted in Fig. 4. This results in an odd configuration, where two of the ring ports, namely the through and add ports, are not available, as they are embedded inside the ring. The only way to access them would be through waveguide crossings that would affect the ring quality factor. A typical response from the drop port of a ring based on 85:15 MMIs is reported in Fig. 5, with a finesse of 16 and a quality factor of 12,000. On the left-hand side of all peaks a significant distortion of the Lorentzian shape can be seen, most likely due to excitation of higher-order modes.
We notice that, whereas having most of the power in the cross port is a major limitation in standard ring resonators, the analysis in section 2 clearly showed that the very same property is instead an advantage for the particular configuration under study in this paper, as it avoids using any crossing. In other words, we can claim that our platform is perfectly suitable to implement the type of second order filter in Fig. 2, thanks to the splitting properties of the MMIs.
## 4 Design of Flat-Top Filters
The amplitude transmission \(\tau\) and reflection \(\rho\) of the filter in Fig. 2 can be expressed as[8]
\[\tau=\big{(}ae^{i\delta}\big{)}^{v}\frac{\big{(}t_{1}-t_{2}ae^{i\delta}\big{)} \big{(}t_{2}-t_{1}ae^{i\delta}\big{)}+c_{1}^{2}c_{2}^{2}ae^{i\delta}}{(1-t_{1 }t_{2}ae^{i\delta})^{2}} \tag{1}\]
and
Figure 5: Example spectrum of a fabricated ring based on 85:15 MMIs and Euler bends.
\[\rho=-2\big{(}ae^{i\delta}\big{)}^{\nu}\frac{\big{(}t_{1}-t_{2}ae^{i\delta}\big{)} c_{1}c_{2}\sqrt{ae^{i\delta}}}{(1-t_{1}t_{2}ae^{i\delta})^{2}}\ \, \tag{2}\]
where \(t_{1}\equiv\sqrt{T_{1}}\), \(t_{2}\equiv\sqrt{T_{2}}\), \(c_{1}\equiv\sqrt{C_{1}}\), and \(c_{2}\equiv\sqrt{C_{2}}\) are the bar and cross coupling coefficients of the original design, i.e. the cross and bar coefficients of the novel proposed layout. The parameter \(\delta\) accounts for the phase accumulated in a round trip in the ring, while \(a\) is the amplitude attenuation per round trip. The parameter \(\nu\) is the ratio of the connector length relative to the ring length.
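As an illustration of Eqs. (1)-(2), the sketch below evaluates the through and reflection power responses over two free spectral ranges. It assumes lossless couplers with \(C_{i}=1-T_{i}\); the mapping of the quoted MMI splitting ratios onto \(T\) and \(C\), as well as the value \(\nu=0.5\), are illustrative assumptions rather than the actual design parameters.

```python
import numpy as np

def filter_response(T1, T2, a_rt=1.0, nu=0.5, n_pts=2000):
    """Evaluate Eqs. (1)-(2) versus the round-trip phase delta.
    T1, T2: power transmission of the two couplers (C = 1 - T, lossless assumed);
    a_rt:   amplitude attenuation per round trip (1.0 = lossless ring);
    nu:     connector length relative to the ring length."""
    delta = np.linspace(0.0, 4.0 * np.pi, n_pts)
    t1, t2 = np.sqrt(T1), np.sqrt(T2)
    c1, c2 = np.sqrt(1.0 - T1), np.sqrt(1.0 - T2)
    x = a_rt * np.exp(1j * delta)                       # a * exp(i*delta)
    den = (1.0 - t1 * t2 * x) ** 2
    tau = x**nu * ((t1 - t2 * x) * (t2 - t1 * x) + c1**2 * c2**2 * x) / den
    rho = -2.0 * x**nu * (t1 - t2 * x) * c1 * c2 * np.sqrt(x) / den
    return delta, np.abs(tau) ** 2, np.abs(rho) ** 2

# Example: the low-finesse combination discussed below (50:50 and 85:15 splitters).
delta, T_through, R_drop = filter_response(T1=0.50, T2=0.85)
```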
Based on these formulas, we have explored a few different configurations leading to flat-top response, trying to exploit as much as possible standard MMI splitting ratios. The ideal spectral responses of the two chosen configurations are shown in Fig. 6, corresponding to completely lossless devices.
By choosing significantly different splitting ratios for the two MMIs of the ring, the Fano resonance can be significantly (but not completely) suppressed, resulting in a flat-top response. The first combination has low finesse, as the splitting ratios are 50:50 and 85:15. This is also clear from the low contrast (\(<10\) dB) of the resonances of the drop port (reflection R in the plots). The second choice has significantly higher finesse, being based on 85:15 and 95:05 splitting ratios. The 95:05 splitter is obtained by cascading two 50:50 MMIs connected by two waveguides with a small tilt (see the schematic in Fig. 6). The impact of losses, especially those coming from the MMI splitters, will clearly be higher on the high-finesse version. Typical measured losses of an MMI splitter are between 0.1 and 0.2 dB, which means that the maximum round-trip loss can be estimated at around 0.5 dB in the worst case. A more detailed analysis of the impact of losses is presented in section 6.
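The idea behind the 95:05 double MMI can be sketched with a simple transfer-matrix model: two ideal 3-dB couplers joined by arms with a small phase imbalance behave as a Mach-Zehnder whose effective cross-coupling is set by that imbalance. The sketch below is an idealization and does not reproduce the actual double-MMI geometry of [16].

```python
import numpy as np

def cascaded_coupler_cross(delta_phi):
    """Power cross-coupling of two ideal 50:50 couplers whose connecting
    arms differ by the phase delta_phi (equals cos^2(delta_phi / 2))."""
    c50 = np.array([[1.0, 1j], [1j, 1.0]]) / np.sqrt(2.0)   # ideal 3-dB coupler
    arms = np.diag([np.exp(1j * delta_phi), 1.0])            # arm phase imbalance
    m = c50 @ arms @ c50
    return np.abs(m[1, 0]) ** 2

# Phase imbalance giving an effective 95:05 split (about 0.45 rad):
phis = np.linspace(0.0, np.pi, 10001)
cross = np.array([cascaded_coupler_cross(p) for p in phis])
print(phis[np.argmin(np.abs(cross - 0.95))])
```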
## 5 Layout and Fabrication
Four different designs have been included in the mask layout, with two different finesse values and either L-bends or U-bends, as illustrated in Fig. 7. The input/output waveguides were designed to match a fiber array polished at an 8\({}^{\circ}\) angle, in order to make characterization easier and suppress residual back-reflections from the silicon facet to the fiber. The rib waveguides are tapered up to 13 \(\upmu\)m width at the interface, so as to optimize coupling with the fiber modes.
We have fabricated the designed structures on 3 \(\upmu\)m thick SOI wafers based on the well-established VTT silicon waveguide technology. The devices were fabricated using smart-cut silicon-on-insulator wafers from SOITEC.
Figure 6: Simulated spectral response of the chosen filters.
Initial SOI layer thickness was increased with epitaxial silicon growth. FilmTek 4000 spectrophotometer was used to measure the thicknesses of the SOI layer and the underlying buried oxide. A 500 nm thick Tetraethyl orthosilicate (TEOS) layer was deposited on the wafer in LPCVD diffusion furnace to work as a hard mask in silicon etching. Waveguide fabrication was done using our standard double-masking multi-step process[8]. In the process, two mask layers are passively aligned with respect to each other, and two separate silicon etch-steps form the rib and strip waveguide structures and waveguide facets. Lithography steps were done using FPA-2500i4 i-line wafer stepper from Canon Inc. and pattern transfer to oxide hard mask was done using LAM 4520 reactive ion etcher with CF\({}_{4}\) and CHF\({}_{3}\) chemistry. Waveguides were etched into silicon using Omega i2l ICP etcher from SPTS Technologies. The etching was done with a modified Bosch process[21] using SF\({}_{6}\) and C\({}_{4}\)F\({}_{8}\) as etch and passivation gases, respectively, and O\({}_{2}\) as an etch gas to break passivation polymer formed by C\({}_{4}\)F\({}_{8}\). After silicon etching, TEOS hard mask was removed with buffered oxide etch (BOE). Hard mask removal was followed by wet thermal oxidation consuming 225 nm of silicon, and thermal oxide removal with BOE. This was done to smoothen the etched surfaces, and to thin the SOI-layer to its final thickness of approximately 3 \(\upmu\)m.
Figure 8: Micrographs of some of the fabricated filters and details of a 85:15 MMI splitter and of the slightly bent junction between two 50:50 MMIs in a 95:05 double MMI.
Figure 7: The four types of filters put on mask (left) and detail of the two implementations based on L-bends, namely Type 4 and Type 2.
A 0.17 \(\upmu\)m thick silicon nitride layer was then deposited on the wafer with LPCVD as an antireflection coating and to prevent the etching of buried oxide in later oxide wet etch process steps. This layer was patterned in hot phosphoric acid using a hard mask made of 0.25 \(\upmu\)m thick LPCVD TEOS patterned with BOE. After silicon nitride layer patterning, 265 nm of LPCVD TEOS was deposited as a cladding layer for the waveguides. As a last step, the oxide layers were removed from the waveguide facets with BOE etch, and wafer was diced into chips.
Micrographs of some fabricated filters are shown in Fig. 8, together with details of a 85:15 MMI and of the slightly bent junction in a 95:05 double MMI.
## 6 Characterization and Discussion
We initially characterized the filters over a broad wavelength range from 1530 nm to 1570 nm, using a SLED and an optical spectrum analyzer (OSA) (see Fig. 9.a), only to realize that the limited 70 pm resolution of the OSA was too coarse to fully capture the spectral features of the filters. For this reason, we also made additional measurements with a narrow-line tunable laser and 5 pm tuning steps (see Fig. 9.b). In both cases we used a circulator to collect the light back-reflected from the filters.
The low-resolution measurements are summarized in Fig. 10, and do not show proper second-order behavior in all cases. The four different filters have been measured on two different chips, and the spectral features are not identical but consistent. The high-finesse filters show very high losses in the reflection spectrum, not to mention major spectral issues in the Type 3 implementation (with U-bends), most likely due to significant excitation of higher-order modes.
Figure 10: Details of some resonant peaks measured with the SLED from two different fabricated chips: top row Chip 1, bottom row Chip 2.
Figure 9: Measurement setups used for the characterization: (a) broadband SLED with an optical spectrum analyzer; (b) narrow-line tunable laser.
Type 1 and Type 2 filters from Chip 2 have been measured again using the tunable laser, and the comparison shown in Fig. 11 clearly highlights the detrimental effect of convolving the measured spectrum with the low OSA resolution. The high-resolution measurements instead show second-order responses; in particular, the right-hand peak in Fig. 11.a, also zoomed in in Fig. 11.b, matches the target flat-top design very nicely. On the other hand, the other peaks do not look as good as that one, probably due both to the impact of losses and to beating of excited higher-order modes.
By simulating the impact of losses on the spectral response, we have highlighted that, while the transmission response is independent of the chosen input port, as clearly seen by the symmetric structure of Eq. (1), this is not the case for the reflection, Eq. (2) lacking that symmetry. We show in Fig. 12 the simulated impact of 0.458 dB round-trip loss on the two designed filters, clearly highlighting this symmetry breaking.
## 7 Conclusions
We have introduced a new design of second-order filter based on a single ring resonator and implemented it in our platform, by exploiting the unique properties of MMI splitters. The designed devices are based on different splitting ratios for the two splitters of the rings, targeting flat-top response instead of Fano resonances. The fabricated devices show second-order behavior, but only in some cases do the spectra match the design. Designs with high finesse suffer from high losses. We have also highlighted by simulation the symmetry breaking of the reflected spectrum, depending on which port is used as the input.
Figure 11: Comparison of high-resolution (top row) and low-resolution (bottom row) spectra for Type 1 and Type 2 filters from Chip 2.
Figure 12: Simulated spectral response of the chosen filters entering different ports assuming non-negligible loss. |
2302.03814 | Rotation curve of the Milky Way and the mass of Sagittarius A* | In the present work we derive an analytical expression for the mass density
of an object with spherical symmetry, whose corresponding potential allows
obtaining a circular velocity around it that is in agreement with the observed
rotation curve of galaxies. The rotation curve of our galaxy is analyzed,
determining the properties of the central object, its radius and mass whose
value obtained is very close to that reported in the literature. | José Enrique Hernández Ramírez | 2023-02-08T00:40:59Z | http://arxiv.org/abs/2302.03814v1 | # Rotation curve of the Milky Way and the mass of Sagittarius A*
###### Abstract
In the present work we derive an analytical expression for the mass density of an object with spherical symmetry, whose corresponding potential allows obtaining a circular velocity around it that is in agreement with the observed rotation curve of galaxies. The rotation curve of our galaxy is analyzed, determining the properties of the central object, namely its radius and mass, the latter being very close to the value reported in the literature.
Milky Way Galaxy rotation curve Dark matter Sagittarius A* Angular momentum
## 1 Introduction
The rotation curves of galaxies have motivated various theories and profile models ([3]) to explain the mismatch between theory and observational data. Stars orbiting the center of a galaxy should show a decline in rotation speed as their distance from the center increases. Many different observations show the opposite: moving away from the center, the speeds first increase and then settle at an almost constant value. The rotation speed of the stars is much greater than allowed by the estimated mass of the galaxy, calculated as the sum of the masses of all its visible stars. The most widely accepted explanation invokes the presence of a non-visible mass (dark matter), whose inclusion makes it possible to explain the non-decaying shape of the rotation curve. In this work we consider the presence of this non-visible mass, whose density will be determined and shown to obey a specific law.
## 2 The model
We will build a density \(\rho\) that allows us to explain its properties as well as its behavior with the objects with which it interacts. We will deal, in this model, with a central object without rotation and with spherical symmetry.
The radial coordinate \(r\) is obviously a magnitude to consider in the expression of the density that we want to build. As daily experience shows, the density of large objects, such as planets and stars, is not constant: it depends on which point in the interior of the object we refer to. Close to the surface the density is lower than at any point very close to the center. This leads us to assume that the density is inversely proportional to a power of the radial coordinate, \(\rho\propto 1/r^{n}\), where \(n\) is an integer to be determined. As we are dealing with problems related to gravitation, we will also consider a magnitude that characterizes the interaction of the object of study with other objects around it, the universal gravitational constant \(G\). The other magnitude to consider, which characterizes the internal properties of the object of study, is its escape velocity, \(v_{es}\).
Thus we can express the density as,
\[\rho(r)=k\,G^{x}\,v_{es}^{y}\,r^{-n}, \tag{1}\]
where \(k\) is a dimensionless proportionality constant and \(x\), \(y\) and \(n\) are exponents to be determined. A dimensional analysis gives \(x=-1\), \(y=2\) and \(n=2\), so,
\[\rho(r)=k\frac{v_{es}^{2}}{G\,r^{2}}. \tag{2}\]
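The exponents follow from matching the dimensions of mass, length and time on both sides of Eq. (1). A small symbolic check of this bookkeeping (with the \(r^{-n}\) convention used above):

```python
import sympy as sp

x, y, n = sp.symbols('x y n')

# Dimension exponents (L, M, T): rho ~ M L^-3, G ~ L^3 M^-1 T^-2, v ~ L T^-1, r ~ L.
# Require G**x * v**y * r**(-n) to have the dimensions of rho.
eq_length = sp.Eq(3 * x + y - n, -3)
eq_mass   = sp.Eq(-x, 1)
eq_time   = sp.Eq(-2 * x - y, 0)
print(sp.solve([eq_length, eq_mass, eq_time], [x, y, n]))   # {x: -1, y: 2, n: 2}
```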
As is well known, the escape velocity of an object of mass \(M\) and radius \(a\) is given by \(v_{es}^{2}=2\,M\,G/a\); thus, we can write the above equation as
\[\rho(r)=k\frac{2M}{a\,r^{2}}. \tag{3}\]
We determine the proportionality constant \(k\) by requiring that the mass contained within the radius \(a\) (volume \(4\pi a^{3}/3\)) is \(M\). Thus, \(k=1/(8\pi)\). In this way we can finally express the density as
\[\rho(r)=\frac{1}{3}\,\rho_{obj}\left(\frac{a}{r}\right)^{2}, \tag{4}\]
being \(\rho_{obj}=3M/4\,\pi a^{3}\).
The mass corresponding to the distribution given by 4 is
\[m(r)=M\,\frac{r}{a}. \tag{5}\]
Note that at small distances from the center of the distribution the density given by Eq. (4) is large but the enclosed mass given by Eq. (5) is small, while at large distances the opposite is true.
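Eq. (5) follows from integrating the density profile of Eq. (4) over spheres of radius \(r\); a quick symbolic verification:

```python
import sympy as sp

r, s, a, M = sp.symbols('r s a M', positive=True)
rho_obj = 3 * M / (4 * sp.pi * a**3)
rho = sp.Rational(1, 3) * rho_obj * (a / s)**2          # Eq. (4), radial variable s
m_enclosed = sp.integrate(4 * sp.pi * s**2 * rho, (s, 0, r))
print(sp.simplify(m_enclosed))                          # M*r/a, i.e. Eq. (5)
```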
It should be noted that the above density, obtained analytically, is very similar to that used by [2], which was obtained through high-resolution N-body simulations to study the equilibrium density profiles of dark matter halos.
## 3 Potential
Now we consider the potential \(\phi\) generated by this mass distribution. We assume that this density is not limited to the boundary \(a\) of the object but extends beyond it; that is, the space surrounding our object of study is not empty. In this model we only consider the mass distribution given by Eq. (4) and the vacuum, and we assume that the object is not rotating.
Solving Poisson's equation for this mass distribution, we find that
\[\phi(r)=\frac{M\,G}{a}\,\left[-\ln(r/a)-c_{1}\,a/r+c_{2}\right], \tag{6}\]
where, imposing the boundary conditions \(\phi(a)=-G\,M/a\) and \(\phi^{\prime}(a)=G\,M/a^{2}\), we find that \(c_{1}=2\) and \(c_{2}=1\); thus the potential is given by
\[\phi(r)=\frac{M\,G}{a}\,\left[-\ln(r/a)-2\,a/r+1\right]. \tag{7}\]
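The two constants can be checked symbolically by imposing the boundary conditions quoted above on the general solution of Eq. (6):

```python
import sympy as sp

r, a, G, M = sp.symbols('r a G M', positive=True)
c1, c2 = sp.symbols('c1 c2')
phi = (M * G / a) * (-sp.log(r / a) - c1 * a / r + c2)           # Eq. (6)

bc_value = sp.Eq(phi.subs(r, a), -G * M / a)                      # phi(a)  = -G M / a
bc_slope = sp.Eq(sp.diff(phi, r).subs(r, a), G * M / a**2)        # phi'(a) =  G M / a^2
print(sp.solve([bc_value, bc_slope], [c1, c2]))                   # {c1: 2, c2: 1}
```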
## 4 Rotation Velocity
Now we can determine the rotation velocity \(v\) of a particle of mass \(m\) around our central object, considering that the space around it is not empty but rather filled with an unknown substance whose density obeys the distribution given by Eq. (4).
The rotation velocity \(v\) can be determined as,
\[\frac{v^{2}}{r}=-\frac{d\phi}{dr}, \tag{8}\]
thus we find that,
\[v=\sqrt{\frac{G\,M}{a}\left(1-\frac{2\,a}{r}\right)}, \tag{9}\]
whose graph is shown in Fig. 1.
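Eq. (9) follows directly from substituting the potential of Eq. (7) into Eq. (8); a short symbolic check:

```python
import sympy as sp

r, a, G, M = sp.symbols('r a G M', positive=True)
phi = (M * G / a) * (-sp.log(r / a) - 2 * a / r + 1)   # Eq. (7)
v_squared = sp.simplify(-r * sp.diff(phi, r))          # Eq. (8): v^2 = -r dphi/dr
print(v_squared)                                        # equivalent to (G*M/a)*(1 - 2*a/r), i.e. Eq. (9)
```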
We can rewrite equation 9 in a convenient way
\[Y=B-A\,X \tag{11}\]
where \(Y=v^{2}\), \(A=2G\,M\), \(B=G\,M/a\) and \(X=1/r\). Determining, from Eq. 11, the constants \(A\) and \(B\) using the observational data, we were able to find the mass \(M\) and the radius \(a\) of the central object.
In the case of our Galaxy, the curve is shown in Fig. 2. It does not show the linear behavior predicted by the model, because the model assumes a central object of fixed mass \(M\) and radius \(a\). In a galaxy, the enclosed mass varies, due on the one hand to the large number of objects between the central object and the orbiting star, and on the other to the non-visible mass, whose density distribution is given by Eq. (4). However, we note that certain groups of data
have a linear behavior. For this reason, we split the data into five groups, using the high degree of linearity in each of them as a criterion. The first group from 1-4, the second from 5-21, the third from 22-41, the fourth from 42-68 and the last from 69-99. The equations of the lines, according to equation 11, are:
\(Y_{1}=-1.1177\times 10^{25}\,X+7.3641\times 10^{9}\),
\(Y_{2}=-1.1543\times 10^{26}\,X+2.3121\times 10^{10}\),
\(Y_{3}=-2.3857\times 10^{26}\,X+2.643\times 10^{10}\),
\(Y_{4}=-9.3890\times 10^{26}\,X+3.67\times 10^{10}\) and
\(Y_{5}=-3.7650\times 10^{27}\,X+6.3970\times 10^{10}\), respectively.
Through them we were able to determine the corresponding masses, these being \(M_{1}=8.44\times 10^{34}\) kg, \(M_{2}=8.72\times 10^{35}\) kg, \(M_{3}=1.80\times 10^{36}\) kg, \(M_{4}=7.09\times 10^{36}\) kg and \(M_{5}=2.84\times 10^{37}\) kg, whose average is \(M_{SgrA^{*}}=7.65\times 10^{36}\) kg, a value very close to that reported in [1] (3.85 million solar masses).
The corresponding radii are: \(a_{1}=7.55\times 10^{14}\,m\), \(a_{2}=2.51\times 10^{15}\,m\), \(a_{3}=4.59\times 10^{15}\,m\), \(a_{4}=1.27\times 10^{16}\,m\), \(a_{5}=2.94\times 10^{16}\,m\), whose mean value is \(a_{SgrA^{*}}=9.98\times 10^{15}\,m\). The observational and fitted curves are shown in Fig. 3.
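A quick arithmetic check of these averages (using the five fitted values quoted above and \(M_{\odot}\approx 1.989\times 10^{30}\) kg):

```python
import numpy as np

M_fit = np.array([8.44e34, 8.72e35, 1.80e36, 7.09e36, 2.84e37])   # kg
a_fit = np.array([7.55e14, 2.51e15, 4.59e15, 1.27e16, 2.94e16])   # m
M_sun = 1.989e30                                                   # kg

print(M_fit.mean())           # ~7.65e36 kg
print(M_fit.mean() / M_sun)   # ~3.85e6 solar masses
print(a_fit.mean())           # ~1.0e16 m
```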
\(M_{SgrA^{*}}\) and \(a_{SgrA^{*}}\) are average values. The best approximation for them is provided by the two objects closest to the galactic center that orbit it. On the other hand, the best approximation to the mass and dimension of the galaxy (\(M_{galaxy}\) and \(R_{galaxy}\)) comes from the two most distant objects that orbit it.
## 6 Angular Momentum Conservation
Since the Lagrangian of the system does not depend explicitly on \(\theta\), the angular momentum \(L\) is conserved. Beyond a certain distance the rotation speed remains almost constant while the distance of the orbiting object keeps increasing, so one may ask how the angular momentum can remain conserved. Let us see:
\[L=m\,v\,d, \tag{12}\]
being \(m\) the mass of the star, \(v\) its speed of rotation and \(d\) its distance from the galactic center. Since \(L\) is constant then, the mass must vary according to the equation
\[m(r)=\frac{m_{0}\,v_{0}\,r_{0}}{r\sqrt{\frac{GM}{a}\left(1-2\,a/r\right)}}. \tag{13}\]
As the star moves away from the center of rotation, it loses mass and, with it, energy. The mass loss occurs through the well-known nuclear processes taking place in stars, in addition to the mechanism implied here by the conservation of angular momentum.
## 7 Conclusion
The results obtained with the model were possible by assuming the presence of a non-visible distribution of matter outside the limits of the central object. Eq. (9) agrees with the observational data in that it does not decay, and it is valid for spiral galaxies or other galaxies whose \(v^{2}\) vs \(1/r\) dependence follows Eq. (11). Since the rotation speed of the stars depends on the mass of the galaxy contained within a radius \(r\), several rotation curves are obtained, each of which reproduces the observational data over a certain range. A more realistic model should include the rotation of the central object, which would allow a better fit to the observational data.
Figure 1: The curve is valid for \(r\geq 2a\). As shown in the figure, the circular velocity does not decrease and approaches \(\sqrt{G\,M/a}\) as \(r\rightarrow\infty\). Here we set \(a=1\) and \(GM=0.5\) for illustrative purposes.
2303.05194 | Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation | Standard unsupervised domain adaptation methods adapt models from a source to
a target domain using labeled source data and unlabeled target data jointly. In
model adaptation, on the other hand, access to the labeled source data is
prohibited, i.e., only the source-trained model and unlabeled target data are
available. We investigate normal-to-adverse condition model adaptation for
semantic segmentation, whereby image-level correspondences are available in the
target domain. The target set consists of unlabeled pairs of adverse- and
normal-condition street images taken at GPS-matched locations. Our method --
CMA -- leverages such image pairs to learn condition-invariant features via
contrastive learning. In particular, CMA encourages features in the embedding
space to be grouped according to their condition-invariant semantic content and
not according to the condition under which respective inputs are captured. To
obtain accurate cross-domain semantic correspondences, we warp the normal image
to the viewpoint of the adverse image and leverage warp-confidence scores to
create robust, aggregated features. With this approach, we achieve
state-of-the-art semantic segmentation performance for model adaptation on
several normal-to-adverse adaptation benchmarks, such as ACDC and Dark Zurich.
We also evaluate CMA on a newly procured adverse-condition generalization
benchmark and report favorable results compared to standard unsupervised domain
adaptation methods, despite the comparative handicap of CMA due to source data
inaccessibility. Code is available at https://github.com/brdav/cma. | David Bruggemann, Christos Sakaridis, Tim Brödermann, Luc Van Gool | 2023-03-09T11:48:29Z | http://arxiv.org/abs/2303.05194v3 | # Contrastive Model Adaptation for Cross-Condition Robustness
###### Abstract
Standard unsupervised domain adaptation methods adapt models from a source to a target domain using labeled source data and unlabeled target data jointly. In model adaptation, on the other hand, access to the labeled source data is prohibited, i.e., only the source-trained model and unlabeled target data are available. We investigate normal-to-adverse condition model adaptation for semantic segmentation, whereby image-level correspondences are available in the target domain. The target set consists of unlabeled pairs of adverse- and normal-condition street images taken at GPS-matched locations. Our method--CMA--leverages such image pairs to learn condition-invariant features via contrastive learning. In particular, CMA encourages features in the embedding space to be grouped according to their condition-invariant semantic content and not according to the condition under which respective inputs are captured. To obtain accurate cross-domain semantic correspondences, we warp the normal image to the viewpoint of the adverse image and leverage warp-confidence scores to create robust, aggregated features. With this approach, we achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks, such as ACDC and Dark Zurich. We also evaluate CMA on a newly procured adverse-condition generalization benchmark and report favorable results compared to standard unsupervised domain adaptation methods, despite the comparative handicap of CMA due to source data inaccessibility. Code is available at [https://github.com/brdav/cma](https://github.com/brdav/cma).
## 1 Introduction
Adverse visual conditions, such as fog, heavy rain, or snowfall, represent a challenge for autonomous systems expected to navigate "in the wild". To achieve full autonomy, systems require perception algorithms which perform robustly in every condition. However, due to their infrequent occurrence, inclement weather conditions are often underrepresented in common, finely annotated outdoor datasets (_e.g_., BDD100K [45] or Mapillary Vistas [29]). As a result, state-of-the-art recognition methods are biased towards "normal" visual conditions (_i.e_., daytime and clear weather), which causes them to fail for edge cases. Furthermore--in particular for detailed, pixel-level tasks like semantic segmentation--high-quality annotations for adverse-condition images are difficult and expensive to obtain. In fact, they require specialized annotation protocols due to ambiguities arising from aleatoric uncertainty [33]. To circumvent these issues, researchers have investigated unsupervised domain adaptation (UDA) from normal to adverse conditions as an alternative to full supervision [6, 14, 42, 11], where a model is jointly trained on labeled source-domain data and unlabeled target-domain data.
This paper instead targets the more general problem of source-free domain adaptation--also known as _model adaptation_--for semantic segmentation. In model adapta
Figure 1: CMA exploits image-level correspondences to learn condition-invariant features. Two images of the same location (but captured under different visual conditions) are encoded, and the normal-condition image features are warped to get spatially aligned with the adverse-condition image features. Our contrastive loss then creates an embedding space where patches of adverse features (black) are _closer to their corresponding normal patches_ (green) than to other adverse patches (red).
tion, only (i) the model pre-trained on source images and (ii) unlabeled target images are available. This pertains to many real-world use cases, when the labeled source data is proprietary or inaccessible due to privacy concerns. The complete absence of fine ground-truth annotations represents a significant challenge, as the model can easily drift and unlearn important concepts during adaptation. To bolster the adaptation process, we leverage another form of weak supervision, which is far easier and cheaper to collect than pixel-wise semantic annotations. In particular, multiple recent driving datasets--such as RobotCar [27], ACDC [33] and Boreas [3]--traverse the same route several times under varying weather conditions, and provide GPS-matched frames. Each adverse-condition target image can thus be paired with a corresponding _reference_ image depicting roughly the same scene under normal conditions. While also unlabeled, the reference images bridge the domain gap between the source and target domain by overlapping both with the source domain in terms of visual condition and with the target domain in terms of geography and sensor characteristics.
Our proposed method, named Contrastive Model Adaptation (CMA), leverages the reference predictions through a unified embedding space. Assuming the reference and target images are sufficiently aligned, co-located features should be similar between the two--neglecting dynamic objects and slight shifts in static content (_e.g_., missing leaves on a tree). Accordingly, we posit that for a given target feature, its reference feature at the same spatial location should be _closer in the embedding space than most other target features_. An embedding space fulfilling this assumption would effectively eliminate condition-specific information, but simultaneously preserve semantic content. We aim to create such an embedding space through contrastive learning, where dense spatial embeddings of the target image serve as _anchors_ (black patch in Fig. 1). Each anchor is pulled towards a single _positive_, _i.e_., the embedding of the reference image corresponding to the same location (green patch in Fig. 1). Since the pre-trained source model is expected to produce semantic features of higher quality on the reference images than on the target images (see appendix Fig. C-1 for a qualitative comparison), this clustering step helps to correct less reliable anchor semantics. Conversely, the anchor is pushed apart from the _negatives_, which are simply target embeddings at other spatial locations (or from other target images, red patches in Fig. 1), to counteract mode collapse. Through spatial alignment of the reference and target images and custom, confidence-modulated feature aggregation, we create robust embeddings for optimization with our cross-domain contrastive loss.
CMA yields state-of-the-art results for model adaptation on several normal-to-adverse semantic segmentation benchmarks. It even outperforms recent standard UDA methods on these benchmarks, despite its data handicap compared to the latter methods. Attesting to our successful cross-domain embedding alignment, CMA delivers exceptionally _robust_ results, as shown by evaluations on the newly compiled Adverse-Condition Generalization (ACG) benchmark.
## 2 Related Work
**Model adaptation** or source-free domain adaptation [24, 35] lifts the assumption of standard unsupervised domain adaptation [13, 37] that data from the source domain are accessible at adaptation time, which renders the former task more challenging. In the complete absence of labeled data for providing supervision, model adaptation methods for semantic segmentation typically rely on loss-based constraints on the features and/or outputs of the network, which are computed for target images. Such losses promote robustness of the network to missing features [35, 26] or to perturbations of the inputs and features [26], or aim at minimizing entropy in the network outputs [35, 39]. Some works focus primarily on the normalization layers of the involved networks, encouraging consistency of the statistics of these layers across the initial source-trained model and the final adapted model [24] and optimizing channel-level affine transformations of the normalized features with respect to output entropy [39]. Best "upstream" practices for training source models which are required as inputs for model adaptation are explored in [19]. One previous work on model adaptation in semantic segmentation [17] has considered contrastive learning similarly to us. However, that work contrasts features within individual images across the model adaptation cycle, whereas we contrast features across domains due to multiple corresponding views.
**Contrastive learning** is a fundamental unsupervised framework based on instance discrimination for extracting informative representations. Seminal works on contrastive learning include [30], which introduced the widely used InfoNCE loss, and [5], which proposed a simple framework for visual contrastive learning. While such fundamental works focus on the setting of unsupervised pre-training for image classification, there has been a body of recent literature examining contrastive learning for domain adaptive semantic segmentation, by primarily leveraging _class-wise_ contrast. [18, 43, 21] employ partially dense contrast between classes using pixel features as anchors and class-level prototype vectors as positives and negatives. Along similar lines, [43, 22] implement partially dense contrast between classes using pixel features as anchors and estimated class-level _distributions_ as positives and negatives, while [28] use class prototypes both as anchors and as positives/negatives. These approaches are prone to false positive/negative samples which contaminate the contrastive loss due to potential errors in the target-domain pseudo-labels, which are
used both to determine the anchors and to compute the class prototypes that serve as positives and negatives. By contrast, we propose a novel _domain-wise_ contrast, which does not rely on class (pseudo-)labels to construct the contrastive loss and avoids the above issue. Moreover, our cross-domain contrastive loss is _fully_ dense, in the sense that both the anchors and the positives/negatives are densely extracted in patches from the input images. In that regard, our method is closely related to [40], which also uses correspondences between matching views to contrast dense, locally aggregated features--albeit for unsupervised pre-training. Whereas correspondences between views are heuristically determined in [40] through the similarities of backbone features, we have a dedicated, externally pre-trained module for computing correspondences via dense matching. Besides, contrary to [40], we explicitly combat false positive pairs owing to errors in the correspondences by incorporating confidence-guided feature aggregation: when grouping features locally, we weigh them by the confidence associated with the respective correspondences. In addition, low-confidence anchors and positives are filtered out.
**Cross-condition image-level correspondences** are provided by several driving datasets [3, 34, 3, 27] and can be utilized for normal-to-adverse condition domain adaptive semantic segmentation: [20] find sparse, pixel-level correspondences and apply a consistency loss on semantic predictions. Other works establish dense correspondences through two-view geometry [34] or end-to-end dense matching [41, 2]. The dense correspondences are subsequently used to enforce prediction consistency [41] or to fuse cross-condition predictions [34, 2]. While we also use end-to-end dense matching to find correspondences, we instead use contrastive learning to create a condition-invariant, discriminative embedding space.
## 3 Contrastive Model Adaptation (CMA)
The aim of CMA is to fine-tune a given pre-trained semantic segmentation model on unlabeled, adverse-condition images. In contrast to standard unsupervised domain adaptation, access to the labeled source data--with which the model was originally trained--is assumed to be prohibited, _e.g_., due to privacy concerns. In the experimental setup for CMA, we are specifically given (i) a semantic segmentation model, pre-trained on a source dataset recorded under normal conditions, _e.g_., Cityscapes [9], and (ii) a set of unlabeled adverse-condition target samples, _e.g_., from ACDC [33], to whose population we aim to adapt the model. Moreover, (iii) an unlabeled reference image is available for each target image, which depicts approximately the same scene as the adverse-condition target image, albeit under normal visual conditions.
### Architecture Overview
Fig. 2 shows our model adaptation architecture. The pre-trained source-model weights are used to initialize the encoder enc and decoder dec. Since CMA focuses on generating condition-invariant, discriminative encoder features, decoder weights are kept frozen to preserve source-domain knowledge. The encoder enc\({}_{\theta}\) is adapted by three loss functions: entropy minimization (\(\mathcal{L}_{\text{ent}}\), Sec. 3.4), self-training (\(\mathcal{L}_{\text{st}}\), Sec. 3.4), and the proposed cross-domain contrastive (CDC) loss (\(\mathcal{L}_{\text{cdc}}\), Sec. 3.3). We describe the individual modules in detail in the next sections.
Figure 2: Overview of the CMA architecture. (Left) The segmentation network (enc and dec) is initialized with source-pretrained weights, however, access to the source data itself is prohibited. (Right) The model is trained with pairs of target and reference images. In addition to standard entropy minimization (\(\mathcal{L}_{\text{ent}}\)) and self-training (\(\mathcal{L}_{\text{st}}\)), we propose a cross-domain contrastive (CDC) loss (\(\mathcal{L}_{\text{cdc}}\)) to align features across domains. Dense embeddings are extracted from both images through projection heads proj. The CDC loss pulls the target anchors close to corresponding reference embeddings (positives), while pushing them apart from other target embeddings stored in a queue (negatives). Crucially, the positives are obtained through spatial alignment (warp) and robust feature aggregation (see Sec. 3.3). Frozen modules are in orange, trainable modules in blue, and exponential moving average modules in pink. Gradients are only backpropagated through solid arrows.
### Spatial Alignment
Although the reference-target image pairs depict the same scene, their viewpoint can differ substantially as they are only GPS-matched. The resulting correspondence discrepancies can have a detrimental effect when working on pixel-accurate tasks such as semantic segmentation. We therefore densely warp the reference image into the viewpoint of the target image to obtain more accurate matches.
In particular, we choose the existing dense matching network UAWarpC [2] (warp module in Fig. 2), since it provides a confidence score for each displaced pixel, which is an important component of our downstream CDC loss (see Sec. 3.3). UAWarpC was independently trained in a self-supervised way on MegaDepth [23], a large-scale collection of multi-view internet photos. When training CMA, UAWarpC is frozen and warps the reference image features. We tried to either warp the image before feeding it into the encoder \(\textsc{enc}_{\theta}\), or warp the dense feature maps, and found that the latter works better.
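For concreteness, a minimal PyTorch sketch of how reference feature maps can be warped into the target view with a dense flow field and masked by confidence. The tensor layout, flow convention and thresholding here are illustrative assumptions and do not reproduce the actual UAWarpC interface.

```python
import torch
import torch.nn.functional as F

def warp_features(ref_feat, flow, confidence, conf_thresh=0.0):
    """ref_feat: (B, C, H, W) reference features; flow: (B, 2, H, W) pixel offsets
    mapping target coordinates to reference coordinates; confidence: (B, 1, H, W)."""
    B, _, H, W = ref_feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(ref_feat.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                                  # sampling positions
    grid_x = 2.0 * coords[:, 0] / (W - 1) - 1.0                        # normalize to [-1, 1]
    grid_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                       # (B, H, W, 2)
    warped = F.grid_sample(ref_feat, grid, mode="bilinear", align_corners=True)
    valid = (confidence > conf_thresh).float()                          # zero-out invalid pixels
    return warped * valid, confidence * valid
```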
### Cross-Domain Contrastive Loss
The cross-domain contrastive (CDC) loss is the central component of CMA. It incentivizes the encoder to learn features which discriminatively reflect the semantics, but are invariant to the visual condition. To this end, spatial target image features represent _anchors_, which are pulled towards spatially corresponding reference image features--the _positives_. The anchors and positives are assumed to represent similar semantics in distinct visual conditions. Simultaneously, the anchors are contrasted to other target image features--the _negatives_--to prevent mode collapse. Although the CDC loss could in principle be applied to encoder features directly, we first project the dense features to a dedicated 128-dimensional embedding space, as per standard practice [5]. The projection head proj consists of two 1\(\times\)1 convolutions with a ReLU non-linearity in-between. As shown in Fig. 2, the embeddings of the trainable model \(\textsc{proj}_{\theta}\circ\textsc{enc}_{\theta}\) serve as anchors, while--as proposed in [12]--positives and negatives are obtained by an exponential moving average model \(\textsc{proj}_{\textsc{emao}}\circ\textsc{enc}_{\textsc{emao}}\) to improve their consistency. Furthermore, we use a _queue_ to accumulate negatives [12]. This enables the use of a large number of negatives during instance discrimination, which encourages the learning of meaningful representations by making the discrimination more challenging. Finally, the positives are spatially warped to align them with the anchors, as detailed in Sec. 3.2.
Despite the warping, anchors and positives might not always depict the same semantic content, because (i) the warping described in Sec. 3.2 is not exact, _e.g_., street poles and other small or thin objects are often not perfectly aligned, and (ii) dynamic objects such as cars and pedestrians differ between the reference and target images. Such false positives introduce excessive noise into the contrastive loss, which worsens generalization [36]. To mitigate this issue, we use two strategies: patch-level grouping and confidence modulation.
**Patch-Level Grouping.** Inspired by [40], we average-pool spatial embeddings across square patches for anchors, positives, and negatives. However, differently from [40], we directly pool the embeddings, instead of applying pooling earlier in the model. Due to the averaging, larger patches are more forgiving towards small errors in the warping, as well as small semantic discrepancies due to dynamic objects. On the other hand, very large patch sizes do not promote the learning of local discriminative features. The grid size dictating the employed grouping is thus a key hyperparameter; we choose a 7\(\times\)7 grid for square full-height crops of street images. A desirable side-effect of patch-level grouping is a significant reduction in memory and computational cost.
**Confidence Modulation.** The confidence scores provided by the warp module can be leveraged to refine and filter patch embeddings for anchors and positives. We propose to use _weighted_ average pooling to create patch embeddings, where each pixel is weighted by its confidence score. Accordingly, low-confidence correspondences within a single patch (_e.g_., resulting from pixels of a dynamic object) contribute less to the aggregated patch embedding. In addition, patches with an average confidence of below 0.2 are deemed false positives and discarded altogether.
To formalize those steps, we define the set \(\mathcal{N}_{i}\) to comprise all indices of pixels in the pooling receptive field of patch \(i\). \(\mathbf{z}_{j}^{a},\mathbf{z}_{j}^{p}\in\mathbb{R}^{128}\) are the proj head outputs at pixel index \(j\) for anchor and (warped) positive respectively. \(c_{j}\in[0,1]\) is the corresponding warp confidence score (which is identical for anchor and positive). Importantly, \(c_{j}=0\) for pixels without a valid correspondence. Unnormalized patch embeddings for anchors \(\mathbf{\tilde{a}}_{i}\) and positives \(\mathbf{\tilde{p}}_{i}\) are computed through the weighted sums
\[\mathbf{\tilde{a}}_{i}=\sum_{j\in\mathcal{N}_{i}}c_{j}\mathbf{z}_{j}^{a},\quad \mathbf{\tilde{p}}_{i}=\sum_{j\in\mathcal{N}_{i}}c_{j}\mathbf{z}_{j}^{p}. \tag{1}\]
The embeddings are subsequently L2-normalized to obtain \(\mathbf{a}_{i}\) and \(\mathbf{p}_{i}\). Meanwhile, negative embeddings \(\mathbf{n}_{j}\) are obtained through simple average pooling, followed by L2-normalization. To create an embedding space where, for each patch \(i\), the anchor \(\mathbf{a}_{i}\) is pulled towards the positive \(\mathbf{p}_{i}\) and pushed away from \(M\) negatives \(\mathbf{n}_{j}\) (sourced from the queue of length \(M\)), we use the InfoNCE [30] loss:
\[\mathcal{L}_{\text{cdc},i}=-\log\frac{\exp\left(\mathbf{a}_{i}^{T}\mathbf{p}_{i}/\tau\right)}{\exp\left(\mathbf{a}_{i}^{T}\mathbf{p}_{i}/\tau\right)+\sum_{j=1}^{M}\exp\left(\mathbf{a}_{i}^{T}\mathbf{n}_{j}/\tau\right)}. \tag{2}\]
\(\tau\) is a temperature hyperparameter which scales the sensitivity of the loss function. Finally, when aggregating the
patch-wise losses, low-confidence patches are discarded:
\[\mathcal{L}_{\text{cdc}}=\frac{\sum_{i}\mathcal{L}_{\text{cdc},i}[\bar{c}_{i}\geq 0.2]}{\sum_{i}[\bar{c}_{i}\geq 0.2]}, \tag{3}\]
where \([\cdot]\) denotes the Iverson bracket and \(\bar{c}_{i}\) is the average-pooled confidence of patch \(i\):
\[\bar{c}_{i}=\frac{1}{|\mathcal{N}_{i}|}\sum_{j\in\mathcal{N}_{i}}c_{j}. \tag{4}\]
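To make the aggregation and the loss concrete, the following is a minimal PyTorch-style sketch of Eqs. (1)-(4), not the authors' implementation: it assumes dense anchor and warped-positive embeddings of shape (C, H, W) produced by the trainable and EMA models respectively, a per-pixel warp confidence map of shape (H, W), a 7\(\times\)7 patch grid, and a queue of L2-normalized negative patch embeddings of shape (M, C); the temperature default is a placeholder.

```python
import torch
import torch.nn.functional as F

def cdc_loss(z_a, z_p, conf, neg, grid=7, tau=0.1, thresh=0.2):
    """Cross-domain contrastive loss, sketch of Eqs. (1)-(4).

    z_a, z_p : (C, H, W) anchor / warped-positive embeddings
    conf     : (H, W)    warp confidence in [0, 1] (0 where no correspondence exists)
    neg      : (M, C)    L2-normalized negative patch embeddings from the queue
    """
    w = conf.unsqueeze(0)  # (1, H, W), broadcast over channels
    # Confidence-weighted patch aggregation (Eq. 1); average pooling of c_j * z_j is
    # proportional to the weighted sum, and the scale cancels after L2 normalization.
    a = F.adaptive_avg_pool2d((z_a * w).unsqueeze(0), grid)  # (1, C, grid, grid)
    p = F.adaptive_avg_pool2d((z_p * w).unsqueeze(0), grid)
    a = F.normalize(a.flatten(2).squeeze(0).T, dim=1)        # (grid*grid, C)
    p = F.normalize(p.flatten(2).squeeze(0).T, dim=1)
    # Average patch confidence (Eq. 4), used to discard low-confidence patches.
    c_bar = F.adaptive_avg_pool2d(w.unsqueeze(0), grid).flatten()
    keep = c_bar >= thresh
    # InfoNCE per patch (Eq. 2): the matching patch is the positive (class index 0),
    # all queue entries are negatives.
    pos = (a * p).sum(dim=1, keepdim=True) / tau             # (grid*grid, 1)
    logits = torch.cat([pos, a @ neg.T / tau], dim=1)        # (grid*grid, 1 + M)
    targets = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    loss_i = F.cross_entropy(logits, targets, reduction="none")
    # Aggregate only over confident patches (Eq. 3).
    return loss_i[keep].mean() if keep.any() else logits.new_zeros(())
```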
The effect of patch-level grouping and confidence modulation is illustrated in Fig. 3. In the two center columns, each pixel is whited out according to its confidence. Within each patch the "confident", visible pixels are subsequently aggregated. Orange patches are eliminated due to their overall low confidence. Notice that the remaining features correspond well between the two images, despite the initial differences in viewpoint, dynamic objects, occlusions, _etc._ In fact, the warping confidence is rather conservative.
### Complete Training Loss
Besides the proposed CDC loss discussed in Sec. 3.3, we complement model adaptation with two commonly used loss functions.
**Self-Training.** State-of-the-art unsupervised domain adaptation methods for semantic segmentation predominantly rely on self-training [15, 16, 47]. We follow the pseudo-labeling strategy of CBST [48] to create class-balanced pseudo-labels from confident predictions. The pseudo-labels are created once before training by the source model, to inject regularization to the source. We retain the most confident 20% of pixels and all other pixels are ignored. During model adaptation, we use a cross-entropy loss \(\mathcal{L}_{\text{st}}\) for self-training.
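As an illustration only, a simplified class-wise confidence filtering in the spirit of CBST could be written as follows; the 20% keep fraction follows the text above, while the function itself is an assumption rather than the exact procedure of [48].

```python
import torch

@torch.no_grad()
def make_pseudo_labels(probs, keep_fraction=0.2, ignore_index=255):
    """Keep only the most confident pixels per class as pseudo-labels (simplified sketch)."""
    conf, labels = probs.max(dim=1)              # (N, H, W) confidence and argmax class
    out = torch.full_like(labels, ignore_index)  # everything else is ignored during training
    for c in labels.unique():
        mask = labels == c
        thr = torch.quantile(conf[mask].float(), 1.0 - keep_fraction)
        out[mask & (conf >= thr)] = c
    return out
```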
**Entropy Minimization.** We use entropy minimization as a regularizer during training. \(\mathcal{L}_{\text{ent}}\) is the mean normalized entropy of the predicted class-probabilities over all pixels.
Finally, the complete training loss consists of a weighted sum of the three losses:
\[\mathcal{L}_{\text{tot}}=\mathcal{L}_{\text{st}}+\lambda_{\text{ent}}\mathcal{ L}_{\text{ent}}+\lambda_{\text{cdc}}\mathcal{L}_{\text{cdc}}. \tag{5}\]
\(\lambda_{\text{ent}}\) and \(\lambda_{\text{cdc}}\) are hyperparameters which determine the relative importance given to the individual losses.
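As a rough sketch of Eq. (5), the snippet below assumes per-pixel segmentation logits, a pseudo-label map with ignored pixels set to 255, and a CDC term computed as in the sketch of Sec. 3.3; the loss weights here are placeholders, not the values used in the paper.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, pseudo, l_cdc, lam_ent=1.0, lam_cdc=1.0, ignore_index=255):
    """Weighted combination of self-training, entropy, and CDC losses (Eq. 5), sketch only."""
    # Self-training: cross-entropy against the fixed pseudo-labels, skipping ignored pixels.
    l_st = F.cross_entropy(logits, pseudo, ignore_index=ignore_index)
    # Entropy minimization: mean normalized entropy of the predicted class probabilities.
    prob = logits.softmax(dim=1)
    n_classes = logits.shape[1]
    ent = -(prob * (prob + 1e-8).log()).sum(dim=1) / torch.log(torch.tensor(float(n_classes)))
    return l_st + lam_ent * ent.mean() + lam_cdc * l_cdc
```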
## 4 Experiments
In this section, we present extensive experimental results, comparing CMA to state-of-the-art model adaptation (Sec. 4.2) and standard unsupervised domain adaptation (Sec. 4.3) methods. Moreover, we analyze generalization performance (Sec. 4.4) and ablate various components (Sec. 4.5) of the method.
### Setup
**Datasets.** We use Cityscapes [9] as a source dataset, and various target datasets: ACDC [33] (train/val/test: 1600/406/2000 images), Dark Zurich [34] (train/val/test: 2416/50/151 images), RobotCar Correspondence [20, 27] (train/val/test: 6511/27/27 images), and CMU Correspondence [1, 20] (train/val/test: 28766/25/33 images). All target datasets contain corresponding pairs of normal- and adverse-condition street images in the training set. Adverse conditions vary between datasets: ACDC contains images in fog, night, rain, and snow; Dark Zurich consists of night images; RobotCar and CMU contain variable conditions as well as seasonal changes. Unless otherwise stated, we report performance on the test sets for all datasets.
**ACG Benchmark.** To assess the generalization performance of trained adverse-condition models, we present an adverse-condition generalization (ACG) benchmark consisting of diverse samples from several public street-scene segmentation datasets. We inspected all labeled images of WildDash2 [46], BDD100K [45], Foggy Zurich [10, 31], and Foggy Driving [32], selected adverse-condition samples (featuring fog, night, rain, snow, or a combination of those), and manually verified the quality of each corresponding ground truth. Samples with evident ground-truth inaccuracies were eliminated. For Foggy Zurich and Foggy Driving, we also meticulously cross-checked every image for overlap with the ACDC dataset, as all three datasets were recorded in the same region. We refer to Sec. B of the appendix for more details about the sample selection process and the resulting dataset statistics. ACG
Figure 3: Visualization of matched patches. The second column shows the warped reference image, with low-confident regions whited out. The drawn grid illustrates the patch-level grouping. Within each patch, features are aggregated proportionally to their confidence. Patches with low average confidence are discarded altogether, as shown by orange shading.
consists of a highly _diverse_ set of 922 adverse-condition driving images from various geographical regions in Europe and North America. We additionally divide ACG into 121 fog, 225 rain, 276 snow, and 300 night images, to allow for condition-wise evaluation. Importantly, ACG-night also includes adverse-weather images, _e.g._, snowy night-time scenes. The curated list of ACG image filenames is publicly available1.
Footnote 1: [https://data.vision.ee.ethz.ch/brdavid/cma/ACG.zip](https://data.vision.ee.ethz.ch/brdavid/cma/ACG.zip)
**Architectures and Hyperparameters.** We conduct the bulk of our experiments using the state-of-the-art SegFormer [44] architecture, however, we also include experiments using DeepLabv2 [4]. For all datasets and architectures, we train CMA for 10k iterations, where for the first 2.5k iterations we stop gradient flow from \(\textsc{proj}_{\theta}\) back to \(\textsc{enc}_{\theta}\) to "warm up" \(\textsc{proj}_{\theta}\). Since \(\textsc{proj}_{\theta}\) is the only randomly initialized module, we use a 10x learning rate for it w.r.t. \(\textsc{enc}_{\theta}\). We find that a high momentum of 0.9999 works best for the exponential moving average components, presumably preserving source knowledge better during the adaptation process. All model were trained on a single TITAN RTX GPU. More details on training configurations are provided in Sec. A of the appendix.
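For completeness, the exponential moving average update applied to the momentum encoder and projection head can be sketched as follows; the momentum of 0.9999 is the value reported above, and the helper (which updates parameters only) is illustrative.

```python
import torch

@torch.no_grad()
def ema_update(ema_model, model, momentum=0.9999):
    """Move the EMA ("momentum") parameters slowly towards the trainable parameters."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(momentum).add_(p, alpha=1.0 - momentum)
```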
### Comparison to Model Adaptation Methods
We benchmark CMA against the state-of-the-art model adaptation methods TENT [39], HCL [17], and URMA [35] on Cityscapes\(\rightarrow\)ACDC and report the ACDC test set scores in Table 1. In this comparison, CMA is the only method using reference images. Using a SegFormer architecture, CMA sets the new state of the art for model adaptation from Cityscapes to ACDC with an mIoU of 69.1%, substantially outperforming all other methods. Note that CMA outperforms competing methods both on static and dynamic classes, even though our contrastive loss does not explicitly target dynamic objects. We hypothesize that the universal performance gain is enabled through the learned invariance to global condition-level variations, _e.g_., with respect to changes in scene illumination or reflectance. For DeepLabv2, CMA obtains 50.4% mIoU, still outperforming other methods, although in this case all compared methods bring a substantial improvement over the source model. Due to the large absolute performance difference, we conduct the rest of the experiments in this paper on the SegFormer architecture. We show qualitative segmentation results for the different SegFormer-based methods on three ACDC validation set images in Fig. 4. CMA predicts higher-quality segmentation maps than other methods, _e.g_., on the snowy sidewalk in the top image, the fence in the middle image, or the wall on the right in the bottom image.

Table 1: Comparison to the state of the art in model adaptation on Cityscapes\(\rightarrow\)ACDC, reporting IoU on the ACDC test set.
We further evaluate CMA on three other adaptation settings: Cityscapes\(\rightarrow\)Dark Zurich, Cityscapes\(\rightarrow\)RobotCar, and Cityscapes\(\rightarrow\)CMU in Table 2. For all investigated scenarios, CMA significantly outperforms other methods.
### Comparison to Standard UDA Methods
Table 3 shows a comparison of CMA to standard unsupervised domain adaptation (UDA) methods, which use the labeled source data during the adaptation process. Standard UDA methods are thus not susceptible to forgetting source knowledge. Despite this handicap, CMA compares favorably to DAFormer [15], SePiCo [43] and HRDA [16], even outperforming them on Cityscapes\(\rightarrow\)ACDC, while only slightly falling behind on the more challenging Cityscapes\(\rightarrow\)Dark Zurich.
### Generalization and Robustness
To evaluate the generalization performance of trained adverse-condition models, we test the Cityscapes\(\rightarrow\)ACDC models on our diverse ACG benchmark. We report the condition-wise mIoU for fog, night, rain, and snow, as well as the overall mIoU, in Table 4. CMA achieves the best generalization performance compared to other model adaptation and UDA methods, with an mIoU of 51.3% over all ACG samples. Interestingly, CMA performs exceptionally well on the most challenging ACG-night split, which also contains combinations of conditions (_e.g_., night and rain), which are absent from the training set. This corroborates that CMA learns highly _robust_ representations through our proposed contrastive cross-domain feature alignment.
### Ablation Study and Further Analysis
All the numbers in this section are from the ACDC validation set. We report the mean performance over 3 repeated runs for each experiment, to reduce variance.
**Ablation Study.** Table 5 shows the ablation study for several important CMA components. Row 1 represents CMA without the CDC loss, solely relying on entropy minimization and self-training for adaptation. A comparison of row 1 to the final model in row 6 reveals the large performance increase of 7.1% mIoU owing to the CDC loss. In row 2 the contrastive loss is added, but the embeddings are obtained by global average pooling over the entire image. Nevertheless, this improves performance substantially by 4.6% mIoU. Next, we add patch-level grouping on a 7\(\times\)7 grid combined with reference image warping in row 4, which brings a 1.2% mIoU improvement over the global embeddings of row 2. Interestingly, using patches without warping decreases the performance, as shown in row 3. This can be explained by the excessive noise introduced to the contrastive loss due to patch misalignment between reference and target. In row 5, we show that adding our confidence modulation to patch formation leads to another 1% mIoU increase. Finally, row 6 refers to the complete model, which involves estimating positives and negatives through an exponential moving average model, instead of simply using the source model and a random projection head.
**Effect of Reference Images.** Compared to other model adaptation methods, CMA uses extra reference images through the CDC loss. We therefore train two baseline models--CMA without the CDC loss and URMA--by including the reference images in two ways: (i) mixing the reference and target images randomly, and adapting to the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Source-Free} & \multicolumn{5}{c}{ACG mIoU\(\uparrow\)} \\ \cline{3-7} & & fog & night & rain & snow & all \\ \hline TENT [39] & ✓ & 52.6 & 27.7 & 47.5 & 41.1 & 40.0 \\ HCL [17] & ✓ & 54.2 & 28.3 & 48.2 & 42.4 & 40.8 \\ URMA [35] & ✓ & 54.1 & 31.0 & 51.9 & 45.5 & 44.4 \\ DAFormer [15] & & 52.6 & 21.5 & 47.5 & 33.6 & 40.1 \\ SePiCo [43] & & 53.9 & 20.6 & 46.3 & 36.1 & 38.6 \\ HRDA [16] & & **60.0** & 27.1 & 56.2 & 43.3 & 48.9 \\ CMA & ✓ & 59.7 & **40.0** & **59.6** & **52.2** & **51.3** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Generalization performance of adapted models on the ACG benchmark. All models were trained on Cityscapes\(\rightarrow\)ACDC.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{mIoU\(\uparrow\)} \\ \cline{2-4} & Dark Zurich [34] & RobotCar [20, 27] & CMU [1, 20] \\ \hline Source model & 41.7 & 50.0 & 80.0 \\ TENT [39] & 42.8 & 50.1 & 78.9 \\ HCL [17] & 42.7 & 50.1 & 80.2 \\ URMA [35] & 49.3 & 51.6 & 82.8 \\ CMA & **53.6** & **54.3** & **92.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison to the state of the art in model adaptation on Cityscapes\(\rightarrow\)Dark Zurich, Cityscapes\(\rightarrow\)RobotCar, and Cityscapes\(\rightarrow\)CMU. All models use a SegFormer architecture.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Source-Free} & \multicolumn{3}{c}{mIoU\(\uparrow\)} \\ \cline{3-4} & & ACDC [33] & Dark Zurich [34] \\ \hline DAFormer [15] & & 55.4 & 53.8 \\ SePiCo [43] & & 59.1 & 54.2 \\ HRDA [16] & & 68.0 & **55.9** \\ CMA & ✓ & **69.1** & 53.6 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of CMA to state-of-the-art standard UDA methods. Adaptation from Cityscapes as source dataset.
combination of both, and (ii) adapting in a curriculum, first adapting from the source to the reference domain, and then using the same method to continue adapting from the reference to the target domain. Table 6 shows that our contrastive approach clearly outperforms both of these simple strategies. The performance benefits of our approach are thus not attainable by naively introducing reference images.
**Alternative Contrastive Loss.** In addition to the mechanisms discussed in Sec. 3.3, we explored swapping the InfoNCE loss for more robust alternatives, in an effort to reduce detrimental effects of false positives or false negatives. We picked the debiased contrastive loss of [8] to account for false negatives, and the robust InfoNCE (RINCE) [7] loss to account for false positives. As shown in Table 7, neither alternative shows significant improvements over InfoNCE. Even though RINCE performs slightly better overall, it introduces extra complexity, which prompts us to prefer the simpler InfoNCE loss. The negligible benefit of RINCE implies that patch-level grouping and confidence modulation already effectively mitigate the false positive rate.
**Hyperparameter Sensitivity.** Fig. 5 shows the sensitivity of CMA performance to changes in two central, method-specific hyperparameters: the embedding grid size, and the InfoNCE temperature \(\tau\). Note that the performance is quite insensitive to either hyperparameter.
**Embedding Space Visualization.** The t-SNE [38] visualizations in Fig. 6 show semantic features extracted from a corresponding pair of adverse- and normal-condition images of ACDC. The features are color-coded by ground truth class, whereby adverse-condition features are plotted sharp and normal-condition features blurry (the "ground-truth" for the normal-condition image was obtained through pseudo-labeling). For clarity only road, sidewalk, vegetation, and sky features are plotted. The left plot shows the features of CMA without the CDC loss. Note that sky (blue) and sidewalk (pink) features are scattered. By contrast, with the CDC loss, the features of these classes are correctly grouped together across conditions in the right plot.
## 5 Conclusion
We present CMA, a model adaptation method for cross-condition semantic segmentation. CMA leverages image-level correspondences to learn condition-invariant features through a contrastive loss. This fosters a shared embedding space, where adverse-condition image features are clustered with semantically corresponding normal-condition features. As experimentally shown, this leads to large performance
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & Target & Reference & Type & mIoU\(\uparrow\) \\ \hline Source model [44] & & & - & 56.6 \\ \hline URMA [35] & ✓ & & - & 63.2 \\ URMA [35] & ✓ & ✓ & mixed & 62.9 \\ URMA [35] & ✓ & ✓ & curriculum & 64.1 \\ \hline CMA w/o CDC loss & ✓ & & - & 60.1 \\ CMA w/o CDC loss & ✓ & ✓ & mixed & 60.3 \\ CMA w/o CDC loss & ✓ & ✓ & curriculum & 61.5 \\ \hline CMA & ✓ & ✓ & contrastive & 67.2 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of different strategies for involving reference images in model adaptation, reporting ACDC validation scores.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Debiased [8] & RINCE [7] & InfoNCE [30] \\ \hline CMA & 66.3 & 67.4 & 67.2 \\ \hline \hline \end{tabular}
\end{table}
Table 7: ACDC validation performance of alternative contrastive loss functions.
Figure 5: Sensitivity of CMA to the embedding grid size and the InfoNCE temperature \(\tau\), evaluated on the ACDC validation set.
Figure 6: t-SNE plots showing semantic features of a pair of corresponding ACDC validation set images (adverse\(\leftrightarrow\)sharp, normal\(\leftrightarrow\)blurry), for a CMA model trained without (left) and with (right) the CDC loss. For clarity only four classes are shown.
gains in normal-to-adverse model adaptation, with CMA setting the new state of the art on several benchmarks.
**Acknowledgment.** This work was supported by the ETH Future Computing Laboratory (EFCL), financed by a donation from Huawei Technologies.
|
2305.16500 | Data-driven Discovery of The Quadrotor Equations of Motion Via Sparse
Identification of Nonlinear Dynamics | Dynamical systems provide a mathematical framework for understanding complex
physical phenomena. The mathematical formulation of these systems plays a
crucial role in numerous applications; however, it often proves to be quite
intricate. Fortunately, data can be readily available through sensor
measurements or numerical simulations. In this study, we employ the Sparse
Identification of Nonlinear Dynamics (SINDy) algorithm to extract a
mathematical model solely from data. The influence of the hyperparameter
$\lambda$ on the sparsity of the identified dynamics is discussed.
Additionally, we investigate the impact of data size and the time step between
snapshots on the discovered model. To serve as a data source, a ground truth
mathematical model was derived from the first principals, we focus on modeling
the dynamics of a generic 6 Degrees of Freedom (DOF) quadrotor. For the scope
of this initial manuscript and for simplicity and algorithm validation
purposes, we specifically consider a sub-case of the 6 DOF system for
simulation, restricting the quadrotor's motion to a 2-dimensional plane (i.e. 3
DOF). To evaluate the efficacy of the SINDy algorithm, we simulate three cases
employing a Proportional-Derivative (PD) controller for the 3 DOF case
including different trajectories. The performance of SINDy model is assessed
through the evaluation of absolute error metrics and root mean squared error
(RMSE). Interestingly, the predicted states exhibit at most a RMSE of order of
magnitude approximately $10^{-4}$, manifestation of the algorithm's
effectiveness. This research highlights the application of the SINDy algorithm
in extracting the quadrotor mathematical models from data. | Zeyad M. Manaa, Mohammed R. Elbalshy, Ayman M. Abdallah | 2023-05-25T21:59:54Z | http://arxiv.org/abs/2305.16500v2 | Data-driven Discovery of The Quadrotor Equations of Motion Via Sparse Identification of Nonlinear Dynamics
###### Abstract
**Dynamical systems provide a mathematical framework for understanding complex physical phenomena. The mathematical formulation of these systems plays a crucial role in numerous applications; however, it often proves to be quite intricate. Fortunately, data can be readily available through sensor measurements or numerical simulations. In this study, we employ the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm to extract a mathematical model solely from data. The influence of the hyperparameter \(\lambda\) on the sparsity of the identified dynamics is discussed. Additionally, we investigate the impact of data size and the time step between snapshots on the discovered model. To serve as a data source, a ground truth mathematical model was derived from the first principals, we focus on modeling the dynamics of a generic 6 Degrees of Freedom (DOF) quadrotor. For the scope of this initial manuscript and for simplicity and algorithm validation purposes, we specifically consider a sub-case of the 6 DOF system for simulation, restricting the quadrotor's motion to a 2-dimensional plane (i.e. 3 DOF). To evaluate the efficacy of the SINDy algorithm, we simulate three cases employing a Proportional-Derivative (PD) controller for the 3 DOF case including different trajectories. The performance of SINDy model is assessed through the evaluation of absolute error metrics and root mean squared error (RMSE). Interestingly, the predicted states exhibit at most a RMSE of order of magnitude approximately \(10^{-4}\), manifestation of the algorithm's effectiveness. This research highlights the application of the SINDy algorithm in extracting the quadrotor mathematical models from data. We also try to investigate the effect of noisy measurements on the algorithm efficacy. The successful modeling of the 3 DOF quadrotor dynamics demonstrates the algorithm's potential, while the evaluation metrics validate its performance, thereby clearing the way for more applications in the realm of unmanned aerial vehicles.**
## I. Nomenclature
\begin{tabular}{l l l} \(\mathcal{I}\) & = & Inertial frame of reference \\ \(\mathcal{B}\) & = & Body frame of reference \\ \(\mathbb{R}^{n}\) & = & Real n-dimensional vector \\ \(\mathcal{R}\) & = & Attitude/Rotational matrix/Orientation of a rigid-body, \(\mathcal{R}\in\mathbb{SO}(3)\) \\ \(\phi\) & = & Rotation angle about \(\boldsymbol{x}_{B}\) \\ \(\theta\) & = & Rotation angle about \(\boldsymbol{y}_{B}\) \\ \(\psi\) & = & Rotation angle about \(\boldsymbol{z}_{B}\) \\ eye(n) & = & Identity matrix with a dimensions of \(n\times n\) \\ \(\mathbb{R}^{n\times m}\) & = & Real n-by-m dimensional matrix \\ \(\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}\) & = & Mapping from \(\mathbb{R}^{n}\) space to \(\mathbb{R}^{m}\) space \\ \(\boldsymbol{\Pi}\) & = & Candidate functions library \\ \(\boldsymbol{\Omega}\) & = & Sparsity prompting matrix \\ \(\lambda\) & = & The sparsity-promoting hyperparameter \\ \(\boldsymbol{x}\otimes\boldsymbol{y}\) & = & The vector of all possible product combinations of all the elements in \(\boldsymbol{x}\) and \(\boldsymbol{y}\) \\ \(\dot{\boldsymbol{\varrho}}\) & = & The time derivative of the state \(\varrho\) \\ \(\dot{\boldsymbol{\varrho}}\) & = & The prediction of the state \(\varrho\) \\ \end{tabular}
## II. Introduction
the data-driven model to account for sudden changes in the system dynamics motivated by [40, 41]. Also, Kaiser et al. [3] proposed an integration between SINDy and MPC. They found that the SINDy method needs less data compared to other popular machine learning approaches, such as neural networks. This reduces the likelihood of over-fitting. In contrast to [39], when combined with MPC, the SINDy algorithm yields models that are both computationally feasible and precise, even when trained on limited data.
Motivated by this, the SINDy algorithm is utilized here to discover the EOM of the quadrotor, first using numerical simulation data; in a later phase, experimental data may be used. Moreover, the effect of SINDy's sparsity hyperparameter on model discovery is examined, along with the effect of the sampling time of the collected data, whether obtained numerically or experimentally.
## 3 Modelling and Simulation
### Coordinate Systems
The coordinate systems for the quadrotor are shown in Fig. 1. The inertial reference frame, \(\mathcal{I}\), is defined by
\[\mathcal{I}=\{\mathcal{O},\boldsymbol{x}_{\mathcal{I}},\boldsymbol{y}_{ \mathcal{I}},\boldsymbol{z}_{\mathcal{I}}\} \tag{1}\]
where, \(\mathcal{O}\) is the frame's origin, \(\boldsymbol{x}_{\mathcal{I}}\), \(\boldsymbol{y}_{\mathcal{I}}\), \(\boldsymbol{z}_{\mathcal{I}}\) are unit vectors defining the frame axis. The body frame, \(\mathcal{B}\), is fixed to the center of mass of the quadrotor defined by
\[\mathcal{B}=\{\mathcal{C},\boldsymbol{x}_{\mathcal{B}},\boldsymbol{y}_{ \mathcal{B}},\boldsymbol{z}_{\mathcal{B}}\} \tag{2}\]
where, \(\mathcal{C}\) is the frame's origin, \(\boldsymbol{x}_{\mathcal{B}},\boldsymbol{y}_{\mathcal{B}}\), \(\boldsymbol{z}_{\mathcal{B}}\) are unit vectors defining the frame axis.
We may obtain the transformation matrix from the body frame to the inertial frame by using the \(\psi-\theta-\phi\) rotation sequence, where \(\psi\) denotes the yaw angle, \(\theta\) the pitch angle, and \(\phi\) the roll angle.
\[{}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}=\left[\begin{array}{ccc}c\psi c\theta-s\phi s\psi s\theta&-c\phi s\psi&c\psi s\theta+c\theta s\phi s\psi\\ c\theta s\psi+c\psi s\phi s\theta&c\phi c\psi&s\psi s\theta-c\psi c\theta s\phi\\ -c\phi s\theta&s\phi&c\phi c\theta\end{array}\right] \tag{3}\]
Given that this matrix is orthonormal,
\[{}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}{}^{\mathcal{I}}\mathcal{R}_{ \mathcal{B}}^{-1}={}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}{}^{\mathcal{I}} \mathcal{R}_{\mathcal{B}}^{\mathcal{T}}=\mathtt{eye}(3) \tag{4}\]
where \(\mathtt{eye}(3)\) is a \(3\times 3\) identity matrix.
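For reference, Eq. (3) and the orthonormality property of Eq. (4) can be checked numerically with the short sketch below (plain NumPy; the function name and the test angles are illustrative):

```python
import numpy as np

def rot_body_to_inertial(phi, theta, psi):
    """Rotation matrix of Eq. (3) for the psi-theta-phi sequence (body -> inertial)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cps * cth - sph * sps * sth, -cph * sps, cps * sth + cth * sph * sps],
        [cth * sps + cps * sph * sth,  cph * cps, sps * sth - cps * cth * sph],
        [-cph * sth,                   sph,       cph * cth],
    ])

# Orthonormality check of Eq. (4): R @ R.T should equal eye(3) up to round-off.
R = rot_body_to_inertial(0.1, -0.2, 0.3)
assert np.allclose(R @ R.T, np.eye(3))
```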
### Quadrotor kinetics
We assume the quadrotor is a rigid body with 6 DOF, having mass \(m\) and inertia matrix \(\boldsymbol{J}=\mathtt{diag}(J_{x},J_{y},J_{z})\).
#### 3.1 Newton's Equations of Motion
Let \(\boldsymbol{r}\) denote the position of \(\mathcal{C}\) in \(\mathcal{I}\). Applying Newton's law to the translational motion
\[m\frac{d^{2}\boldsymbol{r}^{I}}{dt^{2}}=\sum\boldsymbol{F} \tag{5}\]
Figure 1: Quadrotor with the body and the inertial reference frames.
The equation governing the motion of \(\mathcal{C}\)
\[m\frac{d^{2}\mathbf{r}^{\bar{I}}}{dt^{2}}=m\left(\frac{d^{2}\mathbf{r}^{\bar{B}}}{dt^{2}}+ \mathbf{\omega}^{\bar{\mathcal{B}}/\bar{I}}\times\frac{d\mathbf{r}^{\bar{B}}}{dt}\right)= \sum\mathbf{F} \tag{6}\]
\[\sum\mathbf{F}=\mathbf{F}_{g}+\mathbf{F}_{\mathrm{th}}=\begin{bmatrix}0\\ 0\\ -mg\end{bmatrix}+{}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}\begin{bmatrix}0\\ 0\\ \sum_{n=1}^{4}T_{n}\end{bmatrix} \tag{7}\]
where \(T_{n}\) is the thrust force of motor \(n\).
As the control actuations are applied in the body frame, Eq. 6 is expressed in body coordinates and becomes
\[\frac{d^{2}\mathbf{r}^{\bar{B}}}{dt^{2}}=\frac{1}{m}\sum\mathbf{F}-\mathbf{\omega}^{\bar{ \mathcal{B}}/\bar{I}}\times\frac{d\mathbf{r}^{\bar{B}}}{dt} \tag{8}\]
#### 3.2 Euler's Equations of Motion
Applying Newton's law to the rotational motion
\[\frac{d\mathbf{h}^{\bar{I}}}{dt}=\frac{d\mathbf{h}^{\bar{B}}}{dt}+\mathbf{ \omega}^{\bar{\mathcal{B}}/\bar{I}}\times h=\sum\mathbf{M} \tag{9}\] \[\dot{\mathbf{\omega}}=\mathbf{J}^{-1}\left(\mathbf{M}-\mathbf{\omega}\times\mathbf{J }\mathbf{\omega}\right) \tag{10}\]
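Eq. (10) translates directly into code; the following NumPy sketch assumes the angular velocity \(\boldsymbol{\omega}\), the applied moment \(\mathbf{M}\), and the inertia matrix \(\mathbf{J}\) are supplied as arrays, and the function name is illustrative:

```python
import numpy as np

def omega_dot(omega, M, J):
    """Angular acceleration from Eq. (10): omega_dot = J^{-1} (M - omega x (J @ omega))."""
    return np.linalg.solve(J, M - np.cross(omega, J @ omega))
```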
### Quadrotor kinematics
The relation between quadrotor velocity in the body frame and position in the inertial frame can be expressed as
\[\frac{d\mathbf{r}^{\bar{I}}}{dt}={}^{\mathcal{I}}\mathcal{R}_{\mathcal{B}}\frac{d\mathbf{r}^{\bar{B}}}{dt} \tag{11}\]
The relation between the quadrotor components of angular velocity in the body frame and the roll, pitch, and yaw derivatives in the inertial frame can be expressed as
\[\mathbf{\omega}=\mathcal{T}\dot{\mathbf{a}} \tag{12}\]
which can be represented in matrix form as
\[\begin{bmatrix}p\\ q\\ r\end{bmatrix}=\begin{bmatrix}c\theta&0&-c\phi s\theta\\ 0&1&s\phi\\ s\theta&0&c\phi c\theta\end{bmatrix}\begin{bmatrix}\dot{\phi}\\ \dot{\theta}\\ \dot{\psi}\end{bmatrix} \tag{13}\]
### Control inputs
The first control actuation input is defined as the sum of the forces from each of the four rotors. So,
\[u_{1}=\left[T_{1}+T_{2}+T_{3}+T_{4}\right] \tag{14}\]
As shown in Fig. 1, rotors one and three both rotate in the negative \(z_{\bar{B}}\) direction, while rotors two and four both rotate in the positive \(z_{\bar{B}}\) direction. Since the moment exerted on the quadrotor opposes the direction of blade rotation, the moments \(M_{1}\) and \(M_{3}\) are directed along the positive \(z_{\bar{B}}\) direction, while the moments \(M_{2}\) and \(M_{4}\) are directed along the negative \(z_{\bar{B}}\) direction. Recalling Eq. 10, it can be rewritten as
\[\mathbf{J}\begin{bmatrix}\dot{p}\\ \dot{q}\\ \dot{r}\end{bmatrix}=\begin{bmatrix}L(T_{2}-T_{4})\\ L(T_{3}-T_{1})\\ M_{1}-M_{2}+M_{3}-M_{4}\end{bmatrix}-\begin{bmatrix}p\\ q\\ r\end{bmatrix}\times\mathbf{J}\begin{bmatrix}p\\ q\\ r\end{bmatrix} \tag{15}\]
From Eq. 14 and Eq. 15, the total control actuation inputs, collected in the vector \(\mathbf{U}=[u_{1},u_{2},u_{3},u_{4}]^{\top}\), are given by
\[\mathbf{U}=\begin{bmatrix}u_{1}\\ u_{2}\\ u_{3}\\ u_{4}\end{bmatrix}=\begin{bmatrix}1&1&1&1\\ 0&L&0&-L\\ -L&0&L&0\\ \gamma&-\gamma&\gamma&-\gamma\end{bmatrix}\begin{bmatrix}T_{1}\\ T_{2}\\ T_{3}\\ T_{4}\end{bmatrix} \tag{16}\]
where \(L\) is the distance from the rotor axis of rotation to the quadrotor center of mass and \(\gamma=\frac{K_{M}}{K_{F}}\), where \(K_{M}\) and \(K_{F}\) are respectively the aerodynamic motor moment and force constants [42, Section 3.2].
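As a small illustration of Eq. (16), the mixing matrix can be formed and inverted to recover the individual motor thrusts from a commanded actuation vector; \(L\) and \(\gamma\) are supplied as inputs, and the helper name is illustrative.

```python
import numpy as np

def mixer_matrix(L, gamma):
    """Map motor thrusts [T1, T2, T3, T4] to U = [u1, u2, u3, u4], Eq. (16)."""
    return np.array([
        [1.0,    1.0,    1.0,    1.0],
        [0.0,    L,      0.0,   -L],
        [-L,     0.0,    L,      0.0],
        [gamma, -gamma,  gamma, -gamma],
    ])

# Motor thrusts realizing a commanded actuation vector U (the matrix is
# invertible for nonzero L and gamma):
# T = np.linalg.solve(mixer_matrix(L, gamma), U)
```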
### Simulation
For the scope of this manuscript, we focus on the simulation of a 3 DOF quadrotor system, as shown in Fig. 2. This design, also known as a planar quadrotor, restricts the quadrotor's motion to the \(y-z\) plane exclusively. We first use this simplified model to validate and test the SINDy algorithm on the problem at hand, before gradually transitioning to the full 6 DOF analysis of the comprehensive scenario described in Section III.B. The system's equations of motion are simulated in discrete time using the well-known fourth-order Runge-Kutta (RK4) numerical technique.
\[\mathbf{x}_{k+1}=\mathbf{f}_{\text{RK4}}(\mathbf{x}_{k},\mathbf{u}_{k},\Delta t) \tag{17}\]
where \(k\) is the discrete time index, \(\Delta t\) is the time step, and \(\mathbf{u}\) is the control actuation. For the complete formulation of RK4, readers may refer to [43].
The 3 DOF equations of motion are
\[\begin{split}\ddot{y}&=-\frac{u_{1}}{m}\sin\phi\\ \ddot{z}&=\frac{u_{1}}{m}\cos\phi-g\\ \ddot{\phi}&=\frac{u_{2}}{J_{x}}\end{split} \tag{18}\]
or in a matrix format as
\[\begin{bmatrix}\ddot{y}\\ \ddot{z}\\ \ddot{\phi}\end{bmatrix}=\begin{bmatrix}0\\ -g\\ 0\end{bmatrix}+\begin{bmatrix}-\frac{1}{m}\sin(\phi)&0\\ \frac{1}{m}\cos(\phi)&0\\ 0&\frac{1}{J_{xx}}\end{bmatrix}\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix} \tag{19}\]
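A minimal sketch of the simulation step of Eqs. (17)-(19) is given below; the standard gravitational acceleration of 9.81 m/s\({}^{2}\) and the function names are assumptions for illustration, while \(m\) and \(J_{x}\) are passed in (the values used in this work are listed further below).

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2] (standard value, assumed)

def planar_quad_dynamics(x, u, m, Jx):
    """State derivative of the 3 DOF quadrotor, Eq. (19).

    x = [y, z, phi, y_dot, z_dot, phi_dot], u = [u1, u2].
    """
    _, _, phi, y_dot, z_dot, phi_dot = x
    u1, u2 = u
    return np.array([
        y_dot,
        z_dot,
        phi_dot,
        -(u1 / m) * np.sin(phi),
        (u1 / m) * np.cos(phi) - G,
        u2 / Jx,
    ])

def rk4_step(f, x, u, dt, *args):
    """One fourth-order Runge-Kutta step, Eq. (17), with the control held over the step."""
    k1 = f(x, u, *args)
    k2 = f(x + 0.5 * dt * k1, u, *args)
    k3 = f(x + 0.5 * dt * k2, u, *args)
    k4 = f(x + dt * k3, u, *args)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```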
We consider the state vector \(\mathbf{x}=\begin{bmatrix}y,z,\phi,\dot{y},\dot{z},\dot{\phi}\end{bmatrix}^{\top}\). Equation 19 is then linearized about the hovering configuration using a first-order Taylor series approximation of the nonlinear terms, and a PD controller is developed for the robot.
The linearized version of Eqs. 19
\[\begin{split}\ddot{y}&=-g\phi\\ \ddot{z}&=-g+\frac{u_{1}}{m}\\ \ddot{\phi}&=\frac{u_{2}}{J_{x}}\end{split} \tag{20}\]
Figure 2: 3 DOF quadrotor with the inertial and the body reference frames.
We introduce a generic state variable \(\varrho\) that can represent \(y\), \(z\), or \(\phi\). For this generic state variable \(\varrho\) to be driven to a new desired state, the commanded acceleration \(\ddot{\varrho}_{c}\) is needed. To this end, we define the proportional and derivative errors as
\[e_{p}=\varrho_{d}-\varrho \tag{21}\]
\[e_{v}=\dot{\varrho}_{d}-\dot{\varrho}\]
where the \(\varrho_{d}\) is the desired value. Then we want both \(e_{p}\) and \(e_{v}\) to satisfy,
\[(\ddot{\varrho}_{d}-\ddot{\varrho}_{c})+k_{p}e_{p}+k_{v}e_{v}=0 \tag{22}\]
in order to guarantee the error's convergence under some values of \(k_{p}\) and \(k_{v}\). So
\[\begin{split} u_{1}&=mg+m\ddot{z}_{c}=m\left(\ddot{ z}_{d}+k_{v,z}\left(\dot{z}_{d}-\dot{z}\right)+k_{p,z}\left(z_{d}-z\right)+g \right)\\ u_{2}&=J_{x}\ddot{\phi}_{d}=J_{x}\left(\ddot{ \phi}_{c}+k_{v,\phi}\left(\dot{\phi}_{c}-\dot{\phi}\right)+k_{p,\phi}\left( \phi_{c}-\phi\right)\right)\\ \phi_{c}&=-\frac{\ddot{y}_{c}}{g}=-\frac{1}{g}\left( \ddot{y}_{d}+k_{v,y}\left(\dot{y}_{d}-\dot{y}\right)+k_{p,y}\left(y_{d}-y \right)\right)\end{split} \tag{23}\]
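The control law of Eq. (23) can be sketched as follows; the commanded roll rate and roll acceleration are taken as zero for simplicity, and the gain and variable names are illustrative.

```python
import numpy as np

def pd_control(x, des, gains, m, Jx, g=9.81):
    """PD controller of Eq. (23) for the linearized planar quadrotor (sketch).

    x     : [y, z, phi, y_dot, z_dot, phi_dot]                current state
    des   : (y_d, z_d, y_dot_d, z_dot_d, y_ddot_d, z_ddot_d)  desired trajectory point
    gains : dict with kp_y, kv_y, kp_z, kv_z, kp_phi, kv_phi
    """
    y, z, phi, y_dot, z_dot, phi_dot = x
    y_d, z_d, y_dot_d, z_dot_d, y_ddot_d, z_ddot_d = des
    # Altitude loop: total thrust u1.
    u1 = m * (z_ddot_d + gains["kv_z"] * (z_dot_d - z_dot) + gains["kp_z"] * (z_d - z) + g)
    # Lateral loop: commanded roll angle phi_c from the commanded y-acceleration.
    y_ddot_c = y_ddot_d + gains["kv_y"] * (y_dot_d - y_dot) + gains["kp_y"] * (y_d - y)
    phi_c = -y_ddot_c / g
    # Attitude loop: rolling moment u2 (commanded roll rate/acceleration set to zero here).
    u2 = Jx * (gains["kv_phi"] * (0.0 - phi_dot) + gains["kp_phi"] * (phi_c - phi))
    return np.array([u1, u2])
```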
From this, we can simulate the 3 DOF quadrotor based on different desired trajectory cases found in Table 1.
We set the _mass_ of the quadrotor to 0.18 kg, the _arm length_ to 0.086 m, and the quadrotor's _moment of inertia_ (\(J_{x}\)) to 0.00025 kg\(\cdot\)m\({}^{2}\).
The response of the three cases is summarized in Figure 3.
The resulting data from the numerical simulation will be used as training and test data for the SINDy algorithm in the upcoming section.
## 4 Methodology
### SINDy With Control Framework
Nonlinear dynamical systems can be represented as
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}) \tag{24}\]
and we consider the more general case in which a control input enters the nonlinear dynamics, described as
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u}) \tag{25}\]
where \(\mathbf{x}\in\mathbb{R}^{n}\) is the system state vector, \(\mathbf{u}\in\mathbb{R}^{q}\) is the control vector, and \(\mathbf{f}\) is a nonlinear function such that \(\mathbf{f}:\mathbb{R}^{n}\times\mathbb{R}^{q}\longrightarrow\mathbb{R}^{n}\).
The SINDy with control technique utilizes sparse regression to identify a minimal set of active terms from a candidate library \(\mathbf{\Pi}(\mathbf{x},\mathbf{u})\) of linear and nonlinear terms in the system states \(\mathbf{x}\) and actuation variables \(\mathbf{u}\) that approximate the underlying nonlinear dynamics represented by \(\mathbf{f}\), on the premise that a considerable number of systems manifest relatively sparse active terms in their dynamics. To construct \(\mathbf{\Pi}\), we measure \(m\) snapshots of the state \(\mathbf{x}\) and the actuation \(\mathbf{u}\), either by experiment or from numerical simulation.
\[\mathbf{X}=\begin{bmatrix}\mathbf{x}^{\top}(t_{1})\\ \mathbf{x}^{\top}(t_{2})\\ \mathbf{x}^{\top}(t_{3})\\ \vdots\\ \mathbf{x}^{\top}(t_{m})\end{bmatrix}=\begin{bmatrix}x_{1}(t_{1})&x_{2}(t_{1})& \ldots&x_{n}(t_{1})\\ x_{1}(t_{2})&x_{2}(t_{2})&\ldots&x_{n}(t_{2})\\ x_{1}(t_{3})&x_{2}(t_{3})&\ldots&x_{n}(t_{3})\\ \vdots&\vdots&\ddots&\vdots\\ x_{1}(t_{m})&x_{2}(t_{m})&\ldots&x_{n}(t_{m})\end{bmatrix},\quad\mathbf{U}= \begin{bmatrix}\mathbf{u}^{\top}(t_{1})\\ \mathbf{u}^{\top}(t_{2})\\ \mathbf{u}^{\top}(t_{3})\\ \vdots\\ \mathbf{u}^{\top}(t_{m})\end{bmatrix}=\begin{bmatrix}\mathbf{u}_{1}(t_{1})&u_{2}(t_{1}) &\ldots&u_{q}(t_{1})\\ u_{1}(t_{2})&u_{2}(t_{2})&\ldots&u_{q}(t_{2})\\ u_{1}(t_{3})&u_{2}(t_{3})&\ldots&u_{q}(t_{3})\\ \vdots&\vdots&\ddots&\vdots\\ u_{1}(t_{m})&u_{2}(t_{m})&\ldots&u_{q}(t_{m})\end{bmatrix} \tag{26}\]
\begin{table}
\begin{tabular}{l l l l} \hline
**Case** & **Trajectory type** & **Initial states** & **Desired states** \\ \hline A & Step & \(x_{0}=[0,\ 0,\ 0,\ 0,\ 0,\ 0]^{\top}\) & \(x_{d}=[0.5,\ 0.2,\ 0,\ 0,\ 0,\ 0]^{\top}\) \\ B & Sine & \(x_{0}=[0,\ 0,\ 0,\ 0,\ 0,\ 0]^{\top}\) & \(x_{d}=[4,\ 0,\ 0,\ 0,\ 0,\ 0]^{\top}\) \\ C & Diamond & \(x_{0}=[0,\ 1.8,\ 0,\ 0,\ 0,\ 0]^{\top}\) & \(x_{d}=[0,\ 0,\ 0,\ 0,\ 0,\ 0]^{\top}\) \\ \hline \end{tabular}
\end{table}
Table 1: Simulation cases for different trajectories.
The matrix \(\mathbf{\Pi}\) can now be constructed as
\[\mathbf{\Pi}\left(\mathbf{X},\mathbf{U}\right)=\begin{bmatrix}\big|&\big|&\big|&\big|&&\big|&\big|&\\ \mathbf{1}&\mathbf{X}&\mathbf{U}&\mathbf{X}^{2}&\cdots&\sin\left(\mathbf{X}\right)&\cos\left(\mathbf{X}\right)&\cdots\\ \big|&\big|&\big|&\big|&&\big|&\big|&\end{bmatrix}\]
We denote the prediction of a generic state vector \(\mathbf{\varrho}\) as \(\hat{\mathbf{\varrho}}\), and the corresponding RMSE as \(\tilde{\varrho}\):
\[\tilde{\varrho}=\sqrt{\frac{\sum_{i=1}^{N}\left(\varrho_{i}-\hat{\varrho}_{i} \right)^{2}}{N}} \tag{30}\]
The RMSE of each state for the chosen \(\lambda\) in the planar quadrotor case is reported in Table 2.
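As a minimal illustration of Eq. (30), the per-state RMSE can be evaluated with a few lines of NumPy; the array shapes and synthetic data below are assumptions used only for demonstration.

```python
import numpy as np

def state_rmse(true_states, predicted_states):
    """Per-state root-mean-square error, following Eq. (30).

    Both arrays have shape (N, n_states), one row per snapshot in time.
    """
    error = true_states - predicted_states
    return np.sqrt(np.mean(error**2, axis=0))

# Synthetic example (shapes and noise level are illustrative assumptions)
rng = np.random.default_rng(0)
truth = rng.normal(size=(1000, 6))                        # N = 1000 snapshots, 6 states
prediction = truth + 1e-3 * rng.normal(size=truth.shape)
print(state_rmse(truth, prediction))                      # one RMSE value per state
```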
#### IV-A2 Candidate functions
In the present study, we exploit our comprehensive understanding of the physical system in hand. Specifically, we propose that the presence of nonlinearity in the system can be expressed through polynomials and Fourier basis functions, such as those involving sine and cosine. Through mathematical analysis of the system, we have discerned that from the entire state space of the system, only the Euler angles can be represented as Fourier basis functions, and the rest can be characterized by polynomials.
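A minimal sketch of how such a candidate library \(\mathbf{\Pi}(\mathbf{X},\mathbf{U})\) can be assembled is given below: polynomial terms in the states and inputs are combined with sine and cosine terms of the Euler angle, mirroring the choice described above. The exact set and ordering of columns are illustrative assumptions rather than the precise library used in this work.

```python
import numpy as np

def candidate_library(X, U, phi_index=2, poly_order=2):
    """Assemble a SINDy-style candidate library Pi(X, U).

    X : (N, n_states) snapshot matrix of states
    U : (N, n_inputs) snapshot matrix of control inputs
    phi_index : column of X holding the Euler angle (an assumption here)
    poly_order : highest polynomial order to include
    """
    N = X.shape[0]
    Z = np.hstack([X, U])                         # states and inputs together
    columns, names = [np.ones((N, 1))], ["1"]

    # polynomial terms (linear, and quadratic if requested)
    for i in range(Z.shape[1]):
        columns.append(Z[:, [i]])
        names.append(f"z{i}")
    if poly_order >= 2:
        for i in range(Z.shape[1]):
            for j in range(i, Z.shape[1]):
                columns.append((Z[:, i] * Z[:, j])[:, None])
                names.append(f"z{i}*z{j}")

    # Fourier terms of the Euler angle, alone and multiplied by each input
    phi = X[:, phi_index]
    for func, fname in ((np.sin, "sin"), (np.cos, "cos")):
        columns.append(func(phi)[:, None])
        names.append(f"{fname}(phi)")
        for k in range(U.shape[1]):
            columns.append((U[:, k] * func(phi))[:, None])
            names.append(f"u{k}*{fname}(phi)")

    return np.hstack(columns), names
```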
### Training
The numerical data from the 3 DOF planar quadrotor simulation were utilized. We first chose Case C for training because its trajectory allows the quadrotor to span the entire state space, making it the ideal case to start with. The other cases subsequently demonstrated that the algorithm is generic with respect to the trajectory: as long as the trajectory provides sufficient data and spans the state space appropriately, the algorithm successfully captures the complete dynamics. If, on the other hand, the quadrotor follows a signal that moves it only in a straight line along the \(y\) or \(z\) axis, the algorithm may fail to capture the unseen states, resulting in an incomplete identification of the dynamics.
We used 1000 snapshots in time with \(\Delta t=0.05\). The states were differentiated using a first-order finite-difference numerical scheme.
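The state derivatives required by the regression can be estimated from these snapshots with a first-order finite difference, as in the short sketch below; the boundary handling (repeating the last row) is a simple convention assumed here.

```python
import numpy as np

def finite_difference(X, dt=0.05):
    """First-order (forward) finite-difference estimate of dX/dt.

    X  : (N, n_states) array of snapshots sampled every dt
    dt : sampling interval between snapshots
    """
    dX = np.empty_like(X)
    dX[:-1] = (X[1:] - X[:-1]) / dt   # forward difference
    dX[-1] = dX[-2]                   # repeat the last row (boundary convention)
    return dX
```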
## V Results and Discussion
### Discovered Model
The model is trained with the data extracted from Case C, as discussed earlier, and yields the following dynamics:
\[\begin{split}\dot{y}&=\dot{y}\\ \dot{z}&=\dot{z}\\ \dot{\phi}&=0.993\dot{\phi}\\ \ddot{y}&=-5.549u_{1}\sin(\phi)\\ \ddot{z}&=-9.811+5.556u_{1}\cos(\phi)\\ \ddot{\phi}&=4000.000u_{2}\end{split} \tag{31}\]
This nearly matches the mathematical model originally derived from first principles. Table 3 compares the dynamics discovered by SINDy with the ground-truth mathematical model. We also trained the model on the other cases, which resulted in models comparable to the original one, as discussed in Section IV.B.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline State & \(y\) & \(z\) & \(\phi\) & \(\dot{y}\) & \(\dot{z}\) & \(\dot{\phi}\) \\ \hline RMSE \(\times 10^{-3}\) & 0.0088 & 0.0152 & 0.0227 & 0.0294 & 0.0070 & 0.1379 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The RMSE for the chosen \(\lambda=0.45\) and with diamond shaped trajectory.
### Testing
#### 2.2.1 Case A
Here we simulate the discovered dynamics using the step trajectory as the desired trajectory. We compare the system behaviour over time for both the discovered SINDy dynamics and the ground truth. Figure 3(b) shows the absolute error between the predicted states \(\hat{\varrho}\) and the original states \(\varrho\).
The results show that the absolute error approaches zero, giving strong validation for the identified model. This shows that the SINDy algorithm accurately captures the underlying dynamics of the system, closely matching the ground truth dynamics. The relatively small error between anticipated and original states confirms the discovered model's efficacy and reliability in capturing the main characteristics of quadrotor dynamics.
The close agreement between the identified dynamics and the ground truth confirms the SINDy algorithm's utility as a good tool for extracting mathematical models from data.
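The comparison reported in Fig. 3(b) (and in Fig. 4(b) for the next case) amounts to integrating the identified and the ground-truth vector fields under the same inputs and taking the absolute difference of the resulting states. The sketch below reproduces this check for the dynamics of Eq. (31); the open-loop control signal is a placeholder assumption, since in the paper the inputs are generated by the tracking controller described earlier.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sindy_rhs(t, s, u):
    """Identified planar-quadrotor dynamics, Eq. (31); s = [y, z, phi, ydot, zdot, phidot]."""
    _, _, phi, yd, zd, phid = s
    u1, u2 = u(t)
    return [yd, zd, 0.993 * phid,
            -5.549 * u1 * np.sin(phi),
            -9.811 + 5.556 * u1 * np.cos(phi),
            4000.0 * u2]

def truth_rhs(t, s, u):
    """Ground-truth model derived from first principles (Table 3)."""
    _, _, phi, yd, zd, phid = s
    u1, u2 = u(t)
    return [yd, zd, phid,
            -5.5556 * u1 * np.sin(phi),
            -9.81 + 5.556 * u1 * np.cos(phi),
            4000.0 * u2]

# Placeholder open-loop inputs (assumption; the paper uses closed-loop control)
u = lambda t: (9.81 / 5.556 + 0.1 * np.sin(t), 1e-4 * np.sin(2.0 * t))
s0 = np.zeros(6)
t_eval = np.linspace(0.0, 5.0, 500)
sol_id = solve_ivp(sindy_rhs, (0.0, 5.0), s0, t_eval=t_eval, args=(u,))
sol_gt = solve_ivp(truth_rhs, (0.0, 5.0), s0, t_eval=t_eval, args=(u,))
abs_err = np.abs(sol_id.y - sol_gt.y)   # analogue of the error curves in Figs. 3(b)/4(b)
print(abs_err.max(axis=1))
```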
#### 2.2.2 Case B
As in the previous section, we simulate the discovered dynamics using the sinusoidal trajectory as the desired trajectory. We compare the system behaviour over time for both the discovered SINDy dynamics and the ground truth. Figure 4(b) shows the absolute error between the predicted states \(\hat{\varrho}\) and the original states \(\varrho\). The error is again nearly zero, which validates the discovered model.
## 3 Conclusion
In this study, we have employed the SINDy algorithm to extract a mathematical model from simulation data of a quadrotor. Initially, the quadrotor dynamics were modeled in the 6 DOF configuration. Subsequently, for the purpose of simplification and model validation, the system was constrained to a 3 DOF representation within a 2-dimensional plane.
To assess the effectiveness of the SINDy algorithm, three distinct simulation cases, as depicted in Figure 3, were considered. A systematic exploration of the hyperparameter \(\lambda\) space was conducted. Through this thorough analysis, we successfully identified the optimal model with an associated \(\lambda\) value of 0.45, as demonstrated in Table 1, based on the RMSE metric.
Table 3 and Figures 3(b), 4(b) show that the algorithm captured almost the same dynamics as the quadrotor ground truth mathematical model.
This study demonstrates the SINDy method as a powerful tool for obtaining solid mathematical models from data, which will aid in the advancement of modeling in the field of aerospace applications, particularly UAVs. Furthermore, in a future version of this research, we will analyze the algorithm's resilience in the face of noisy measurements, in order to ensure its robustness in real-world circumstances and to understand how measurement errors affect the identified model's accuracy and reliability.
Given the occurrence of mathematically unmodeled disturbances and uncertainties in real-world scenarios, future research should concentrate on building discrepancy models capable of properly capturing and accounting for these aspects. Incorporating these models with the SINDy algorithm can improve its ability to deal with disturbances, resulting in more robust and accurate predictions.
Furthermore, we intend to extend the SINDy algorithm's applicability to the full 6 DOF quadrotor model, encompassing all elements of its dynamic behavior. A more thorough understanding of the quadrotor's dynamics can be
\begin{table}
\begin{tabular}{l l l} \hline \hline
**States** & **SINDy** & **Mathematical Model** \\ \hline \(\dot{y}\) & \(1.000\dot{y}\) & \(1.000\dot{y}\) \\ \(\dot{z}\) & \(1.000\dot{z}\) & \(1.000\dot{z}\) \\ \(\dot{\phi}\) & \(0.993\dot{\phi}\) & \(1.000\dot{\phi}\) \\ \(\ddot{y}\) & \(-5.549\ u_{1}\sin(\phi)\) & \(-5.5556\ u_{1}\ \sin(\phi)\) \\ \(\ddot{z}\) & \(-9.811+5.556\ u_{1}\cos(\phi)\) & \(-9.81+5.556\ u_{1}\cos(\phi)\) \\ \(\ddot{\phi}\) & \(4000.000\ u_{2}\) & \(4000.000\ u_{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between the discovered dynamics and ground truth mathematical model.
obtained by considering the full model, including both translational and rotational motion.
There will also be an in-depth examination of the optimizers used in the SINDy algorithm. Examining various optimization strategies and their impact on the quality of the identified model can result in improvements in convergence speed, accuracy, and the capacity to handle complicated datasets successfully.
By addressing these areas of future research, we can further enhance the effectiveness and applicability of the SINDy algorithm in modeling complex dynamical systems, such as quadrotors, and pave the way for its utilization in various practical applications.
## Acknowledgments
Z. M would like to acknowledge the fruitful discussions with Dr. Ayman Abdallah and his continuous support and encouragement throughout the project.
|
2305.15009 | Creation of NV centers in diamond under 155 MeV electron irradiation | Single-crystal diamond substrates presenting a high concentration of
negatively charged nitrogen-vacancy centers (NV-) are on high demand for the
development of optically pumped solid-state sensors such as magnetometers,
thermometers or electrometers. While nitrogen impurities can be easily
incorporated during crystal growth, the creation of vacancies requires further
treatment. Electron irradiation and annealing is often chosen in this context,
offering advantages with respect to irradiation by heavier particles that
negatively affect the crystal lattice structure and consequently the NV-
optical and spin properties. A thorough investigation of electron irradiation
possibilities is needed to optimize the process and improve the sensitivity of
NV-based sensors. In this work we examine the effect of electron irradiation in
a previously unexplored regime: extremely high energy electrons, at 155 MeV. We
develop a simulation model to estimate the concentration of created vacancies
and experimentally demonstrate an increase of NV- concentration by more than 3
orders of magnitude following irradiation of a nitrogen-rich HPHT diamond over
a very large sample volume, which translates into an important gain in
sensitivity. Moreover, we discuss the impact of electron irradiation in this
peculiar regime on other figures of merits relevant for NV sensing, i.e. charge
state conversion efficiency and spin relaxation time. Finally, the effect of
extremely high energy irradiation is compared with the more conventional low
energy irradiation process, employing 200 keV electrons from a transmission
electron microscope, for different substrates and irradiation fluences,
evidencing sixty-fold higher yield of vacancy creation per electron at 155 MeV. | Elena Losero, Valentin Goblot, Yuchun Zhu, Hossein Babashah, Victor Boureau, Florian Burkart, Christophe Galland | 2023-05-24T10:48:47Z | http://arxiv.org/abs/2305.15009v1 | # Creation of NV centers in diamond under 155 MeV electron irradiation
###### Abstract
Single-crystal diamond substrates presenting a high concentration of negatively charged nitrogen-vacancy centers (NV\({}^{-}\)) are on high demand for the development of optically pumped solid-state sensors such as magnetometers, thermometers or electrometers. While nitrogen impurities can be easily incorporated during crystal growth, the creation of vacancies requires further treatment. Electron irradiation and annealing is often chosen in this context, offering advantages with respect to irradiation by heavier particles that negatively affect the crystal lattice structure and consequently the NV\({}^{-}\) optical and spin properties. A thorough investigation of electron irradiation possibilities is needed to optimize the process and improve the sensitivity of NV-based sensors. In this work we examine the effect of electron irradiation in a previously unexplored regime: extremely high energy electrons, at 155 MeV. We develop a simulation model to estimate the concentration of created vacancies and experimentally demonstrate an increase of NV\({}^{-}\) concentration by more than 3 orders of magnitude following irradiation of a nitrogen-rich HPHT diamond over a very large sample volume, which translates into an important gain in sensitivity. Moreover, we discuss the impact of electron irradiation in this peculiar regime on other figures of merits relevant for NV sensing, i.e. charge state conversion efficiency and spin relaxation time. Finally, the effect of extremely high energy irradiation is compared with the more conventional low energy irradiation process, employing 200 keV electrons from a transmission electron microscope, for different substrates and irradiation fluences, evidencing sixty-fold higher yield of vacancy creation per electron at 155 MeV.
## 1 Introduction
Negatively charged nitrogen vacancy (NV\({}^{-}\)) centers in diamond have attracted a lot of attention in the last decades for their promising sensing capabilities under ambient conditions, particularly for measuring magnetic fields, electric fields and temperature [1, 2, 3, 4]. The sensing mechanism is typically based on optical polarization and readout of their spin states [5]. Both single NVs [6, 7, 8] and ensembles [9, 10] can be used. The first approach offers atomic spatial resolution and high readout contrast, but the photon flux is low and limits the absolute sensitivity. On the other hand, NV ensembles are an attractive option for large scale applications: in this case the spatial resolution is given by the optically probed volume and the extracted signal scales with \(N\), while the photon shot noise with \(\sqrt{N}\), \(N\) being the number of excited NV centers. Therefore, having a high NV density is of utmost importance for achieving a high signal-to-noise ratio and sensitivity [11].
A lot of efforts have been invested in engineering diamond with high NV concentration and long coherence time for quantum sensing applications. One promising approach focuses on tailoring NV properties during the growth of diamond itself [12, 13, 14]. However, despite impressive progress in this direction, these processes are still not standard. They are typically employed for thin layers only and result in expensive end products. Another possible approach consists in increasing the NV density inside more affordable commercially available diamond substrates, synthesized with conventional techniques. This is the approach we focus on in this work.
In order to create new NV centers in a pristine diamond substrate, substitutional nitrogen atoms (N) have to bond with vacancies (V) in the diamond lattice. Nitrogen is naturally present in most commercially available diamond samples, with very different concentrations [N] depending on the growth technique [15]. Note that throughout this work, we mainly use ppm as concentration measurement unit. Considering diamond lattice parameters, we recall that 1 ppm = \(1.75\cdot 10^{17}\) atoms\(\cdot\)cm\({}^{-3}\). High-pressure-high-temperature (HPHT) diamonds (type Ib) are low cost and high yield and typically present a large amount of non-intentionally doped nitrogen ([N]\(\sim\)10-300 ppm), while chemical vapour deposition (CVD) synthesis allows for a better control of nitrogen concentration. 'Optical grade' CVD-grown substrates typically present nitrogen concentrations of few ppm and are routinely produced by many suppliers.
In contrast, all commercially available diamond substrates usually contain a negligible amount of vacancies. The most common strategies for vacancy creation are ion irradiation [16, 17, 18, 10] (C, N, He, H, etc.) and electron irradiation [19, 20, 10, 21, 22, 23, 24]. Additional techniques were proposed and experimentally demonstrated, including laser writing [25] and neutron irradiation [26], among others. Following irradiation, an annealing process allows vacancies to move and bond with the substitutional nitrogen atoms, converting a fraction of each of these two constituents into the desired NV centers. Vacancy migration is typically achieved through thermal annealing above 600\({}^{\circ}\)C, which is the threshold temperature at which vacancies become mobile [27]. A standard annealing process consists of a few hours at 700-900\({}^{\circ}\)C. Low pressure is used to avoid graphitization. Annealing may also be performed directly during irradiation, to lower the formation of vacancy complexes [28, 29, 30, 31].
Vacancy creation is thus the main tool for engineering NV centers post diamond synthesis. In particular, electron irradiation offers advantages compared to heavier particles in terms of the induced lattice damage [32]. Historically, electron irradiation was first performed at acceleration energies of a few MeV (2-10 MeV), with large beams uniformly irradiating the whole sample area and resulting in penetration depths up to a few millimeters. Later, low energy irradiation was explored using transmission electron microscopes (TEM), with electrons typically accelerated at 200 keV [33, 34, 21, 35]. This value is close to the minimum electron energy necessary to create a vacancy, estimated as \(\sim\)165 keV in [36]. In this energy regime the penetration depth and the beam diameter are typically lower (a few tens of \(\mu\)m), but these features can be used to create NV ensembles located in limited volumes. Moreover, TEM microscopes are available in many institutes, making the process widely accessible. However, the regime of extremely high energy irradiation, above 100 MeV, remains unexplored. With the growth of applications for quantum sensing with NV ensembles, this regime could present interesting opportunities for NV center engineering, in particular for high density ensembles over macroscopic volumes.
Here, we investigate the effect of extremely high energy (155 MeV) electron irradiation of bulk diamond for NV creation and NV sensing applications. First, we develop a new simulation tool, able to evaluate the number of vacancies created by electrons with energy above 100 MeV. Simulation of vacancy creation is crucial to set the optimal irradiation parameters. As a rule of thumb, the concentration of created vacancies should be of the order of half the concentration of substitutional nitrogen present in the substrate, so that all vacancies can bond to a nitrogen atom and trap a free electron from a second nitrogen atom [32]. Without sufficient nitrogen to act as electron donors, most NVs are in their neutral charge state NV\({}^{0}\), which cannot be used in spin-based sensing protocols. In most experimental papers the vacancy concentration [V] is extrapolated from a couple of theoretical papers [36, 37], where simulation was limited to the regime of a few MeV. These results cannot be extrapolated to much higher electron energy, in particular due to the onset of secondary cascade
processes. Our model describes vacancy creation in all energy regimes, from a few keV to several hundred MeV.
We then perform electron irradiation with electrons at 155 MeV, on a commercially available HPHT sample ([N] \(\lesssim\) 200 ppm). Irradiation is performed in a linear accelerator at the Deutsches Elektronen-Synchrotron (DESY). We characterize the obtained sample using different techniques to confirm that the lattice quality is not degraded and to quantify the sensing performances of the created NV centers. We report an increase in NV\({}^{-}\) density by more than a factor 2000, up to 0.6 ppm, over the entire sample length (3 mm) along the beam propagation axis, and unaffected optically detected magnetic resonance (ODMR) contrast and linewidth. This corresponds to an increase in projected sensitivity by a factor \(\sim 45\) with respect to the sample before irradiation.
Finally, the results are also compared with those obtained using low energy irradiation. Two different substrates, HPHT and CVD, presenting different levels of initial nitrogen concentration ([N]\(\lesssim\)200 ppm and [N]\(\lesssim\)1 ppm, respectively) are investigated. A TEM is used to irradiate several areas (diameter \(\sim\) 15 \(\mu\)m) on the substrates, with 200 keV electrons and different irradiation fluences. For a same substrate and fluence, we observe that the 155 MeV irradiation provides \(\sim\)60 times higher NV\({}^{-}\) concentration, in qualitative agreement with our simulation. Moreover, we show in the 200 keV-irradiated samples that conversion efficiency from V to NV\({}^{-}\) saturates as electron fluence is increased, with a saturation threshold fixed by [N], and clearly identify the presence of the same mechanism in the 155 MeV-irradiated sample.
Overall, our work opens a new path of exploration for NV center engineering in diamond with electron irradiation, considering a new electron energy irradiation regime. A peculiar aspect of the ultra-high energy regime is that it allows to create NV\({}^{-}\) ensembles well suited to quantum sensing over macroscopic volumes: we estimate that vacancies can be efficiently generated through more than 10 cm of diamond, permitting massive saving in batch irradiation costs if hundreds of substrates are stacked together, for example.
## 2 Simulation
Electron trajectories inside diamond can be computed with the CASINO software, a freely available Monte Carlo simulation tool [38]. We use it to estimate the electron penetration depth. The results obtained are reported in Fig. 1 a and b, for extremely high energy (155 MeV) and low energy (200 keV) respectively. 200 electron trajectories are displayed in both cases. In the first case, almost all electrons go through the 3 mm diamond in a straight line, due to the very high electron energy. In the second case, their low kinetic energy does not allow electrons to exit from the sample, the penetration depth being limited to slightly more than 100 \(\mu\)m. Vacancies (and thus NV centers) are here created in a shallower layer only, limited by the minimum energy that an electron should have to displace a carbon atom.
However, CASINO does not model the generation of crystal defects induced by the electron beam interaction with the sample. In particular, the density of created vacancies cannot be directly estimated with this software. Especially at very high energies, the effect of the displaced carbon atoms is expected to play a crucial role since they have enough energy to displace other atoms through a cascade phenomenon [36, 39]. This secondary process is not implemented in CASINO, which only considers the primary electron-carbon interaction. In [36] a different Monte Carlo simulation tool is proposed, to discuss the damage induced by electrons in diamond, in the range from 0.25 to 10 MeV. To explore the effect of electron irradiation in the regime of ultra high energy, where radiative energy losses of electrons and vacancies generated by cascade phenomenon are prevailing, we develop an in-house model that is able to provide reliable results in all energy regimes from few keV to hundreds of MeV.
The simulation has been implemented using the proprietary coding language of Gatan Microscopy Suite (GMS 3.5) software and is available upon request. First, the energy profile of the electron beam is calculated versus the propagation depth in the material. For this purpose, the energy losses by ionization, predominant at low beam energies (e.g. 200 keV), and the radiative energy losses by bremsstrahlung, predominant at very high beam energies (e.g. 155 MeV), are considered. Second, the
Figure 1: a-b) Trajectories of 200 electrons in diamond, simulated using CASINO software. a) 155 MeV electron irradiation, 500 \(\mu\)m beam diameter. b) 200 keV electron irradiation, 15 \(\mu\)m beam diameter. c-d) Simulation of the number of vacancies per centimeter depth generated by 1 accelerated electron at c) 155 MeV energy and d) 200 keV energy. The contribution of the initial recoiled atoms (filled light gray) and the secondary cascade process (filled dark gray) are highlighted. Note the different axes limits in the two cases.
density of lattice vacancies is calculated versus the depth in the diamond crystal. For this purpose, the energy transfer events from the fast electrons to the carbon nuclei are considered. If the energy transfer is higher than the threshold energy capable of ejecting an atom from the crystal lattice, a vacancy-interstitial pair is created in the crystal. This displacement threshold energy is around 35 eV in diamond [36], and corresponds to the maximal energy transferable by a 165 keV electron. If a recoiled carbon atom has an energy higher than the displacement threshold energy, it can then generate further vacancies through atom-atom collisions, known as cascade process. It has been predicted with molecular dynamics that 50% of the displaced carbon atoms in cascade process recombine and do not generate stable vacancies in diamond [40]; this phenomenon is included in our simulation. The details of the model are described in the Supplementary Information.
Figs. 1c and d show the number of vacancies per centimeter depth generated by a single accelerated electron. We observe that the vacancy generation through cascade process is not activated for 200 keV beam while it is predominant for 155 MeV beam. As complementary information, in Fig. S2b the number of generated vacancies versus the beam energy is reported. The 200 keV electrons are not able to generate vacancies after a propagation of 38 \(\mu\)m as the maximal energy transferable to a carbon atom is decreased below the displacement threshold energy at this depth. The vacancy generation of the 155 MeV beam is almost constant after a propagation depth of 3 mm in diamond, with a beam energy decreased to 149 MeV. At this energy the propagation depth exceeds by far the dimension of the sample under study: from the simulation results in Fig. 1c, the number of created vacancies, and consequently NV centers, remains high after a propagation of more than 10 cm inside the diamond. Multiplying the profile by the beam fluence gives the profile of created vacancy density, the knowledge of which helps to set the irradiation parameters for a desired application.
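The overall structure of this calculation can be sketched as follows: the beam energy is stepped through the crystal using ionization and radiative stopping powers, and at each depth the primary displacement rate is multiplied by a cascade multiplication factor and by the 50% survival fraction of displaced atoms. The stopping powers, displacement cross-section and cascade factor below are crude placeholders (assumptions for illustration only); the actual model relies on the proper relativistic cross-sections and tabulated energy losses.

```python
import numpy as np

E_D_MEV = 0.165     # minimum electron energy able to displace a carbon atom (MeV)
N_C = 1.76e23       # carbon atom density of diamond (atoms/cm^3)
SURVIVAL = 0.5      # fraction of displaced atoms that do not recombine

def stopping_power(E_MeV):
    """Total energy loss dE/dx in MeV/cm (placeholder values, assumption)."""
    ionization = 3.5                  # roughly constant for relativistic electrons
    radiative = E_MeV / 12.0          # bremsstrahlung ~ E / X0, with X0 ~ 12 cm
    return ionization + radiative

def displacement_cross_section(E_MeV):
    """Cross-section (cm^2) for transferring more than ~35 eV to a carbon nucleus.
    Placeholder: zero below threshold, constant above (assumption)."""
    return 0.0 if E_MeV < E_D_MEV else 25e-24

def cascade_factor(E_MeV):
    """Average number of stable vacancies per primary displacement (placeholder)."""
    return 1.0 if E_MeV < 1.0 else 1.0 + 2.0 * np.log10(E_MeV)

def vacancies_per_cm(E0_MeV, depth_cm, steps=1000):
    """Vacancies generated per electron and per cm of path, versus depth."""
    z = np.linspace(0.0, depth_cm, steps)
    dz = z[1] - z[0]
    energy, profile = E0_MeV, np.zeros_like(z)
    for i in range(steps):
        if energy <= E_D_MEV:         # no further displacements possible
            break
        profile[i] = N_C * displacement_cross_section(energy) * cascade_factor(energy) * SURVIVAL
        energy -= stopping_power(energy) * dz
    return z, profile

z, per_electron = vacancies_per_cm(155.0, 0.3)        # 3 mm thick sample
density_ppm = per_electron * 1.5e18 / N_C * 1e6       # scaled by the fluence used here
```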
For a fluence of 1.5\(\cdot\)10\({}^{18}\) e/cm\({}^{2}\), the concentration of vacancies generated by a 155 MeV gaussian beam (diameter \(\sim\)500 \(\mu\)m) propagating for 3 mm in a diamond sample is shown in Fig. 2a. We predict no significant attenuation along the y-axis across the 3 mm path. The lateral profile (Fig. 2b) shows that for the chosen fluence the vacancy concentration is of similar magnitude as the nitrogen concentration in typical HPHT substrates, estimated by commercial providers as [N]\(\sim 200\) ppm. The gaussian beam profile allows to explore the impact of different fluences into the same diamond substrate. In comparison, Fig. 2c shows the concentration of vacancies obtained for a wide range of fluences. The gray line refers to the low energy irradiation regime (200 keV), while the 155 MeV case is reported in black. The horizontal dashed lines represent the [N] in the two considered substrates (CVD and HPHT respectively).
Figure 2: a-b) Simulation of the vacancy concentration (in ppm) generated by 155 MeV electrons (500 \(\mu\)m gaussian beam, maximum fluence \(\sim\) 1.5\(\cdot\)10\({}^{18}\) e\({}^{-}\)/cm\({}^{2}\)). a) xy-map, for the beam propagating in the y direction and b) lateral profile, compared with half the [N] present in the substrate. c) Vacancy concentration (in ppm) generated at different irradiation fluences by 200 keV electrons (gray line) and 155 MeV electrons (dark green line). The horizontal lines correspond to half the [N] present in HPHT and CVD substrates, respectively.
## 3 Irradiation experiments
### Extremely high energy electron irradiation
The 155 MeV electron irradiation took place at ARES (Accelerator Research Experiment at SINBAD) at DESY, Hamburg. ARES is a normal-conducting linear accelerator for research and development purposes with ultra-short electron bunches. This experiment was installed at the in-air experimental station located at the end of the accelerator. A dedicated holder was designed in order to properly align the sample to the beam, with 1 \(\mu\)m accuracy. The experiment was performed on a commercial HPHT diamond sample from Element6 (3\(\times\)3\(\times\)0.3 mm\({}^{3}\), [N]\(\sim\) 200 ppm). The beam presents a gaussian profile and passes through the sample along the \(y-\)axis, as schematically reported in Fig. 3a. The diameter of the beam is estimated to be \(2r\sim(500\pm 50)\)\(\mu\)m.
According to the results presented in Fig. 2a and discussed in the previous section, we set the target fluence to \(1.5\cdot 10^{18}\) e/cm\({}^{2}\). To reach this value it was crucial to operate at very high bunch charges, up to 120 pC per pulse, with a repetition rate of 10 Hz. The sample was irradiated for 96 hours and the beam position was monitored regularly. Following the irradiation, a conventional low pressure thermal annealing step (800\({}^{\circ}\)C during 4 h under P=10\({}^{-6}\) mbar) was performed. Both the annealing and the following measurements were carried out at the Ecole Polytechnique Federale de Lausanne (EPFL). In the following, this sample is referred to as Sample 1.
### Low energy electron irradiation
In parallel, we performed a more standard irradiation process, using low energy electrons from a TEM available at the Interdisciplinary Centre for Electron Microscopy - EPFL. We used a Talos F200S TEM providing electrons at 200 keV within a homogeneous beam of 15 \(\mu\)m diameter. The irradiation geometry is reported in Fig. 3b. Moving the sample while changing the beam current and the irradiation time allowed us to easily control the irradiation fluence from one region to the other, within the same sample. As reported in Fig. 2c, tested fluences cover a wide range from \(1\cdot 10^{16}\) to
Figure 3: Schematic of the electron irradiation experiments using a) 155 MeV electrons in a \(\sim\)500 \(\mu\)m beam with a fluence of \(1.5\cdot 10^{18}\) e/cm\({}^{2}\) at ARES (DESY) and b) 200 keV electrons in a 15 \(\mu\)m beam using a TEM. Ten different locations with fluences in the range of \(1\cdot 10^{16}\) to \(5\cdot 10^{20}\) e/cm\({}^{2}\) were irradiated on each of two samples (1. HPHT and 2. CVD), presenting a different initial nitrogen concentration.
\(5\cdot 10^{20}\) e/cm\({}^{2}\). The irradiation time for the highest fluence was around 15 minutes, with a flux density up to \(1\cdot 10^{18}\) e/s/cm\({}^{2}\). Two different samples were subjected to the same irradiation procedure: an HPHT sample from Element6 (3\(\times\)3\(\times\)0.3 mm\({}^{3}\), [N]\(\lesssim\)200 ppm, belonging to the same batch as Sample 1), and a CVD sample from Appsilon (1\(\times\)3\(\times\)0.3 mm\({}^{3}\), [N]\(\lesssim\)1 ppm). Also in this case, following irradiation, the same conventional low pressure thermal annealing step as above was performed.
## 4 Results and discussion
The simulation we develop describes well the penetration depth and gives us an estimate of the concentration of created vacancies, thus offering a guide in setting the irradiation parameters and an important tool to interpret the results. In the following we carefully characterize, with different techniques, the obtained samples, in order to experimentally explore the impact of electron irradiation on the crystal lattice and investigate the properties of the created NV\({}^{-}\), having the quantum sensing applications in mind.
### Extremely high energy electron irradiation
Important changes in the diamond sample induced by the irradiation and annealing processes can already be appreciated by looking at the sample with the naked eye, as reported in Fig. 4b. In particular, the central region, corresponding to the highest irradiation fluence, \(1.5\cdot 10^{18}\) e/cm\({}^{2}\), presents a darker color than the non-irradiated region at the edge of the specimen. We refer to the latter as the pristine region. To better investigate the nature of the defects created during irradiation and annealing, we first compare the UV/Visible absorption spectrum at the position of the highest fluence with the one from the pristine region. The results are reported in Fig. 4a. The light green line corresponds to the pristine region and presents a yellow transmission window, typical in HPHT diamonds due to the high N concentration and responsible for their yellow color [41]. The dark green curve was acquired in the highest fluence region: the NV\({}^{-}\) absorption peak at 638 nm and its phonon sideband appear, clearly showing that the process induces an enhancement of the NV\({}^{-}\) concentration. This spectrum can be decomposed into the NV\({}^{-}\) peak and its sideband plus a broad absorption background, responsible for the dark color of the irradiated region. This second feature could be associated with large vacancy clusters, which may be produced by the ultra-high energy electron irradiation process [41, 42]. Note
Figure 4: a) UV/Visible absorption spectroscopy. The light green curve corresponds to the pristine region, the dark green curve to the irradiated one. Data are acquired from 0.2x3 mm\({}^{2}\) stripes. b) Optical picture of the sample after irradiation: a dark line corresponding to the irradiated region is clearly visible.
that the spatial resolution of these UV/Visible absorption measurements is poor: each curve in Fig. 4 is an average over a 0.2x3 mm\({}^{2}\) stripe.
To gain more insight into possible structural damages of the crystal lattice, we use Raman spectroscopy [43, 44]. The results obtained under 785 nm laser excitation are reported in Fig. 5. The Raman spectra presented in Fig. 5a are acquired at different \(x\), for \(z\sim 0\) and \(y\sim 1500\)\(\mu\)m (refer to Fig. 3a for the coordinate system). The dashed line corresponds to the spectrum acquired in a pristine region of the sample. For each spectrum, the main peak is fitted with a Lorentzian. Variations of the fit parameters (full width half maximum (FWHM), height and background offset) within a \(xz-\)plane transverse to the irradiation beam (at \(y\sim 1500\)\(\mu\)m) are reported in Figs. 5b-d. We notice that the higher the irradiation fluence the higher the background, while the height and width do not significantly change. The FWHM agrees with typical values measured in pristine diamond, and no Raman peak downshift is observed, nor presence of "damage" peaks (typically reported at 1490 and 1630 cm\({}^{-1}\)). We did not observe any dependence of these features over the longitudinal \(y-\)axis. These results indicate that the treatment did not heavily affect the crystal structure [45]. The high background in the irradiated region could be due to the creation of defects exhibiting PL in the infrared. Vacancy-related defects such as large clusters of vacancies may be involved.
We now investigate the properties of the created NV centers using photoluminescence (PL) spectroscopy and optically detected magnetic resonance (ODMR). The two charge states, NV\({}^{-}\) and NV\({}^{0}\), present different PL spectra with a characteristic zero phonon line (ZPL) at 575 nm and 638 nm, respectively. Relying on a calibrated sample, PL spectroscopy can also be used for a quantitative estimate of the concentrations of the different defects. In Fig. 6, PL spectra obtained under 532 nm excitation wavelength in the maximally irradiated region and in the pristine one are compared. Note that the spectrum from the pristine region is multiplied by 1000.
In both spectra, we clearly recognize the contribution from NV\({}^{-}\) centers, with the ZPL peak at 638 nm and the associated broad phonon sideband peaking around 700 nm. A smaller peak, at 575 nm, is also visible in the spectrum of the irradiated region and corresponds to the ZPL from NV\({}^{0}\) centers. The mean PL increases by several orders of magnitude after the irradiation plus annealing process, indicating successful creation of NV\({}^{-}\) with high density. More precisely, comparing the spectra at different stages of the process (before irradiation, after irradiation and after irradiation plus annealing, see Fig. S5 in the Supplementary Material), we confirm that, even though the irradiation process already induces changes in the PL, the annealing step is crucial to drastically increase the number of NV\({}^{-}\) in the sample. This is explained by the mobility of the vacancies at high temperature, which are able to migrate over several nanometers and couple with the substitutional nitrogen atoms to form NV centers.
Fitting the NV\({}^{-}\) ZPL peak after background subtraction and comparing the peak area with the
Figure 5: Raman spectroscopy under 785 nm excitation wavelength. a) Raman spectra for different positions \(x\). Lorentzian fits of the main peak are used to extract \(xz\)-map of its FWHM (b), amplitude (c) and vertical offset (d).
one from an independently calibrated sample, the absolute NV\({}^{-}\) concentration can be estimated [10].
The [NV\({}^{-}\)] at varying \(x\) is reported in Fig. 7a. It reaches a maximum of 0.6 ppm at \(x=0\). A gaussian curve fits the measured density profile well, with a FWHM of \(462\pm 10\)\(\mu\)m. The same measurement was repeated at different positions on the \(y-\)axis, without significant differences, confirming that the effects of irradiation are constant along the electron beam propagation axis inside the diamond. In particular, no beam broadening was observed and the generation of NV centers appears constant at different depths. This is in good agreement with our model (Fig. 2a), which shows an almost negligible decrease of the generated vacancy density from 118 ppm to 117 ppm between the entrance and the exit of the high energy beam in the diamond crystal.
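In practice, this estimate reduces to fitting the 638 nm ZPL on a flat background, scaling its area by that of the calibrated reference, and fitting the lateral concentration profile with a Gaussian. The sketch below illustrates these steps; the reference area and concentration, fit windows and initial guesses are placeholders (assumptions), not the calibration actually used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, w, c):
    return a * (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2) + c

def zpl_area(wavelength_nm, counts, center=638.0, window=6.0):
    """Area of the NV- zero-phonon line, fitted on a flat background."""
    m = np.abs(wavelength_nm - center) < window
    p0 = [counts[m].max() - counts[m].min(), center, 2.0, counts[m].min()]
    (a, x0, w, c), _ = curve_fit(lorentzian, wavelength_nm[m], counts[m], p0=p0)
    return np.pi * a * w / 2          # analytic area of the Lorentzian peak

# Calibration against a reference sample of known concentration (values are assumptions)
REF_AREA, REF_PPM = 1.0e5, 1.0
def nv_concentration_ppm(wavelength_nm, counts):
    return REF_PPM * zpl_area(wavelength_nm, counts) / REF_AREA

def gaussian(x, a, x0, sigma):
    return a * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))

def profile_fwhm(x_um, ppm):
    """FWHM of the lateral [NV-] profile from a Gaussian fit."""
    (a, x0, sigma), _ = curve_fit(gaussian, x_um, ppm, p0=[ppm.max(), 0.0, 200.0])
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
```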
For NV sensing applications, a high concentration of NV\({}^{-}\) is not enough to achieve high performance. The presence of the neutral form of the defect, NV\({}^{0}\), is detrimental for sensing as it causes background luminescence, thus reducing the ODMR contrast [32]. Therefore a high [NV\({}^{-}\)] / [NV\({}_{tot}\)] ratio is desirable. We refer to this figure of merit as the charge state conversion efficiency \(\xi\) and compute it using the Debye-Waller decomposition method proposed in [46], which neglects the presence of NV\({}^{+}\) and takes into account the different response of NV\({}^{-}\) and NV\({}^{0}\) to 532 nm light:
\[\xi=\frac{I_{ZPL,-}e^{S_{-}}}{k_{532}I_{ZPL,0}e^{S_{0}}+I_{ZPL,-}e^{S_{-}}} \tag{1}\]
where \(I_{ZPL,-}\) and \(I_{ZPL,0}\) are the zero-phonon-line areas for NV\({}^{-}\) and NV\({}^{0}\), respectively, weighted by their Debye-Waller factors \(S_{-}=4.3\) and \(S_{0}=3.3\). \(k_{532}=2.5\) takes into account the different PL rates of the two charge states under 532 nm illumination. The resulting \(\xi\) is reported in Fig. 7b: in the irradiated region more than 90% of the NV centers are in the negatively charged state. The maximum value of \(\xi\) is at \(x\sim 500\)\(\mu\)m. The slightly decreased charge state conversion efficiency around \(x\sim 0\)\(\mu\)m may indicate that, at this location, there are not enough electron donors to efficiently convert NV\({}^{0}\) into NV\({}^{-}\). As suggested by the simulation shown in Fig. 2b, this could be due to a concentration of created vacancies too high for the available nitrogen content. Moreover, the presence of more defects behaving as electron traps is also possible.
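Once the two ZPL areas have been extracted from a spectrum, Eq. (1) translates directly into a few lines of code; the example areas below are made-up values used only for illustration.

```python
import numpy as np

S_MINUS, S_ZERO, K_532 = 4.3, 3.3, 2.5   # Debye-Waller factors and PL-rate ratio

def charge_state_efficiency(area_zpl_minus, area_zpl_zero):
    """Charge state conversion efficiency xi = [NV-]/[NVtot], Eq. (1)."""
    weight_minus = area_zpl_minus * np.exp(S_MINUS)
    weight_zero = K_532 * area_zpl_zero * np.exp(S_ZERO)
    return weight_minus / (weight_minus + weight_zero)

# Example with made-up ZPL areas (assumption, for illustration only)
print(charge_state_efficiency(area_zpl_minus=2.0e4, area_zpl_zero=1.5e3))
```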
Many NV-sensing protocols are based on the acquisition of ODMR spectra [1, 5, 47, 48] and T1 relaxometry [49, 50, 51, 52]. The performance of these protocols is related not only to the NV\({}^{-}\) density but also to the spin lifetimes and coherence properties of the centers, which are affected by the specific crystal lattice environment. We acquired continuous wave ODMR spectra in our home-made confocal microscope. More details on the set-up can be found in [53]. We used a permanent magnet to lift the zero-field splitting degeneracy and address a single transition.
An example of ODMR spectrum measured at the maximally irradiated region, \(x=0\), is reported in Fig. 8. From the ODMR spectrum it is possible to estimate the shot-noise sensitivity to magnetic
Figure 6: Photoluminescence spectroscopy under 532 nm excitation wavelength. PL spectra from the maximally irradiated area (dark green curve) and the pristine one (multiplied by 1000, light green curve) are compared.
Figure 7: Plot of relevant quantities versus \(x\), the distance from the maximal irradiation region. a) NV\({}^{-}\) concentration, fitted with a gaussian curve. b) Charge state conversion efficiency, \(\xi\), according to Eq. 1. c) Effective shot-noise sensitivity, according to Eq. 3. d) Values of the longitudinal spin relaxation time T\({}_{1}\).
field according to the formula [54]:
\[\eta(\mathrm{T}/\sqrt{\mathrm{Hz}})=\frac{4}{3\sqrt{3}}\frac{h}{g\mu_{B}}\frac{ \delta\nu}{C\cdot\sqrt{R}}, \tag{2}\]
where \(\delta\nu\) is the Lorentzian FWHM, \(C\) its contrast and \(R\) the rate of detected photons. The numeric prefactor depends on the resonance lineshape, which is assumed here to be a Lorentzian. \(\frac{g\mu_B}{h}=2.80\) MHz/G is the NV gyromagnetic ratio. To simplify the discussion, we introduce an effective sensitivity that does not take into account the collection efficiency of the setup:
\[\eta_{eff}(\mathrm{a.u.}/\sqrt{\mathrm{Hz}})=\frac{\delta\nu}{C\cdot\sqrt{A}} \propto\eta(\mathrm{T}/\sqrt{\mathrm{Hz}}), \tag{3}\]
where both \(\delta\nu\) and \(C\) are extracted by fitting the ODMR spectrum with a Lorentzian (see Fig. 8). \(A\) is the voltage measured by the photodiode, proportional to the PL count rate. In the Supplementary Material (see Fig. S6b, c and d), the linewidth, contrast and amplitude extracted from the fit at different positions \(x\) are reported. No significant trend is visible in the linewidth; the contrast shows some modulation that closely follows the behavior of the charge state ratio \(\xi\), while the PL presents a clear increase around \(x=0\)\(\mu\)m, as expected. The effective sensitivity \(\eta_{eff}\) at different distances from the highest irradiation line is reported in Fig. 7c. We demonstrate an improvement of \(\sim\)40 times compared to the pristine region. The sensitivity enhancement is mainly dominated by the PL enhancement, while the variations in contrast and linewidth have a lesser impact.
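Given the Lorentzian fit parameters of the ODMR dip, Eqs. (2) and (3) can be evaluated as in the short sketch below; the numerical values in the example are placeholders, not measured quantities.

```python
import numpy as np

GAMMA_HZ_PER_T = 2.8e10    # NV gyromagnetic ratio g*mu_B/h (2.80 MHz/G)

def shot_noise_sensitivity(fwhm_hz, contrast, photon_rate):
    """Eq. (2): magnetic sensitivity in T/sqrt(Hz) for a Lorentzian ODMR dip."""
    return (4.0 / (3.0 * np.sqrt(3.0))) * fwhm_hz / (GAMMA_HZ_PER_T * contrast * np.sqrt(photon_rate))

def effective_sensitivity(fwhm_hz, contrast, pl_amplitude):
    """Eq. (3): setup-independent figure of merit in a.u./sqrt(Hz)."""
    return fwhm_hz / (contrast * np.sqrt(pl_amplitude))

# Placeholder fit values (assumption): 8 MHz linewidth, 2% contrast
print(shot_noise_sensitivity(8e6, 0.02, 1e12))
print(effective_sensitivity(8e6, 0.02, 0.5))
```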
The same set-up is used to estimate \(\mathrm{T}_{1}\) across the sample. We use the standard all-optical measurement protocol, reported for example in [53]: a first laser pulse is used to polarize the NV\({}^{-}\), and a second pulse is used to read out the spin state after a variable time delay \(\tau\). \(\mathrm{T}_{1}\) is obtained by fitting the decay of the readout PL versus \(\tau\) with a single exponential function. In Fig. 7d, the extracted \(\mathrm{T}_{1}\) values at different distances from the maximally irradiated region are reported. Interestingly, \(\mathrm{T}_{1}\) presents a non-monotonic behaviour with two minima at \(x\sim\pm 250\)\(\mu\)m and a local maximum at \(x\sim 0\)\(\mu\)m. This is in contrast with the findings of [55], where a monotonic reduction of \(\mathrm{T}_{1}\) was observed when increasing the irradiation fluence. However, the study from [55] was based on CVD diamond irradiated with low energy 200 keV electrons. We hypothesize that other defects, such as vacancy clusters, introduced under 155 MeV irradiation may be responsible for the anomalous evolution of \(\mathrm{T}_{1}\) in the highly irradiated region. However, the mechanism behind the rise of \(\mathrm{T}_{1}\) remains to be clarified in future studies.
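The all-optical \(\mathrm{T}_{1}\) estimate thus amounts to fitting the readout PL versus the delay \(\tau\) with a single exponential, as sketched below on synthetic data (the decay time and noise level are placeholder assumptions).

```python
import numpy as np
from scipy.optimize import curve_fit

def t1_decay(tau, amplitude, t1, offset):
    """Single-exponential relaxation of the readout PL."""
    return amplitude * np.exp(-tau / t1) + offset

def fit_t1(tau_s, pl_signal):
    p0 = [pl_signal[0] - pl_signal[-1], tau_s[-1] / 3.0, pl_signal[-1]]
    (amplitude, t1, offset), _ = curve_fit(t1_decay, tau_s, pl_signal, p0=p0)
    return t1

# Synthetic example with T1 = 1 ms (assumption)
tau = np.linspace(0.0, 5e-3, 50)
rng = np.random.default_rng(1)
signal = t1_decay(tau, 1.0, 1e-3, 0.2) + 0.01 * rng.normal(size=tau.size)
print(fit_t1(tau, signal))
```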
Figure 8: Optically detected magnetic resonance (ODMR) spectrum, fitted with a Lorentzian curve, from the maximally irradiated region (\(x=0\)). To remove the degeneracy a bias magnetic field B \(\sim\) 5 mT was applied.
Figure 9: a-b) PL spectra for different irradiation fluences under 200 keV electrons from the HPHT (a) and the CVD samples (b). Note the different vertical scales. c) NV\({}^{-}\) concentration as a function of irradiation fluence for the HPHT (red circles) and the CVD (blue squares) diamonds. Experimental data are linearly fitted (red line), assuming same slope as the one predicted by the simulation (Fig. 2c). The dark green line corresponds to the theoretical values for the 155 MeV case, assuming the same [V] to [NV\({}^{-}\)] conversion yield as in the 200 keV case. d) \(\xi\) as a function of irradiation fluence for the HPHT (red circles) and the CVD (blue squares) diamonds. The experimental values corresponding to Sample 1 are reported as a diamond symbol in panels (c,d).
### Low energy electron irradiation
We now turn to the discussion of the two samples irradiated under TEM by 200 keV electrons. In this case, UV/Visible absorption spectroscopy is not possible due to its poor spatial resolution vs. the small dimension of the irradiated areas. We use PL spectroscopy to estimate the NV concentration and the charge state conversion efficiency \(\xi\) in the different irradiation areas. Fluences in the range \(1\cdot 10^{16}\) to \(5\cdot 10^{20}\) e/cm\({}^{2}\) are investigated. However, the irradiated areas could only be clearly identified by their PL signal for fluences higher than \(1\cdot 10^{17}\) e/cm\({}^{2}\) (resp. \(1\cdot 10^{18}\) e/cm\({}^{2}\)) for the HPHT (resp. CVD) sample.
In Fig. 9a-b, we report the spectra corresponding to different irradiation fluences, for both the HPHT and CVD substrates. The Debye-Waller decomposition analysis presented in the previous section allows us to quantify the NV\({}^{-}\) concentration and charge state ratio \(\xi\) as a function of irradiation fluence for both samples (Fig. 9c-d). In the HPHT sample, where the initial nitrogen concentration is higher, increasing the fluence leads to a continuous and approximately linear increase in PL intensity, i.e. in NV\({}^{-}\) concentration, with a maximum PL enhancement of \(\sim 4\) orders of magnitude and a maximum [NV\({}^{-}\)]\(\sim 3\) ppm, without any sign of saturation for the explored range of fluences (red circles in Fig. 9c). Experimental data are linearly fitted, assuming the same slope as the theoretical curves reported in Fig. 2c. Comparing the measured [NV\({}^{-}\)] with the simulated [V], we infer that the conversion yield from created vacancy to NV\({}^{-}\) is \(\sim 1.7\cdot 10^{-3}\). The excess of nitrogen in the HPHT diamond provides sufficient electron donors to ensure a high [NV\({}^{-}\)]/[NV\({}_{tot}\)] ratio (\(\xi>0.9\), red circles in Fig. 9d).
On the contrary, for the CVD sample, we observe a sublinear increase of PL intensity vs. irradiation fluence with a saturation at the highest fluence values (blue squares in Fig. 9c). Moreover, a growing contribution from the NV\({}^{0}\) peak to the PL spectrum reflects a decrease in the charge state conversion efficiency \(\xi\) from 0.85 to 0.4 (blue squares in Fig. 9d). These observations confirm that the nitrogen content is a limiting factor for increasing [NV\({}^{-}\)] following electron irradiation and annealing, as it limits the charge state conversion efficiency.
From Fig. 1c-d, the simulated number of created vacancies, close to the surface, at 155 MeV exceeds the number of created vacancies at 200 keV by a factor of roughly 30 (dark green vs red lines in Fig. 9c). Experimentally, we measure a difference in [NV\({}^{-}\)] of \(\sim 60\) times for the same HPHT diamond and same irradiation fluence (black diamond in Fig. 9c), which is in qualitative agreement with the simulation. The factor 2 of discrepancy can be explained by uncertainty in the irradiation fluence and/or different vacancies formation and recombination mechanisms in the two energy regimes. We notice a slightly lower value of \(\xi\) for the 155 MeV process with respect to 200 keV on the HPHT sample (black diamond in Fig. 9d). It is possible that the ultra high energy irradiation creates other vacancy-containing defects that also behave as electron acceptors, thus lowering the fraction of NV\({}^{-}\). As shown in Fig. 7b, moving few micrometers apart from the maximally irradiated region, \(\xi\) increases, reaching values higher than 0.98. The current simulation provides the concentration of vacancies, but cannot infer anything about their possible aggregation states. This aspect may be relevant and should be addressed in a dedicated work.
## 5 Conclusion and outlook
In this work, we have explored the impact of ultra high energy electron irradiation (155 MeV) on diamond with a focus of its application for NV-based quantum sensing. We developed a simulation model able to faithfully describe this regime, we performed irradiation and annealing of an HPHT diamond substrate with large nitrogen content, characterized the NV\({}^{-}\) properties in the obtained sample, and compared the results with those obtained under low energy (200 keV) electron irradiation. The main finding is that 155 MeV irradiation yields approximately 60 times higher NV\({}^{-}\) concentration compared to 200 keV irradiation of the same diamond at the same fluence, while maintaining near ideal charge state conversion ratio (\(\xi>0.95\)). Even though an increase in broadband absorption and some background emission was observed, possibly due to an increased density of vacancy complexes, their effect on estimated ODMR sensitivity was not significant. The main advantage of the extremely high energy electrons is their huge penetration depth, estimated here to be more than ten centimeters.
Accordingly, this approach may be used in the future for batch irradiation of hundreds of substrates stacked together, with a potential for cost reduction (provided that an affordable source of ultra high energy electrons is accessible). This energy range may also present some advantages when irradiating diamond-containing heterostructures in order to reduce electron absorption and/or scattering in other layers.
Due to the limited access to the DESY irradiation facility we could explore only one irradiation energy and fluence. It would be interesting to vary these parameters, in order to better understand their impact and optimize the NV\({}^{-}\) creation while reducing the induced lattice damage. Our experiments and simulations also show that [NV\({}^{-}\)] after irradiation and annealing is limited by [N] in the initial substrate, with a maximal conversion efficiency from nitrogen to NV\({}^{-}\) limited to 0.3%. This could be improved with in-situ annealing, as suggested by previous studies on HPHT samples [28]: with this technique a conversion efficiency higher than 1 % should be achievable.
Finally, we observed an unexpected, non-monotonous behaviour for the \(T_{1}\) relaxation time vs. effective irradiation fluence at 155 MeV. Further experiments and/or modelling should help understanding the nature and density of the different defects in the diamond lattice and how they may impact T\({}_{1}\). Most notably, techniques like electron-paramagnetic resonance and electron spin resonance can be valid tools for this purpose [56, 57] and should be employed to detect the presence of vacancy-related defects, e.g. single vacancies and vacancy clusters that are likely created by the highly accelerated electrons and secondary displaced atoms. The numerical model that we developed allows to explain our experimental observations in terms of NV density and charge state conversion efficiency, but cannot capture the formation of such vacancy-related defects. More refined models could be developed in order to better understand what is happening at the atomic level.
## Acknowledgement
This project has received funding from the EPFL Interdisciplinary Seed Funds, from the EPFL Center for Quantum Science and Engineering, from the Swiss National Science Foundation (grants No. 98898 and 204036) and from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 754354. We sincerely thank the EPFL Crystal Growth Facility (especially Yoan Trolliet), for the help provided in the annealing process. We acknowledge Umut Yazlar from Appsilon B.V. (Delft, Netherlands) for his valuable contribution in providing single crystal CVD diamond samples with desired characteristics.
|
2308.13484 | Ultra-clean assembly of van der Waals heterostructures | Layer-by-layer assembly of van der Waals (vdW) heterostructures underpins new
discoveries in solid state physics, material science and chemistry. Despite the
successes, all current 2D material (2DM) transfer techniques rely on the use of
polymers which limit the cleanliness, ultimate electronic performance, and
potential for optoelectronic applications of the heterostructures. In this
article, we present a novel polymer-free platform for rapid and facile
heterostructure assembly which utilises re-usable flexible silicon nitride
membranes. We demonstrate that this allows fast and reproducible production of
2D heterostructures using both exfoliated and CVD-grown materials with perfect
interfaces free from interlayer contamination and correspondingly excellent
electronic behaviour, limited only by the size and intrinsic quality of the
crystals used. Furthermore, removing the need for polymeric carriers allows new
possibilities for vdW heterostructure fabrication: assembly at high
temperatures up to 600{\deg}C, and in different environments including
ultra-high vacuum (UHV) and when the materials are fully submerged in liquids.
We demonstrate UHV heterostructure assembly for the first time, and show the
reliable creation of graphene moir\'e superlattices with more than an order of
magnitude improvement in their structural homogeneity. We believe that broad
adaptation of our novel inorganic 2D materials assembly strategy will allow
realisation of the full potential of vdW heterostructures as a platform for new
physics and advanced optoelectronic technologies. | Wendong Wang, Nicholas Clark, Matthew Hamer, Amy Carl, Endre Tovari, Sam Sullivan-Allsop, Evan Tillotson, Yunze Gao, Hugo de Latour, Francisco Selles, James Howarth, Eli G. Castanon, Mingwei Zhou, Haoyu Bai, Xiao Li, Astrid Weston, Kenji Watanabe, Takashi Taniguchi, Cecilia Mattevi, Thomas H. Bointon, Paul V. Wiper, Andrew J. Strudwick, Leonid A. Ponomarenko, Andrey Kretinin, Sarah J. Haigh, Alex Summerfield, Roman Gorbachev | 2023-08-25T16:47:12Z | http://arxiv.org/abs/2308.13484v1 | # Ultra-clean assembly of van der Waals heterostructures
**Abstract:** Layer-by-layer assembly of van der Waals (vdW) heterostructures underpins new discoveries in solid state physics, material science and chemistry. Despite the successes, all current 2D material (2DM) transfer techniques rely on the use of polymers which limit the cleanliness, ultimate electronic performance, and potential for optoelectronic applications of the heterostructures. In this article, we present a novel polymer-free platform for rapid and facile heterostructure assembly which utilises re-usable flexible silicon nitride membranes. We demonstrate that this allows fast and reproducible production of 2D heterostructures using both exfoliated and CVD-grown materials with perfect interfaces free from interlayer contamination and correspondingly excellent electronic behaviour, limited only by the size and intrinsic quality of the crystals used. Furthermore, removing the need for polymeric carriers allows new possibilities for vdW heterostructure fabrication: assembly at high temperatures up to 600degC, and in different environments including ultra-high vacuum (UHV) and when the materials are fully submerged in liquids. We demonstrate UHV heterostructure assembly for the first time, and show the reliable creation of graphene moire superlattices with more than an order of magnitude improvement in their structural homogeneity. We believe that broad adaptation of our novel inorganic 2D materials assembly strategy will allow realisation of the full potential of vdW heterostructures as a platform for new physics and advanced optoelectronic technologies.
**Introduction:**
Van der Waals heterostructures are a versatile platform for creating unique quantum and optoelectronic metamaterials with atomically sharp and clean interfaces[1, 2]. Over the last decade, hundreds of proof-of-principle optoelectronic devices with novel functionalities and exciting physical properties have been demonstrated, based both on individual atomically thin crystals and their heterostructures. The fabrication of such artificial solids generally relies on the mechanical stacking of individual two-dimensional materials (2DM). To date, a wide variety of polymer-assisted transfer approaches exist[3], which can be generally categorised into two groups: **1.** where the 2DMs come in direct contact with liquid, often referred to as "wet" transfer[4, 5], and **2.** where the surfaces of the 2DMs remain dry throughout the transfer process, thus termed "dry" transfer[6]. These techniques are employed to pick up grown[7, 8, 9, 4] or exfoliated 2D crystals[5], and can be used successively to assemble multi-layer stacks[10, 11]. In all approaches used to date, a polymer support is used to manipulate the 2DM or vdW heterostructure, creating the
stack and depositing it onto a substrate compatible with the desired function of the final device.
Achieving 2D heterostructures that behave as artificial crystals, rather than independent materials, requires atomically clean interfaces; transistors and LEDs fabricated from vdW heterostructures only operate in those specimen areas free from contamination [3]. Furthermore, many of the most challenging fundamental physics experiments are achieved only when charge carriers exhibit ballistic transport [12], behaviour which is only seen in the limit of extreme cleanliness. Unfortunately, the necessity to use polymers in the 2D transfer and stacking process provides a key source of interlayer contamination. The polymer layers provide excellent adhesion and flexibility, but their residues remain at buried interfaces and/or on the surface of the completed heterostructures (depending on the method used) [3]. A second source of undesirable surface and interfacial contamination comes from the surrounding environment. High and ultra-high vacuum (HV/UHV) processing would provide a route to remove the potential for volatile species to condense on surfaces during the stacking process but it is not compatible with the polymer transfer approaches (due to degassing of the polymer). Consequently, to date, all vdW heterostructures have been assembled in air, inert gases [13, 14] or low vacuum [15]. The strong adhesion between 2DMs causes contaminants to aggregate into localized pockets known as blisters [16] or bubbles [8, 12] consisting of air [16], water [17], and hydrocarbons [2]. This localized contamination limits the size of homogeneous regions with atomically clean interfaces [18] to micrometres, severely restricting the experiments that can be performed and hampering the development of advanced electronic applications where large uniform areas are required for dense and reproducible patterning of circuit elements. For graphene and hexagonal boron nitride (hBN) heterostructures, performing transfers at elevated temperatures (e.g. 110degC) and gradually collapsing the contact front of the polymer (i.e. "slow transfer") [16] has been shown to promote the diffusion of surface contaminants, preventing them from being trapped at the interface as it is formed. The use of higher temperatures could facilitate the formation of larger areas with atomically clean interfaces for a wider range of 2DMs but the use of a polymer carrier limits the permitted temperature to <150degC (varies with the glass transition temperature of the chosen polymer). Besides this, the presence of polymers also prevents the use of other methods to achieve high cleanliness such as aggressive cleaning techniques (e.g. most organic solvents swell/dissolve typically
used polymers). Strategies to remove the contamination from already assembled heterostructures exist[18, 19], however the cleaning process is slow even for micrometre-scale devices, cannot be widely adopted, and risks damaging the structure due to the excessive heat and/or high mechanical stress involved.
Lastly, some of the most exciting recent advances in 2DM have been centred on crystals that experience chemical degradation during exfoliation and transfer in ambient environmental conditions[3, 20]. This problem is often mitigated by operation in pure argon or nitrogen atmospheres[13, 14]. However, when atomically thin, many of the most interesting crystals such as CrI\({}_{3}\) and InSe[21] still degrade when processed in a state-of-the-art inert glove box environment due to the residual oxygen and water. Developing a transfer method compatible with UHV is therefore essential to enable the characterisation and exploitation of the unique electronic, optical, and magnetic properties of atmospherically sensitive next generation 2DMs.
In this work we present a new technology platform for fabricating ultra-clean 2D material heterostructures which entirely avoids the use of any organic materials. This approach allows far greater flexibility in the transfer conditions than has been possible to date, including the first assembly in ultra-high vacuum (UHV) at a pressure of 10\({}^{-10}\) mbar. To replace the polymers, we employ inorganic flexible SiN\({}_{\mathrm{x}}\) membranes covered with thin metallic films. The film thickness and composition can be changed to allow precise tuning of the adhesion properties, enabling high transfer reliability with nearly 100% yield for a wide range of 2DMs. We exemplify this technique by performing transfers in air, glove box and UHV environments at temperatures up to 350degC, enabling reliable fabrication of devices with clean areas limited only by the size of the crystals used. All studied graphene devices have high quality and demonstrate carrier mobilities higher than 1.0\(\times\)10\({}^{6}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) at 5 K. In addition, we demonstrate dramatic increases in the atomically clean areas of complex multilayer heterostructures, and moire uniformity in twisted devices, as well as demonstrate scalable transfer of CVD grown materials. We envisage this technique becoming a new standard in 2DM nanofabrication, removing the cleanliness 'bottlenecks' which have been plaguing researchers for the last decade, as well as allowing for alternative scale-up pathways[22, 23].
### Inorganic assembly in air/glove box
The transfer relies on chemically inert, flexible and transparent SiN\({}_{\mathrm{x}}\) membranes. For smaller heterostructures built from mechanically exfoliated crystals with lateral dimensions of a few tens of micrometres, the membranes are typically shaped into cantilevers, each 320-480 \(\upmu\)m long by 160 \(\upmu\)m wide, protruding from a silicon chip (Fig.1a). These are fabricated at wafer-scale using commercially available silicon wafers coated with 500 nm low pressure CVD-grown SiN\({}_{\mathrm{x}}\) on both sides. Optical lithography and reactive ion etching (RIE) are used to pattern the nitride layers into the required cantilever geometry before wet etching is employed to selectively remove the underlying silicon and release the cantilevers (see Supplementary Section 1).
The strong adhesion between 2DMs and polymer membranes is the key factor that has made polymer-based transfer methods universal. In comparison, bare SiN\({}_{\mathrm{x}}\) presents poor adhesion to 2DMs. To address this problem, we coat the cantilevers with a tri-metal stack consisting of 1 nm Ta, 5 nm Pt and 0.1-1 nm Au (Fig.1b-d). Gold generally provides strong adhesion to 2DMs[24, 25] but adjusting its thickness allows us to tune the adhesion strength for a specific 2DM/substrate combination (higher gold surface density leads to stronger adhesion). This is highly beneficial as it allows the completed heterostructures to be released onto various substrates once the desired layer combination is assembled. The role of the Pt layer is to compensate for the variable SiN\({}_{\mathrm{x}}\) roughness and ensure consistent surface quality between different SiN\({}_{\mathrm{x}}\) substrates. Additionally, the presence of Pt facilitates the catalytic decomposition of mobile surface hydrocarbons[26] at elevated temperatures. Finally, Ta serves as the adhesion layer for Pt. An AFM image showing the roughness of the cantilever surface is shown in Fig.1e, with a typical root mean square (RMS) deviation of 130-140 pm. Energy dispersive X-ray spectroscopy (Supplementary Figure S11) shows a uniform distribution of all three metals across the surface, which is not the case if the Pt layer is removed, due to the tendency of Au to form nanoclusters.
Figure 1: Inorganic assembly of vdW heterostructures. _(a) SEM micrograph of several cantilevers protruding from a silicon chip. (b) Schematic and (c) cross-sectional high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) image showing the multilayer metallic coating of the cantilever holding a 2DM specimen (the sample shown is a multilayer MoS\({}_{2}\) crystal). (d) Elemental mapping of the region presented in (c) using energy dispersive X-ray spectroscopy. (e) AFM micrograph of the cantilever surface after the coating process. The root mean square roughness value (R\({}_{\rm RMS}\)) is indicated on the image. **(f-h)** Steps employed to pick up an hBN crystal onto the fabricated cantilever: (f) alignment, (g) contact and (h) lift-off. SEM (l) and optical (m) micrographs of the cantilevers after pick-up of a thick (~40 nm) hBN crystal. (i, j) Steps involved in the pick-up of a graphene crystal: alignment (i), contact and lift-off (j). **(n)** Optical micrograph showing the cantilever in contact with the graphene (edges highlighted with dashed line) on SiO\({}_{2}\). The flexible nature of the cantilevers allows accurate control of the lamination process. (k) Graphene/hBN stack deposited onto the bottom hBN crystal. The lamination process is stopped before the entire bottom hBN crystal is covered by the cantilever to selectively release the stack instead of picking it up. (o) Optical micrograph showing the resulting heterostructure on an oxidised silicon wafer presenting large uniform areas. Further data on additional samples can be found in Supplementary Section 2._
We benchmark the polymer-free transfer approach by demonstrating the exceptional cleanliness achievable for the archetypal hBN/graphene/hBN vertically stacked heterostructure, fabricated from mechanically exfoliated 2D crystals on an Si/SiO\({}_{2}\) substrate. As illustrated in Fig.1f, firstly the commercial 2D crystal transfer set-up (see methods) is used to align the custom cantilever over a target hBN crystal (Fig.1g) and lowered with a 20\({}^{\circ}\) tilt
until contact is made at 120-150\({}^{\circ}\)C. The cantilever is raised after a few seconds, peeling the crystal away from the substrate (Fig.1h). We find that the optimal gold thickness for transferring most 2DMs onto SiO\({}_{2}\) is 0.65 nm, providing >95% successful transfers based on over 200 operations. The cantilever with the hBN attached is then used to pick up the next layer in the heterostructure (graphene, as shown in Fig.1i,j) and the resulting stack is then deposited onto the bottom hBN by exploiting the greater adhesion to this larger bottom crystal (shown in Fig.1k). Fig.1o shows an optical micrograph of the finished stack consisting of a large clean region with no contamination or bubbles over an area larger than 25\(\upmu\)m \(\times\) 40\(\upmu\)m, which has been formed despite a very fast (a few seconds) lamination speed. The absence of polymers during the processing allows for much higher temperatures to be employed: for the stack in Fig.1 we used 150\({}^{\circ}\)C for the first hBN, 120\({}^{\circ}\)C for the graphene and 230\({}^{\circ}\)C for the second hBN. For transfers in air or inert gas, we have tested transfer temperatures up to 300\({}^{\circ}\)C and observed the complete disappearance of hydrocarbon bubbles above ~220\({}^{\circ}\)C. However, heating above 150\({}^{\circ}\)C in argon during the pick-up of monolayer graphene from SiO\({}_{2}\) may produce microcracks in the graphene, presumably due to the increase in adhesion of graphene to SiO\({}_{2}\)[27] (see SI section 2 for more details). A further advantage of our approach is the improved positioning accuracy compared to polymer-based methods. We find that this is limited only by the optical resolution of the microscope used (better than 400 nm) as the inorganic SiN\({}_{x}\) membrane displays negligible drift and warping when heated and compressed (see supplementary video 1). After the transfer, membranes can be cleaned in Ar/O\({}_{2}\) plasma and reused multiple times.
To characterize the quality of the encapsulated graphene, we fabricated 4 multi-terminal Hall-bar devices with various dimensions in an argon environment, and studied them at 5 K. For 3 of the samples, displaying a width \(\leq\)12.5 micrometres, we have found that the carrier mean free path (\(l\)) is limited by their physical dimensions, and the field-effect mobility follows the characteristic dependence \(\mu=l\,(2e/h)\sqrt{\pi/n}\) (Fig. 2a), reaching well over 10\({}^{6}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) at low carrier concentrations. This behaviour is typical for current state-of-the-art devices, with Hall-bar dimensions limited to ~20 micrometres when using polymer transfer[10, 28, 29]. However, for a larger device, with dimensions 38 \(\times\) 33 \(\upmu\)m, we do not see such saturation; in fact, the carrier mobility increases with carrier density, reaching up to \(\approx\) 3\(\cdot\)10\({}^{6}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\). This behaviour suggests that a different scattering mechanism becomes relevant for large devices,
probably related to carbon and oxygen substitutional impurities intrinsically present in the hBN encapsulation[30].
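As an illustration of the mobility-mean-free-path relation quoted above (and not part of the measurement code), the expression \(\mu=l\,(2e/h)\sqrt{\pi/n}\) can be inverted to estimate \(l\) from a measured mobility; the numbers below are representative values only:

```python
# Illustrative sketch (not the paper's analysis code): relating graphene field-effect
# mobility to the carrier mean free path via mu = l*(2e/h)*sqrt(pi/n).
import numpy as np

h = 6.62607015e-34   # Planck constant (J s)
e = 1.602176634e-19  # elementary charge (C)

def mean_free_path(mu_cm2, n_cm2):
    """Mean free path (metres) from mobility (cm^2 V^-1 s^-1) and density (cm^-2)."""
    mu = mu_cm2 * 1e-4   # -> m^2 V^-1 s^-1
    n = n_cm2 * 1e4      # -> m^-2
    return mu * (h / (2 * e)) * np.sqrt(n / np.pi)

# Example: a mobility of 1e6 cm^2/Vs at n = 1e12 cm^-2 corresponds to l ~ 12 um,
# i.e. comparable to the width of the smaller Hall bars, consistent with
# boundary-limited (edge-scattering) transport.
print(mean_free_path(1.0e6, 1.0e12) * 1e6, "um")
```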
Despite many research articles demonstrating outstanding proof-of-principle optoelectronic performance, the random network of contamination bubbles on the micrometre length-scale renders promises of such applications unrealistic. Electronic-grade materials require nearly 100% areal uniformity for the design of dense circuit elements such as optical emission arrays or transistors, which makes even simple heterostructures such as polymer-assembled encapsulated graphene unusable for high-end electronic applications. Even more challenging is the future of applications for multi-layer heterostructures containing 2D semiconductors, such as vertical Light-Emitting Devices[31, 11], typically displaying very small clean areas (<2 \(\upmu\)m \(\times\) <2 \(\upmu\)m, with over 50% of the area covered by contamination bubbles) and taking several days or weeks to fabricate. Here, we assemble a complex structure, consisting of 8 individual 2D layers with different thicknesses ranging from bulk to a monolayer, where the optically active layers are two twisted few-layer MoS\({}_{2}\) crystals (outlined with the yellow and orange dashed lines in Fig.2b). The entire heterostructure, fabricated in an argon atmosphere, displays no bubbles, including a completely clean 9\(\times\)15 \(\upmu\)m area where all the eight layers overlap, as confirmed by overall AFM and local cross-sectional STEM measurement (Fig.2c,d). A comparison of the topography to similar devices fabricated using polymer based transfer techniques in the same atmosphere is presented in Figure S6. The absence of contamination can also be seen in another fabricated LED structure (Fig.2e,f), featuring monolayer WS\({}_{2}\) as the optically active layer, and showing an overall overlap area of 25\(\times\)30 \(\upmu\)m. Both of these assemblies took only 10 minutes to complete and resulted in over a 100-fold increase of uniform heterostructure area, limited in this instance only by the size of exfoliated crystals. When a bias voltage is applied to the graphene layers we observe a bright uniform electroluminescence signal with a terrace-like distribution of the intensity across the device (shown in Fig.2g), which is related to the change in thickness of the tunnelling hBN layers. Narrow exciton emission lines (4-5 meV) confirm the state-of-the-art optoelectronic quality of the device (Fig.2h), similar to that recently reported in small samples[32], with a slight variation in the emission energy commonly attributed to the non-uniform impurity distribution in the original crystal[33]. This exemplifies the importance of our technique, as the
advanced optical performance previously only observed on selected specimen locations ~2 \(\upmu\)m large can now be realistically considered in the context of advanced applications.
The developed transfer process was also tested for a variety of air-sensitive 2D materials that require fabrication in an argon environment (e.g. black phosphorus, WSe\({}_{2}\), NbSe\({}_{2}\)) exfoliated on different substrates, including SiO\({}_{2}\) and polymers such as PMMA and PPC spin-coated onto Si wafers. In all cases we observe a complete absence of, or a considerable reduction in, hydrocarbon pockets (see Supplementary Fig.S5a-c). Lithographically patterned 2D crystals can also be transferred (Fig.S5d-e), however one needs to be aware that some dry etching processes lead to chemical bonding of 2D crystals to the substrate's surface along the etched perimeter.
Figure 2: Heterostructures assembled using our fully inorganic transfer technique in air and argon. (a) Carrier mobility at 4 K for 4 graphene devices. For large devices (>20 \(\mu\)m) the mobility is no longer limited by the edge scattering and increases with carrier density. For smaller devices, the extracted values of \(l\) agree well with the dimensions of the studied devices. Points indicate mobility values (calculated using the Drude relation) acquired from published literature where polymeric transfer has been used. Purple circles correspond to samples fabricated at Manchester University, and red to other groups (see Supplementary Table 1 for full breakdown). (b) Optical and (c) AFM micrographs of an 8-layer stack assembled in air. No bubbles are present, although some crystals are folded during the pick-up process. (d) Cross-sectional high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) image showing a cross-section of the device at the location marked with the blue line in (b); graphene and hBN layers are indistinguishable due to the close atomic mass values. Local contrast enhancement has been applied to the image to highlight the layered structure. (e, f) Optical and AFM micrographs of an LED-type structure [11] with monolayer WS\({}_{2}\) selected as the optically active layer. (g) Electroluminescence intensity map for the device shown in (e) showing a highly uniform emission distribution with a pronounced terraced structure due to variation in the hBN barrier thickness (3-5 layers). (h) Example of electroluminescence spectra acquired at the points labelled in (g).
Previous work has shown improved heterostructure device performance when the transfer is performed in organic solvents to reduce wrinkling and strain[34, 35]. However, these experiments were limited to polymer compatible solvents (alcohols, alkanes). In contrast, our fully inorganic transfer method allows transfer when submerged in almost any organic solvent, for example ketones or chloroform, which are more likely to effectively remove contamination[36, 27] during the transfer. Indeed, we have confirmed the transfer also works in a variety of organic solvents incompatible with the polymer-based transfer method (toluene, chlorobenzene, cyclohexane, chloroform, various ketones, etc), although in some cases (mostly for polar solvents) a second micromanipulator was required to push the cantilever down to touch the substrate, presumably due to the formation of a repulsive solvent shell on the solid surfaces. Example of this process performed in acetone can be found in Supplementary Figure S5.
### Ultra-high vacuum assembly
Further improvements in vdW cleanliness are impossible in the presence of atmospheric contamination[37]. To eliminate the interlayer contamination completely, we designed and built a 15-axis micromanipulation setup enclosed within a UHV chamber, with a base pressure of 4\(\times\)10\({}^{-10}\) mbar, Fig.3a. The heterostructure assembly followed the procedure described in Fig.1, using a custom-built UHV optical microscope integrated into the UHV chamber. Further details of the UHV system design can be found in Supplementary Section 3.
Figure 3: Ultra-High Vacuum assembly of van der Waals heterostructures. (**a**) Instrument for performing UHV assembly (containing a 15-axis manipulation 2DM transfer set-up and an integral optical microscope). (**b**) Close-up of the optical lens, cantilever and sample stage. (c) Example of a UHV-fabricated vdW heterostructure: hBN-encapsulated graphene on graphite (serving as a local back-gate). Graphene is outlined with a black dashed line. No hydrocarbon contamination is observed in the entire sample. Inset shows a 25 \(\mu\)m AFM scan of the area highlighted by a blue-dashed box. (**d**) Magnetic focusing experiment performed at 5 K on a heterostructure similar to that in (c), indicating ballistic transport on the scale of >30 micrometres, corresponding to a carrier mobility over 6\(\times\)10\({}^{6}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\). Inset shows the measurement schematic: the electrons injected from one contact are magnetically deflected into the other contact (collector) while the corresponding voltage is measured. The transfer resistance displays multiple focusing peaks, with the first peak corresponding to the direct transfer of electrons from the injector to the collector, and for the subsequent peaks (at higher magnetic field) the electrons scatter from the edge of the device before reaching the collector. (**e**) AFM topography map of a twisted graphene monolayer/bilayer heterostructure placed on a bulk hBN crystal. Numbers provide the x-component of the moire period extracted at the indicated locations using cAFM measurements. Small white bead-like objects are gold clusters resulting from contact deposition using a stencil mask, and the elevated areas are protruding roughness of the SiNx support. Inset shows the optical micrograph of the stack on the cantilever broken on the edge of a silicon wafer. (**f**) Example of a cAFM micrograph collected on the sample shown in (e), used for moire period analysis.
The UHV fabricated vdW heterostructures do not display contamination bubbles across the entire interface (Fig.3c), regardless of the transfer speed and crystals' dimensions. Electronic transport measurements on an exemplar UHV fabricated vdW heterostructure, comprising hBN/graphene/hBN/graphite, show ballistic transport over a scale of ~40 micrometres, seen via magnetic focusing experiments[38] (Fig.3d), and a field-effect mobility of ~3\(\times\)10\({}^{6}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) (at \(n\)=0.5\(\times\)10\({}^{12}\) cm\({}^{-2}\)). To achieve this, all 2D crystals on Si/SiO\({}_{2}\) substrates introduced to the chamber went through UHV annealing at 400\({}^{\circ}\)C and were assembled into heterostructures at 150\({}^{\circ}\)C (Fig.3b). We have found that higher conditioning temperatures (600\({}^{\circ}\)C) result in a noticeable decrease in the low-temperature mobility of graphene (\(\mu\)<4\(\times\)10\({}^{5}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) in the four samples studied). Although the hBN-encapsulated graphene is immune to degradation at high temperatures[39], annealing on the Si/SiO\({}_{2}\) substrate can likely lead to local damage to the graphene lattice.
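For context, the first transverse magnetic focusing peak occurs when the cyclotron diameter matches the injector-collector separation, \(2\hbar k_{F}/(eB)=L\) with \(k_{F}=\sqrt{\pi n}\); the sketch below uses an assumed contact spacing purely for illustration and is not taken from the measurement analysis:

```python
# Illustrative estimate (assumed geometry, not the authors' analysis code) of the
# first transverse magnetic focusing field: the cyclotron diameter 2*r_c equals the
# injector-collector separation L, with r_c = hbar*k_F/(e*B) and k_F = sqrt(pi*n).
import numpy as np

hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C

def focusing_field(n_cm2, L_um):
    n = n_cm2 * 1e4          # carrier density in m^-2
    L = L_um * 1e-6          # contact separation in m
    k_F = np.sqrt(np.pi * n)
    return 2 * hbar * k_F / (e * L)   # Tesla

# e.g. n = 1e12 cm^-2 and an (assumed) 30 um contact spacing give B1 ~ 8 mT;
# seeing several such peaks requires ballistic motion over the full path length.
print(focusing_field(1e12, 30))
```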
This extreme cleanliness is only possible when polymer-free transfer and UHV conditions are combined. To verify this, we have fabricated similar hBN/graphene/hBN heterostructures in UHV, but using the traditional transfer technique with a small hemispherical PMMA droplet deposited on a glass slide instead of the SiNx cantilever (see Supplementary Section 5). Despite using an analogous annealing protocol, we have found a considerable number of hydrocarbon pockets in the heterostructures. This indicates that the use of polymer makes contamination unavoidable even in the UHV environment, since the loose polymer matrix can continue to emit hydrocarbons even after prolonged temperature treatment and pumping.
In addition, we demonstrate the applicability of the fully inorganic transfer approach in UHV environment to producing twisted vdW heterostructures, where perfect cleanliness leads to exceptional uniformity, opening opportunities for new experimental studies. The ability to exploit a small local twist between neighbouring 2DM to generate exquisite physics is one of the degrees of freedom not seen in any other realm of synthesis. Yet the understanding and exploitation of the properties of such structures is held back by the poor specimen uniformity, with micrometre-sized areas often displaying local variations in twist angle up to 0.2\({}^{\circ}\), resulting from wrinkles present due to contamination bubbles[40]. In twisted bilayer graphene, this variation leads to difficulties in studying strongly correlated phenomena such as superconductivity[41] and magnetic states[42], which are extremely sensitive to the twist angle
disorder. We have used our fully inorganic UHV assembly to fabricate a twisted monolayer/bilayer graphene on a thick hBN substrate.
AFM topography of the fabricated twisted bilayer reveals no contamination bubbles over the entirety of the heterostructure (Fig.3e). Additionally, conductive AFM (cAFM) measurements reveal that the moire period of the superlattice is exceptionally uniform (Fig.3f) with the extracted twist angle \(\theta\approx 8.5^{\circ}\), varying by just 0.016\({}^{\circ}\) over a length scale of 10 \(\upmu\)m (areas labelled 1,2,3), and within 0.04\({}^{\circ}\) for the entire area shown. This result represents an order of magnitude improvement compared to the current twist angle disorder achieved with polymer-based transfer techniques.
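For reference, the conversion between a measured moire period and the local twist angle uses the standard geometric relation \(\lambda=a/(2\sin(\theta/2))\) for twisted graphene (lattice constant \(a\approx 0.246\) nm); the short sketch below is illustrative only and not part of the original analysis code:

```python
# Sketch of the standard moire-period <-> twist-angle relation for twisted graphene,
# lambda = a / (2*sin(theta/2)) with a ~ 0.246 nm; for illustration only.
import numpy as np

A_GRAPHENE = 0.246  # nm, graphene lattice constant

def twist_angle_deg(moire_period_nm):
    return np.degrees(2 * np.arcsin(A_GRAPHENE / (2 * moire_period_nm)))

def moire_period_nm(theta_deg):
    return A_GRAPHENE / (2 * np.sin(np.radians(theta_deg) / 2))

# At theta ~ 8.5 deg the period is ~1.7 nm, so the quoted 0.016-0.04 deg angle
# uniformity corresponds to period variations of only a few picometres.
print(moire_period_nm(8.5), twist_angle_deg(1.66))
```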
### Scalability
The presented cantilever geometry can also be employed for transfer of CVD-grown materials, as exemplified in Supplementary Fig.S4, where WS\({}_{2}\) monolayer crystals[43] have been picked up from their growth substrate (SiO\({}_{2}\)) and encapsulated in hBN. However, the free-standing nature of the 500 nm thick SiN\({}_{\mathrm{x}}\) makes it difficult to apply this technique over lateral scales larger than ~200 \(\upmu\)m. To overcome this problem, we have laminated a 20\(\times\)20 mm metal-coated SiN\({}_{\mathrm{x}}\) membrane onto a 1 mm thick PDMS film, Fig.4a. Direct contact of 2D crystals with the PDMS surface is known to produce surface contamination[44] due to the presence of uncrosslinked oligomers within the PDMS bulk. However, the SiNx membrane provides an impermeable barrier for this contamination, allowing us to combine the best of both worlds: the mechanical flexibility and support of PDMS with the ultraclean interface of the metallized SiN\({}_{\mathrm{x}}\). This arrangement has allowed us to pick up a large CVD-grown WS\({}_{2}\) sheet from its growth substrate (SiO\({}_{2}\)) and deposit it onto a ~6L thick WS\({}_{2}\) film grown on a sapphire substrate. The resulting heterostructure, shown in Fig.4b, shows a 0.7\(\pm\)0.1 nm AFM step height for the top layer, suggesting an absence of contamination at the interface, consistent with the 50-60% reduction in the PL intensity in the overlapping area (Fig.4d). Despite the use of PDMS, we are still able to perform transfers at 250\({}^{\circ}\)C, though other grades of PDMS allow for higher working temperatures (e.g. 350 \({}^{\circ}\)C[45]).
However, we found that the roughness of the growth substrate plays an important role: while the process works for flat substrates displaying few nm RMS (e.g. SiO\({}_{2}\)/sapphire), we had difficulties picking-up commercially produced graphene grown on copper foil as its roughness
reaches micrometre scale, thus preventing the uniform contact of the SiN\({}_{\rm x}\) membrane with graphene. However, recent developments in growth of ultraflat metallic surfaces and their use as substrates for CVD growth of 2DMs [46, 47, 48], optimised growth processes for increased large scale uniformity [49, 50], and improvements in direct growth on flat non-metallic substrates [51], should allow for use of inorganic transfer when the materials are made widely available.
Figure 4: **Large area transfer of CVD grown 2D materials.** **(a)** Schematic showing the arrangement of the PDMS-supported SiN\({}_{\rm x}\) membrane. Inset image shows an 18 mm SiN\({}_{\rm x}\) membrane laminated onto a PDMS film. **(b)** A heterostructure consisting of CVD-grown monolayer WS\({}_{2}\) transferred onto CVD-grown few-layer WS\({}_{2}\) on a sapphire substrate, fabricated using the laminates shown in **(a)**. A square grid has been scratched into both layers prior to transfer to enable visualisation. The inset shows the two layers in red and blue to highlight the covered areas. **(c)** Topography of the area in **(b)** indicated by the yellow rectangle. A height profile at the indicated position is shown in the inset, measuring a step of approximately 0.7 nm. **(d)** Integrated intensity map of the primary WS\({}_{2}\) photoluminescence peak around 1.97 eV. **(e)** A thin WS\({}_{2}\) layer encapsulated in thick graphite fabricated using the large area stamping method. **(f)** Map of the intensity of the WS\({}_{2}\) 2LA/E\({}_{2g}\) Raman peak normalised by the Si peak and a representative Raman spectrum from the location indicated by the red cross. **(g)** A CVD-grown monolayer WS\({}_{2}\) transferred onto a mechanically exfoliated graphite crystal. A square pattern has been scratched into the WS\({}_{2}\) monolayer to enable visualisation.
## Conclusion and future outlook
The presented fully inorganic 2D transfer technology platform enables a significant improvement in the cleanliness, size uniformity, and quality of vdW heterostructures assembled in air and inert gases. Furthermore, it allows complete elimination of all interlayer contamination when used in a UHV environment. This development provides access to highly uniform vdW heterostructure devices where the performance is limited only by the intrinsic quality and dimensions of the 2D crystals, which will lead to new research avenues for uncovering novel scientific phenomena.
The SiN\({}_{\mathrm{x}}\)films are readily commercially available at wafer scale, and possess superior chemical, thermal and mechanical stability compared to organic polymers. Their use as transfer membranes will significantly improve the chances of successful development of 2D heterostructures for advanced electronic technologies. While the free-standing "cantilevers" are more suitable for small scale research applications, the technology developed here relies on the general properties of the SiN\({}_{\mathrm{x}}\)/metal interface and can be used in large area laminates, providing a path to wafer-scale transfer. In addition, compatible alternative pathways to scale up have been developed using autonomous robotic searching and assembly of two-dimensional crystals[22, 23]. We envisage that the outstanding reproducibility, cleanliness and speed offered by our technique, combined with machine learning algorithms to guide[52] the decision-making process, can lead to small scale manufacturing of custom high-end optoelectronic devices for applications where large batch production is not required, such as quantum technologies, aerospace and healthcare.
## Methods:
**SiNx cantilever fabrication:** SiN\({}_{\mathrm{x}}\) transfer cantilevers were fabricated from silicon wafers coated on both sides with low stress (non-stoichiometric) SiN\({}_{\mathrm{x}}\) films, purchased from Inseto UK. The nitride layers on either side were patterned using optical lithography and subsequent reactive ion etching to define both the cantilevers and a hard mask for anisotropic etching of the Si. KOH solution was then used to remove the silicon from the area defined by the SiN
mask. After cleaning, the metallic adhesion layer was deposited using direct sequential sputtering of Ta and Pt, followed by e-beam deposition of Au immediately before use. See Supplementary Section 1 for details.
**2D Heterostructure Fabrication:** Crystals were mechanically exfoliated onto oxidized silicon wafers. For assembly in air or argon atmosphere, cantilevers were used to pick-up/drop-off 2D crystals using a standard micromanipulator system[3] with a temperature-controlled stage. For UHV assembly, exfoliated crystals were loaded into a custom UHV system and annealed in a load-lock (pressure ~10\({}^{-7}\) mbar) prior to introduction into the main UHV chamber (pressure ~10\({}^{-10}\) mbar) to remove atmospheric hydrocarbon contamination and adsorbed H2O. Separately, a metal-coated SiNx cantilever was loaded and annealed prior to introduction to the main chamber via the same load-lock. After annealing, cantilevers/substrates were loaded into the top/bottom precision manipulators respectively, using a combination of linear and 'wobble-stick' style UHV manipulator arms. For stacking operations, the manipulators were controlled using a custom LabView-based program with the substrate/stamp positioned underneath an array of UHV objective lenses (5/20/50/100 X magnification) for inspection and precise alignment of the 2D crystals. Cantilever and 2D crystals were brought into contact at an elevated substrate temperature, ~150degC, to form the heterostructure stacks. Between each stacking step the partially assembled heterostructure was annealed to remove surface contaminants. Full technical details of the UHV transfer procedure and system, together with the annealing recipes for each step can be found in Supplementary Sections 3-5.
**Transport Device Fabrication:** After assembly, hBN/graphene/hBN stacks were dropped off from the cantilever onto a large graphite crystal (50-100 nm thick) pre-exfoliated onto a Si/SiO2 wafer and used for applying the back gate. The hBN/graphene/hBN/graphite heterostructures were then processed into standard Hall-bar devices using e-beam lithography and reactive ion etching. Electrical contacts consisting of 3 nm Cr and 70 nm Au were made to graphene via the one-dimensional contact technique[8].
**UHV cAFM device Fabrication:** Twisted graphene samples for cAFM were fabricated using the tear and stack method [53]. Firstly, a large hBN crystal was picked up on a cantilever. Graphite was mechanically exfoliated onto an oxidised silicon wafer and monolayer / bilayer graphene crystals were identified using optical contrast. The hBN crystal on the cantilever was lined up with the edge of the bilayer section of the graphene crystal and slowly brought into contact
with the surface. The cantilever was then picked up, tearing the graphene along the edge of the hBN crystal. The section of graphene crystal remaining on the substrate was then rotated to the desired angle, realigned with the first half already on the cantilever, and picked up. The cantilever with the hBN/2Lgr/gr heterostructure was then flipped upside down and broken off on the edge of a SiO\({}_{2}\) wafer coated with a metallic adhesion layer (2 nm Ti, 20 nm Au) such that the twisted graphene surface is exposed. To ensure a reliable electrical contact during the c-AFM measurements a stencil mask was employed to deposit a 150 nm Au film contacting the graphene.
**Atomic Force Microscopy:** AFM topography and conductive AFM (c-AFM) maps were acquired on twisted graphene (TG) samples deposited on top of hBN flakes using a Cypher-S AFM (_Oxford Instruments_) in a cleanroom environment. Topography scans were acquired in AC mode, while the c-AFM measurements were acquired in contact mode with a constant tip-sample potential difference of -50 mV and a scan rate of 1.5-1.75 Hz. Values of the local twist angle were calculated from the component of the moire wavelength along the fast scan direction to minimise the effect of temporal drift. Periodograms[54] were calculated for each scanning line and then summed along the slow scan axis to generate a magnitude spectrum in the fast direction. Identified peaks in the spectrum were then fitted with Gaussian functions to measure the peak frequencies, and therefore the spatial periodicity. Where the high-symmetry direction of the moire lattice was not aligned with the fast-scan axis, we calculated the angle \(\alpha\) between the two directions and then applied a trigonometric transformation to obtain the period of the moire lattice, and therefore the local twist angle. Full details are supplied in Supplementary Section 6.
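A minimal sketch of this analysis pipeline (illustrative only, not the authors' code, assuming the image rows are fast-scan lines and the pixel size is known) might look as follows:

```python
# Minimal sketch (not the authors' code) of the moire-period extraction described
# above: per-line periodograms along the fast-scan axis, summed over the slow axis,
# with a Gaussian fit to the dominant spatial-frequency peak.
import numpy as np
from scipy.optimize import curve_fit

def moire_period(cafm_image, pixel_size_nm):
    """cafm_image: 2D array (slow x fast); returns the dominant period in nm."""
    n_fast = cafm_image.shape[1]
    freqs = np.fft.rfftfreq(n_fast, d=pixel_size_nm)          # cycles per nm
    # periodogram of every fast-scan line (mean removed), summed along the slow axis
    lines = cafm_image - cafm_image.mean(axis=1, keepdims=True)
    magnitude = (np.abs(np.fft.rfft(lines, axis=1)) ** 2).sum(axis=0)
    # Gaussian fit around the strongest non-DC peak to refine the peak frequency
    i0 = np.argmax(magnitude[1:]) + 1
    window = slice(max(i0 - 5, 1), i0 + 6)
    gauss = lambda f, A, f0, s: A * np.exp(-((f - f0) ** 2) / (2 * s ** 2))
    popt, _ = curve_fit(gauss, freqs[window], magnitude[window],
                        p0=(magnitude[i0], freqs[i0], freqs[1]))
    return 1.0 / popt[1]   # nm; divide by cos(alpha) if the moire rows are tilted
```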
**Cross-sectional STEM and EDX:** Cross sectional samples were prepared using a focused ion beam (FIB) with decreasing currents to mitigate beam damage induced during milling. Image data and elemental maps were acquired on a FEI Titan G2 80-200 S/TEM with a 4-quadrant Super-X EDX detector, operating at 200 kV accelerating voltage. The ADF STEM data was acquired with a probe current of 180 pA, a semi-convergence angle of 21.5 mrad and an annular dark field (ADF) detector inner angle of 43 mrad. EDX mapping was performed using the same probe conditions, with a pixel dwell time of 40 \(\upmu\)s. In the map shown in figure 1d, individual x-ray counts are plotted as circles according to the colour scheme shown in the
image. Both Mo and S counts are included as Mo due to peak overlap. Further details in Supplementary Section 8.
**Electroluminescence measurements** were performed at 4K, in an Attocube Attodry 1000 system. A Keithley 2614B source-measure unit was used to simultaneously bias and measure the current through the device. The bias was applied to the top graphene contact while the bottom graphene and silicon back gate were grounded. Luminescence spectra were obtained using a Princeton instruments Acton SpectraPro SP-2500 spectrometer, with a 300g/mm grating, alongside a PyLoN cryogenically cooled CCD camera.
**Competing interests:** The authors declare no competing interests.
**Data and materials availability:** Experimental data, software code for analysis and further information available upon reasonable request from the corresponding author.
**Acknowledgements:** We acknowledge support from European Union's Horizon 2020 Research and Innovation programme: European Graphene Flagship Core3 Project (grant agreement no. 881603), European Quantum Flagship Project 2DSIPC (820378), ERC Starter grant EvoluTEM (no. 715502), ERC Consolidator QTWIST (no. 101001515), ERC Consolidator Grant 3DAddChip (no. 819069), Marie Sklodowska-Curie fellowship PTMCnano (no 751883) and the Royal Society. In addition, we acknowledge support from EPSRC (grant numbers EP/V007033/1, EP/V036343/1), CDT Graphene-NOWNANO and China Scholarship Council (CSC) under grant numbers 202106950021, 201806280036 and 201908890023. This project was supported by the Henry Royce Institute for Advanced Materials, funded through EPSRC grants EP/R00661X/1, EP/S019367/1, EP/P025021/1 and EP/P025498/1. E.T. acknowledges support from OTKA Grant PD-134758 from the National Research, Development, and Innovation Office of the Ministry of Innovation and Technology of Hungary, and the Bolyai Fellowship of the Hungarian Academy of Sciences (Grant No. BO/00242/20/11). |
2306.05992 | Accretion in the Binary System GG Carinae and Implications for B[e]
Supergiants | We simulate the hydrodynamics of the wind flow in the B[e] supergiant binary
system GG~Carinae and obtain the mass accretion rate onto the secondary and the
observed lightcurve. We find an inhomogeneous Bondi-Hoyle-Lyttleton accretion
into a curved accretion tail, and confirm that the accretion rate is modulated
along the orbit, with a maximum close to periastron. We show that the accretion
itself cannot account for the periodical variation in brightness. Instead, we
explain the observed variation in the light curve with absorption by the
accretion tail. Our results are in general agreement with previously derived
stellar masses, orbital parameters, and the system orientation, but imply that
the B[e] supergiant is more luminous. We find an effect related to the orbital
motion of the two stars, in which the accretion tail is cut by the primary and
we term it the Lizard Autotomy Effect. As part of the effect, the primary is
self accreting wind that it ejected earlier. The Lizard Autotomy Effect creates
an outwardly expanding spiral shell made up of broken segments. We suggest that
such a tail exists in other B[e] supergiant systems and can be the source of
the circumstellar material observed in such systems. The accretion also forms a
disc around the secondary near periastron that later vanishes. We suggest that
the formation of such a disc can launch jets that account for the bipolar
structure observed around some B[e] supergiants. | Amit Kashi | 2023-06-09T16:01:28Z | http://arxiv.org/abs/2306.05992v1 | # Accretion in the Binary System GG Carinae and Implications for B[e] Supergiants
###### Abstract
We simulate the hydrodynamics of the wind flow in the B[e] supergiant binary system GG Carinae and obtain the mass accretion rate onto the secondary and the observed lightcurve. We find an inhomogeneous Bondi-Hoyle-Lyttleton accretion into a curved accretion tail, and confirm that the accretion rate is modulated along the orbit, with a maximum close to periastron. We show that the accretion itself cannot account for the periodical variation in brightness. Instead, we explain the observed variation in the light curve with absorption by the accretion tail. Our results are in general agreement with previously derived stellar masses, orbital parameters, and the system orientation, but imply that the B[e] supergiant is more luminous. We find an effect related to the orbital motion of the two stars, in which the accretion tail is cut by the primary and we term it the Lizard Autotomy Effect. As part of the effect, the primary is self accreting wind that it ejected earlier. The Lizard Autotomy Effect creates an outwardly expanding spiral shell made up of broken segments. We suggest that such a tail exists in other B[e] supergiant systems and can be the source of the circumstellar material observed in such systems. The accretion also forms a disc around the secondary near periastron that later vanishes. We suggest that the formation of such a disc can launch jets that account for the bipolar structure observed around some B[e] supergiants.
keywords: stars: massive -- stars: mass-loss -- stars: winds, outflows -- (stars:) binaries: general -- stars: emission-line, Be -- accretion, accretion discs
## 1 Introduction
Massive star evolution is an intriguing process that includes stages that are not yet fully understood (e.g., Smartt, 2009; Owocki, 2011; Davidson & Humphreys, 2012; Vink, 2015; Georgy et al., 2017; Owocki et al., 2019; Farrell et al., 2022; Eldridge & Stanway, 2022). Amongst other massive stars at evolved stages, the B[e] supergiant stars (B[e]SGs or sgB[e]) are rare stars, and only a few tens of them are known. B[e]SGs exhibits optical spectra with strong Balmer emission lines, low excitation permitted emission lines of predominantly low ionization metals, and forbidden emission lines of [Fe II], [O I] and other species, as well as strong infrared excess in their spectrum, strong continua and few if any apparent absorption lines (e.g. Allen & Swings, 1976; Zickgraf et al., 1986, 1996; Lamers et al., 1998; Miroshnichenko, 2006; Zickgraf, 2006; Aret et al., 2016; Humphreys et al., 2017; Oudmaijer & Miroshnichenko, 2017; Kraus, 2019).
As such, and having approximately the same effective temperature but lower luminosities ranging between \(L=10^{4}\)-\(10^{6}\) L\({}_{\sun}\), B[e]SGs may be regarded as the fainter, lower-mass siblings of Luminous Blue Variables (LBVs) (e.g., Zickgraf et al., 1996; Clark et al., 2013; Kraus et al., 2014). As the luminosity ranges overlap it may be difficult to distinguish between them, with the main discriminant being the lack of [O I] emission in LBVs (Humphreys et al., 2017). Lower-mass stars sometimes show properties similar to the B[e] phenomenon, but not being SGs they do not belong to the group, and can be classified as FS CMa stars (e.g., Miroshnichenko, 2006; Korcakov et al., 2022; Miroshnichenko et al., 2023) and might be close binaries that are undergoing, or have undergone in the past, a phase of mass transfer between the components (see also Nodyarov et al., 2022 for a recent example of such a system). Some yellow hypergiants might also appear as B[e]SGs when passing the yellow void in the HR diagram (e.g., Davies et al., 2007; Aret et al., 2017, 2017).
B[e]SGs are surrounded by a circumstellar molecular and dusty disc or ring, but the mechanism by which this circumstellar material was arranged into its shape and structure is not clear (Conti, 1997; Lamers et al., 1998; Kraus et al., 2013; Kraus, 2019). The most stable and most abundant molecule emitting in the near-IR in the discs around B[e]SGs is CO. The spectrum also shows emission from the isotopic molecule \({}^{13}\)CO, indicating that the circumstellar material has been released from the evolved star (Kraus, 2009; Kraus et al., 2022). The rarity of B[e]SGs may indicate that they are a
fast transitional phase in the evolution of massive stars of a certain mass range, characterized by strong mass loss. Further support for this conjecture comes from the fact that many B[e]SGs show not only small-scale molecular and dusty rings, but also large-scale nebulae and ejecta, suggesting previous mass-loss events (e.g., Kraus et al., 2021).
B[e] stars have been found to also have large-scale (up to several pc) gas with various shapes: spherical and ring nebulae, spiral-arms or partial spiral arms, bipolar and unipolar-lobe structures (e.g., Marston and McCollum, 2008). Radial velocities (RV) have been measured in the circumbinary material around B[e] stars, suggesting an outflowing velocity component (Kraus et al., 2010). Observations of the spatial distribution of the circumstellar gas revealed that it is more likely accumulated in multiple rings or arcs, and possibly in spiral arm-like structures (Torres et al., 2018; Kraus et al., 2016; Maraveilas et al., 2018; Limets et al., 2022).
Some B[e]SGs were suggested to be in binary systems, and in this case the circumstellar disc is in fact a circumbinary disc. According to Georgy et al. (2017) it is difficult to explain the asymmetries in the circumstellar medium of B[e]SGs with a single star, since their stellar evolution models predict small rotation rates for blue supergiants at the observed position of B[e]SGs in the H-R diagram. Thus, the similarity between the B[e]SGs, LBVs and FS CMa stars might support a binary origin for the B[e]SGs as well.
GG Carinae (HD 94878) is one of these confirmed binary B[e]SG systems, that is relatively well observed (Swings, 1974; Hernandez et al., 1981; Gosset et al., 1985; Brandi et al., 1987; Machado et al., 2004; Pereyra et al., 2009; Marchiano et al., 2012; Kraus et al., 2013; Porter et al., 2021, 2022). GG Car is considered to be the archetypal B[e]SG system due to the high B[e]SG mass \(M_{1}=24\pm 4\) M\({}_{\sun}\), luminosity \(\log(L_{1}/\) L\({}_{\sun})=5.26\pm 0.19\)(Porter et al., 2021), and complex circumstellar environment. However, its peculiar behavior on both short and orbital time scales suggests that it is an atypical system. Most previous evolutionary studies of GG Car assumed that the two binary components have evolved as pseudo-single stars (Kraus, 2009, 2019; Kraus et al., 2013; Oksala et al., 2013). Porter et al. (2021) brought a different view, modeling the system as a binary with interaction between its components.
According to radiation transfer calculations with the cmfgen code computed by Porter et al. (2021), the observed spectral behavior of GG Car implies that the emission lines are generated by the primary's wind, and the varying amplitudes of the different line species are attributed to their formation at different radii, with less energetic lines forming at larger radii. They also determined the binary orbit by modeling the atmosphere of the primary and the radial velocity variations of the wind lines. They concluded that the period, which was earlier unclear, is \(P=31.01\pm 0.01\) days and the eccentricity is \(e=0.50\pm 0.03\), much larger than earlier estimates (Marchiano et al., 2012; Kraus et al., 2013). From their fit to the orbital solution Porter et al. (2021) also found that the argument of periastron is \(\omega=339.87^{\circ}\pm 3.1^{\circ}\). Grant and Blundell (2022) used the marginalized GP algorithm and found similar orbital eccentricity and \(\omega=334.36^{\circ}\pm 6.1^{\circ}\). The inclination of the system is \(i\approx 60^{\circ}\)(Kraus et al., 2013), but it is not very well constrained, perhaps to within \(20^{\circ}\)(Porter et al., 2021).
To explain the variation in luminosity along the orbit, Porter et al. (2021) suggested that the system undergoes enhanced _mass transfer and accretion_ from the primary to the secondary at periastron. While the primary component of GG Car is well identified, the class of the secondary component is still unknown, though it is much less luminous than the primary. Porter et al. (2021) obtained a preferred set of parameters for both stars in the system as well as the binary orbit (values cited below).
The primary wind parameters obtained by Porter et al. (2021) are unusual as the mass loss is relatively low and the wind is slow. Massive stars with a similar mass loss rate usually have higher wind velocities (e.g. Puls et al., 2008; Owocki, 2015; Vink and Sander, 2021). We therefore test another set of parameters based on the Geneva stellar evolution code (Ekstrom et al., 2012). The simulation results derived from the two sets are then compared to observations.
The GG Car system has a very weak x-ray emission, indicating that it is not a colliding-wind binary such as \(\eta\) Car (Pittard and Corcoran, 2002; Akashi et al., 2006; Kashi and Soker, 2009; Kashi et al., 2021) or a few other LBVs (e.g., Naze et al., 2012). Namely, the secondary is in an evolutionary state in which its wind is very weak compared to the primary and has a low wind momentum ratio. This indicates that the accretion is expected to be Bondi-Hoyle-Lyttleton (BHL), or a fraction of this rate (e.g., Kashi et al., 2022).
The spectroscopic study of Porter et al. (2022) found that GG Car has two circumbinary rings. The rings are orbiting with projected velocities of 84.6 and 27.3 km s\({}^{-1}\) at radii of \(2.8^{+0.9}_{-1.1}\) au and \(27^{+9}_{-10}\) au for the inner and outer rings, respectively. They further found variations in the rings consistent with pumping of the eccentricity of a radially thin circumbinary ring by the inner binary. The ring or rings are far from the binary and its immediate vicinity, which are the focus of our paper, but we will discuss how they might have formed.
In this paper we run numerical simulations of the GG Car binary system, obtaining the geometry of the wind resulting from the binary interaction as well as the accretion onto the secondary. In section 2 we describe the numerical simulation. The results from our two sets of stellar and wind parameters are presented in section 3. We discuss our results in section 4. Our summary and conclusions are given in section 5.
## 2 The numerical simulations
We run the numerical simulation using the hydrodynamic code flash, described in Fryxell et al. (2000). Our 3D Cartesian grid extends over \(x\in(-4,4)\) au, \(y\in(-4,4)\) au and \(z\in(-2,2)\) au. The grid is centered around the secondary star, and the orbital plane is on the \(x\)-\(y\) plane at \(z=0\). To solve the hydrodynamic equations we use the flash version of the split piece-wise parabolic method (PPM) solver (Colella and Woodward, 1984) which is an extension of the Godunov method, that solves a Riemann problem and allows resolving shocks. A multigroup diffusion approximation is used for solving the radiative transfer equation and updating the energy. We also include radiative cooling from Sutherland and Dopita (1993), which is necessary in modeling colliding winds governed by instabilities. The temperature is limited from below to 1 000 K.
We set up the GG Car binary system using two sets of stellar parameters. The first set, designated 'P', is based on the
parameters from Porter et al. (2021), noting that some of them were derived in earlier papers. The binary semi-major axis is \(a=0.61\) au, the orbital period is \(P=31.01\) days and the eccentricity is \(e=0.5\). The stellar masses of the primary B[e]SG star and its companion are \(M_{1}=24\) M\({}_{\sun}\) and \(M_{2}=7.2\) M\({}_{\sun}\), respectively. We note that for the masses the range in the literature was quite large. The effective temperature of the primary is \(T_{\rm eff,1}=23\,000\) K, its radius is taken to be \(R_{1}=27\) R\({}_{\sun}\) and the luminosity is \(L_{1}\simeq 1.8\times 10^{5}\) L\({}_{\sun}\). The secondary is hotter and smaller with \(T_{\rm eff,2}=16\,000\) K, \(R_{2}=7.2\) R\({}_{\sun}\), and \(L_{2}\simeq 3\,000\) L\({}_{\sun}\). The composition used for both stars is appropriate for an evolved star with CNO-processed material and the He abundance is taken to be 50 percent, and the metallicity is \(Z=0.02\). This composition is also used for calculating the opacity, for which an OPAL model is used (Iglesias & Rogers, 1996) and low temperature opacities are also included. The interiors of the stars are excluded from the simulation. The gravitational field is modeled by two point sources at the centers of both stars. The winds are ejected from the grid cells in a narrow spherical shell around each star. Both stars accelerate their winds according to the \(\beta\) velocity law \(v(r)=v_{\infty}\left(1-R_{*}/r\right)^{\beta}\), where \(v_{\infty}\) is the terminal velocity, \(R_{*}\) is the stellar radius, \(r\) is the radial distance measured from the center of the star and \(\beta\) is the wind acceleration parameter. The primary mass loss rate is \(\dot{M}_{1}=2.2\times 10^{-6}\) M\({}_{\sun}\) yr\({}^{-1}\) and its wind velocity has a terminal value of \(v_{1,\infty}=265\) km s\({}^{-1}\) with radiative acceleration corresponding to \(\beta=1\). The secondary mass loss rate is \(\dot{M}_{2}=10^{-8}\) M\({}_{\sun}\) yr\({}^{-1}\), and its wind velocity has a terminal value \(v_{2,\infty}=1\,600\) km s\({}^{-1}\) with an acceleration parameter \(\beta=1\). The grid has 6 levels of refinement that are kept constant with time. The largest cell has a side of \(\simeq 18\) R\({}_{\sun}\) and the smallest has a side of \(\simeq 0.6\) R\({}_{\sun}\). The periastron passage takes place when the primary is at \((-0.305,0,0)\) au, and moving counterclockwise when looking from 'above' the grid (positive \(z\)).
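As a point of reference, the \(\beta\) velocity law used for both winds is simple to evaluate; below is a minimal Python sketch (not part of the flash setup itself) using the set P primary values quoted above.

```python
import numpy as np

R_SUN_AU = 0.00465047  # 1 R_sun in au

def beta_law(r_au, v_inf, r_star_au, beta=1.0):
    """Wind speed v(r) = v_inf * (1 - R_*/r)**beta, for r >= R_*."""
    r_au = np.asarray(r_au, dtype=float)
    return v_inf * (1.0 - r_star_au / r_au) ** beta

# Set P primary: v_inf = 265 km/s, R_1 = 27 R_sun, beta = 1 (values quoted above)
r = np.linspace(27 * R_SUN_AU * 1.05, 0.61, 200)   # from just above R_1 out to the semi-major axis
v = beta_law(r, v_inf=265.0, r_star_au=27 * R_SUN_AU)
print(f"v at a = 0.61 au: {v[-1]:.0f} km/s")        # ~210 km/s, i.e. roughly 0.8 v_inf
```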
The second set of parameters we use, designated 'G', is based on a stellar track from the Geneva code (Ekstrom et al., 2012). Using an evolutionary track of a rotating star with zero age main sequence mass \(M_{1,\rm zAMS}=25\) M\({}_{\sun}\) for the
Figure 1.— _Left:_ Density maps with velocity vectors showing slices in the orbital plane (\(z=0\)), at different times for set P. The times start after two orbits in which we let the system relax, and show the third orbit from periastron (\(t=62\) days), through apastron (middle right, \(t=77\) days) till one day before the next periastron (lower right). The periastron passage takes place when the primary is at \((-0.305,0,0)\) au, and moving counterclockwise when looking from ‘above’ the grid (positive \(z\)). The secondary is at the center, with its radius indicated by a small black circle. The primary, with its radius indicated by a large black circle, orbits the secondary counterclockwise, with periastron occurring when the primary is left of the secondary. The accretion tail is obtained and funnels the primary wind to be accreted onto the secondary. Close to periastron passage (\(t=62\) days) the tail is being cut by the passing of the secondary close to the primary. _Right:_ Focusing on the secondary at the same times as on the panels on the left hand side. Note that the color scheme is the same for both panels but the velocity scale is different, and more velocity vectors are plotted for the focus panels. An accretion disc-like structure around the secondary is obtained shortly after periastron passage. At later times the accretion is inhomogeneous BHL.
primary, we set our simulation at \(M_{1}\simeq 23.6\) M\({}_{\sun}\), when the effective temperature is \(T_{\rm eff,1}=23\,800\) K, closest to the value in model P. The mass loss rate is \(\dot{M}_{1}\simeq 4\times 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\), and the terminal velocity we derive is \(v_{1,\infty}\simeq 1400\) km s\({}^{-1}\). The main difference between the two sets of parameters is therefore that \(v_{1,\infty}\) is much larger for set G. For the secondary we use \(M_{\rm 2,ZAMS}=7.2\) M\({}_{\sun}\), which gives almost the same mass at the time we take for the primary, as it is still a main-sequence star, \(M_{2}\simeq 7.2\) M\({}_{\sun}\). The terminal wind velocity is \(v_{2,\infty}\simeq 900\) km s\({}^{-1}\) and the mass loss rate is \(\dot{M}_{2}\simeq 6.24\times 10^{-12}\) M\({}_{\odot}\) yr\({}^{-1}\). Table 1 gives the full list of parameters in both models. The orbital parameters of set G are similar to those of set P (except for a minor difference of \(<0.7\)% in the semi-major axis). As the secondary is smaller, for set G the grid has 8 levels of refinement, and the smallest cell has a side of \(\simeq 0.14\) R\({}_{\sun}\).
Material flows towards the secondary and is not stopped by radiation braking. The accretion treatment includes measuring the accreted material, its arrival direction and other properties. We quantify the accretion rate according to the method described in Kashi (2019). The secondary is resolved as a sphere (not a sink) and accretion is calculated in each cell around the secondary.
## 3 Results
### Set P
We start with set P. The simulation outputs are exported and post-processed every 0.5 day. Figure 1 shows a slice of the simulation grid on the orbital plane. The right six panels zoom in on the secondary at times corresponding to the left six panels, and have the same density scale, but more velocity vectors. The times given in Figure 1 in the upper left of each panel are during the third orbit, as for two orbital periods we let the system relax from the initial condition that did not include wind from previous orbits but only wind that started to flow from the two stars. Therefore, after two orbits the wind structure is representative of the system, unlike during the initialization of the simulation. The times show the third orbit from periastron (upper left panel in each six panel group), through apastron (middle right) till one day before the next periastron (lower right).
As expected, we find that the accretion takes a geometric form that resembles inhomogeneous BHL accretion (Livio et al., 1986). The primary wind flows from both sides of the secondary and is focused by its gravity into a curved tail. Part of the accretion tail flows towards the secondary and gets accreted, and part of it flows outwards and creates a spiral structure. Accretion also takes place from other directions but at a lower rate. After periastron passage a clear dense accretion disc is obtained around the secondary (see the middle left panel at \(t=65.0\) days). The disc lasts for a few more days and dissipates as the system approaches apastron (middle right panel at \(t=77.0\) days). An accretion tail remains even after the dissipation of the disc, and lasts until the next periastron passage. The flow creates a spiral, with higher density close to the secondary.
Figure 3 shows a 3D contour plot of the high density parts of the simulation close to periastron passage. These dense parts include the strong wind of the primary and the tail of the secondary. There is a dense tail behind the secondary, part of which is an accretion tail towards the secondary and part is outflowing away from it. We can see the secondary (added artificially on the right side that views the system from above the orbital plane) close to periastron passage. As the secondary takes a sharp elliptical turn when coming to periastron and passes close to the primary and its wind, the tail is being cut by the primary and a gap in the tail is created. As this process is reminiscent of the self cutting of a lizard's tail we will refer to it as the _Lizard Autotomy Effect_. The process repeats every periastron passage, creating an outflowing spiral with gaps, that also has a rotating velocity component.
Another effect that we obtain as part of the Lizard Autotomy Effect is _self accretion_: the primary passes through the tail and accretes its own wind. We did not measure quantitatively the accreted mass onto the primary, as our code is not designed to do so (it only measures the mass accretion onto the secondary). The self accreted material comes mostly from one direction - the side of the primary facing the tail. As the density of the tail is much larger than the density of the direct primary wind (Figure 3) it is unlikely that the latter can prevent the accretion.
Figure 4 shows the accretion rate onto the secondary as a function of time. The accretion we found is modulated by the orbital period. Accretion occurs throughout the binary period but is much stronger close to periastron. When the system is close to apastron there is still accretion taking place from residual material that the primary left around the secondary. There is also a temporary formation of an accretion disc around the secondary that dissipates shortly after periastron passage. The residual accretion is from filaments of gas compressed by the orbital motion that remain gravitationally bound to the secondary.
Porter et al. (2021) suggested that the accretion of the primary wind by the secondary is the cause of the photometric variations at the orbital period in GG Car. The photometric variability according to their observations is \(\Delta m_{V}\simeq 0.2\) mag, which corresponds to a decrease of about 20% in luminosity. This conclusion was reached after excluding other options like extinction, scattering, and reflection by circumbinary material. While accretion onto the companion can generally account for an increase in the brightness of a star, or for a dimming of its light, we argue that in the case of GG Car accretion by itself cannot cause the observed brightening.
Using the mass loss rate of the primary \(\dot{M}_{1}=2.2\times 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\) from Porter et al. (2021), the highest mass accretion rate onto the secondary obtained in our simulations is \(\dot{M}_{2,\rm acc}\simeq 8\times 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\). The resulting accretion luminosity is
\[\begin{split} L_{2,\rm acc}&\simeq 25\left(\frac{M_{2}}{7.2 \ \rm M_{\odot}}\right)\left(\frac{\dot{M}_{2,\rm acc}}{8\times 10^{-7}\ \rm M_{\odot}\ yr^{-1}}\right)\\ &\times\left(\frac{R_{2}}{7.2\ \rm R_{\odot}}\right)^{-1}\ \rm L_{ \odot},\end{split} \tag{1}\]
which is a negligible fraction of the primary luminosity \(L_{2,\rm acc}/L_{1}\simeq 1.4\times 10^{-4}\). Moreover, even if all the mass lost from the primary gets accreted onto the secondary it will still not be enough to explain an increase of \(\simeq 20\%\) in luminosity. The other option, dimming of the secondary by the accreted material, is also not enough to account for the observed increase in luminosity, as the secondary luminosity is at most 4800 L\({}_{\odot}\), only \(<3\%\) of the primary luminosity, and even if obscured completely it will not suffice.
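Equation (1) is just \(L_{2,\rm acc}\simeq GM_{2}\dot{M}_{2,\rm acc}/R_{2}\) expressed in solar units; the following is a quick order-of-magnitude check in Python (cgs constants hard-coded, not from the simulation pipeline).

```python
# Order-of-magnitude check of Eq. (1): L_acc ~ G * M2 * Mdot_acc / R2 (cgs units).
G, MSUN, RSUN, LSUN, YR = 6.674e-8, 1.989e33, 6.957e10, 3.828e33, 3.156e7

M2       = 7.2 * MSUN                 # secondary mass
R2       = 7.2 * RSUN                 # secondary radius
Mdot_acc = 8e-7 * MSUN / YR           # peak accretion rate found for set P
L1       = 1.8e5 * LSUN               # primary luminosity (set P)

L_acc = G * M2 * Mdot_acc / R2
print(f"L_acc      ~ {L_acc / LSUN:.0f} L_sun")    # ~25 L_sun
print(f"L_acc / L1 ~ {L_acc / L1:.1e}")            # ~1.4e-4
```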
We therefore suggest another explanation - attenuation of the primary flux by the surrounding material. We suggest that the luminosity does not periodically increase as a result of the accretion onto the companion, but rather periodically decreases as a result of absorption by the surrounding material, mainly in the tail. The origin of the surrounding material is the wind of the primary itself. The gravity of the companion and the eccentric orbital motion creates an accretion structure with disc and tail, as shown in Figure 3. We take the emission to originate at the surface of the primary star, and calculate the optical depth along the line of sight till reaching a boundary of the simulation box
\[\tau(t)=\int\kappa(t)\rho(t)\,dl. \tag{2}\]
We then calculate the attenuation of the observed magnitude by
\[M_{V}(t)=M_{V,0}+A_{V}(t)=M_{V,0}+1.086\tau(t), \tag{3}\]
where \(M_{V,0}\) is the non-obscured magnitude of the star and \(M_{V}\) is the observed magnitude through the absorbing material.
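A minimal sketch of Eqs. (2)-(3) for a discretized ray is given below; the opacity and density arrays along the line of sight are assumed to be extracted from the simulation snapshots, and the numbers in the example are purely illustrative.

```python
import numpy as np

def attenuated_magnitude(m_v0, kappa, rho, dl):
    """Apply Eqs. (2)-(3): tau = sum(kappa*rho*dl), M_V = M_V0 + 1.086*tau.

    kappa : opacities along the ray [cm^2 g^-1]
    rho   : densities along the ray [g cm^-3]
    dl    : segment length [cm] (scalar or array)
    """
    tau = np.sum(np.asarray(kappa) * np.asarray(rho) * dl)
    return m_v0 + 1.086 * tau

# Illustrative values only: a 1 au long uniform tail with rho = 1e-13 g/cm^3, kappa = 0.3 cm^2/g
m_v = attenuated_magnitude(m_v0=8.7, kappa=np.full(100, 0.3),
                           rho=np.full(100, 1e-13), dl=1.496e13 / 100)
```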
The results of our simulation for set P on top of the observed magnitude are shown in Figure 6. The observed visible magnitude is taken from two sources and combined together: archival observations from the Optical Monitoring Camera (OMC) on-board the INTEGRAL satellite (Domingo et al., 2010), marked with blue circles, and the All Sky Automated Survey (ASAS) catalog (Pojmanski, 2003), marked with blue crosses. The observation results are binned into multiple bins, using a running window. Each bin includes 1/15 of the orbit, symmetric on both sides. For each bin the average and the standard deviation are calculated. The result is shown in blue squares with 1\(\sigma\) error-bars. The resulting light curve from our simulation is overplotted. We used four viewing angles with different inclination angles and \(\omega\), within the range obtained by Porter et al. (2021) and Grant and Blundell (2022). We find that \(i=60^{\circ}\) and \(\omega=340^{\circ}\) give the best results out of the four orientations we attempted, in agreement with Porter et al. (2021).
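The running-window binning of the folded photometry described above can be reproduced along the following lines (a sketch; `phase` and `mag` are assumed to be the folded observation arrays, and we read "1/15 of the orbit, symmetric on both sides" as a total window width of 1/15 in phase):

```python
import numpy as np

def running_bins(phase, mag, width=1.0 / 15.0, n_centers=60):
    """Mean and standard deviation of the folded magnitudes in a running phase window."""
    centers = np.linspace(0.0, 1.0, n_centers)
    mean = np.full(n_centers, np.nan)
    std  = np.full(n_centers, np.nan)
    for i, c in enumerate(centers):
        d = np.abs(((phase - c) + 0.5) % 1.0 - 0.5)   # circular distance in phase
        sel = d <= width / 2.0
        if sel.any():
            mean[i], std[i] = mag[sel].mean(), mag[sel].std()
    return centers, mean, std
```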
The implication of our results is that the primary luminosity is in fact \(\simeq 25\%\) larger than previously derived, namely \(\log(L_{1}/\ L_{\sun})=5.36\pm 0.2\). This also indicates that the mass of the primary of GG Car is larger, and by extension also that of the secondary, as the derivation of the latter relies on the mass ratio between the two stars (Porter et al., 2021).
The tail and the spiral structure obtained in our simulations should also be evident in other observations. To show that, we process our simulation for set P to obtain the RV of the absorption component for the He I lines. We assume that the lines originate on the surface of the primary. For each segment of length \(dl\) along the line of sight to the primary we calculate \(d\tau\) and the projected velocity of the gas along the path towards the observer, \(\mathbf{v}_{p}=(\mathbf{v}\cdot\hat{\mathbf{l}})\,\hat{\mathbf{l}}\), where \(\hat{\mathbf{l}}\) is the unit vector pointing along the line of sight, namely \(i=60^{\circ}\) and \(\omega=340^{\circ}\). We thus obtain \(d\tau(v_{p})\) along the line of sight, and get the radial velocity of the absorption. We repeat the process every 0.5 day along the orbital period.
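In code form, one simple summary of the \(d\tau(v_{p})\) distribution is the optical-depth-weighted projected velocity; a sketch follows (the variable names are ours, and the full analysis keeps the whole \(d\tau(v_{p})\) profile rather than only its mean):

```python
import numpy as np

def absorption_rv(kappa, rho, v_cells, dl, l_hat):
    """Optical-depth-weighted projected velocity along one line of sight.

    kappa, rho : (N,) opacity and density of the ray segments
    v_cells    : (N, 3) gas velocity in each segment
    dl         : segment length (scalar or (N,))
    l_hat      : (3,) unit vector pointing towards the observer
    """
    dtau = np.asarray(kappa) * np.asarray(rho) * dl     # Eq. (2), per segment
    v_p  = v_cells @ np.asarray(l_hat)                  # projected velocity of each segment
    return np.sum(v_p * dtau) / np.sum(dtau)            # tau-weighted mean velocity
```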
Porter et al. (2021) showed three trailed spectra of He I lines along the orbital period. The absorption RV showed periodic variation, having larger values close to periastron. In Figure 7 we plot our simulation result over figure 4 of Porter et al. (2021) that shows the trailed spectra of He I 6678 taken by the Global Jet Watch (GJW) telescopes. As we do not use advanced radiation transfer procedure a perfect match to a specific line is not expected. However, our results show that the RV obtained from the simulation (brown line) generally follows the same behaviour, having minimal values close to apastron and maximal values close to periastron passage.
### Set G
We repeat the analysis for parameters set G. Figure 2 shows the density plots for set G. For a large terminal velocity
Figure 3: A 3D view of the simulation with parameters set P. The accretion tail being cut by the primary during periastron passage, shown from two viewing angles. The secondary does not appear in the 3D view, and was added for perspective and to show the scale (\(R_{2}=7.2\) R\({}_{\sun}\)). There is a dense tail behind the secondary, part of which is an accretion tail towards the secondary and part is outflowing away from it. The secondary takes a sharp elliptical turn close to periastron and as a result the nearby primary and its wind cut the tail and create a gap in the tail – the Lizard Autotomy Effect. The process repeats every periastron passage, creating an outflowing spiral with gaps.
\(v_{1,\infty}=1400\ {\rm km\ s^{-1}}\) (set G) the orbital velocity at periastron is much smaller, \(v_{\rm orb,p}\ll v_{1,\infty}\). Therefore a curved tail is obtained, but the primary does not cut it as in set P and the Lizard Autotomy Effect is non-existent. The result is a flow with a narrow curved tail behind the secondary that changes relatively slowly along the orbit. To obtain a Lizard Autotomy Effect similar to that in set P, a much larger eccentricity is required.
Therefore, for set G the mass accretion rate onto the secondary changes almost as \(r^{-2}\) with the orbital separation, and no other effects are involved. The obtained accretion rate is shown in Figure 5. The peak accretion rate for set G is \(\dot{M}_{\rm 2,acc}\simeq 3.4\times 10^{-8}\ {\rm M_{\odot}\ yr^{-1}}\), much smaller than for set P. The reason is the absence of the accretion tail in set G, which leaves no efficient mechanism for focusing material onto the secondary.
In Figure 8 we calculate the obscuration from different lines of sight. We find that the magnitude of the variation is much smaller than the observed one, despite the mass loss rate of the primary being almost twice as large as in set P. None of the lines of sight that we checked (at any inclination) was found to reproduce the observed periodic light curve. Even for \(i=90^{\circ}\), where a maximal eclipse is obtained, the decline is not enough to account for the observed light curve. Moreover, even if we were to change the line of sight such that the eclipse is at phase 0.5 it would not be enough, as the observed decline is larger.
## 4 Discussion
Between the two sets of stellar and wind parameters that we tested, we found that the set of parameters we adopted from Porter et al. (2021) is a much better fit with the observations. For this set we found that the primary cuts the accretion tail close to periastron passage - a process we termed the Lizard Autotomy Effect. A representation of this effect is shown in Figure 3. The Lizard Autotomy Effect that we obtained in this paper causes the primary to self accrete mass from wind that it had blown earlier and formed the tail, and creates an expanding spiral shell with broken segments that expands outwards.
We suggest that such tails exist in other B[e]SG systems
Figure 4: The mass accretion rate obtained in our simulation for set P. The first two orbits (\(t<62\) days) are the initialization of the simulation, where the structure of the wind does not yet represent the system. We ran the simulation for seven orbital periods (the first three are shown in the figure) and from the third orbit we obtain repetitive behaviour of the accretion rate from orbit to orbit. We can see the accretion rate is periodic with the orbit, with a peak just before periastron. A sharp decline following the peak indicates the effect we term the Lizard Autotomy Effect, in which the primary cuts the accretion tail, and by that reduces the mass accretion rate.
Figure 5: Similar to Figure 4, but for the G parameters set. The accretion rate onto the secondary is \(\propto r^{-2}\). The Lizard Autotomy Effect is not evident. The mass accretion rate is much smaller than for set P.
and can be the source of the circumstellar material observed in such systems. The accretion onto the secondary also forms a disc around the secondary (not a circumbinary disc) near periastron that later disperses. Some of the gas remains at later times and forms dense filaments (as seen in Figure 1). We suggest that the formation of such a disc can be accompanied by the launching of two opposite jets perpendicular to the plane of the disc. These two jets can push ambient gas and form a bipolar structure.
Such bipolar structures were observed around some B[e]SGs. For example, Liimets et al. (2022) showed an H\(\alpha\) image of the nebula around MWC 314, a well known B[e]SG, that clearly shows its bipolar structure. Marston & McCollum (2008) observed nebular features of the B[e] star MWC 819 seen in 2001, later to be confirmed as bipolar in a deeper image taken in 2020 by Liimets et al. (2022). The B[e] star LHA 120-83 exhibits noticeable changes in its spectral patterns in both the optical and infrared regions and its P-Cygni line profiles of H I, Fe II, and O I indicate the presence of a robust bipolar clumped wind (Torres et al. 2018). The B[e]SG MWC 137 is surrounded by an optical nebula that was modeled as consisting of two double cones, though might be related to stellar pulsations (Kraus et al. 2017, 2021). Sanchez Contreras et al. (2019) suggested an accretion disc around a compact object and jet-launching as a model for forming the red square nebula around the B[e] star MWC 922.
It is well established that jets can shape bipolar nebulae. The process applies to various astrophysical objects (Frank 1999) including planetary nebulae (e.g., Soker 1990, 1998, 2002; Soker & Livio 1994; Frank et al. 1996; Sahai & Trauger 1998; Akashi & Soker 2008; Soker & Kashi 2012; Balick et al.
Figure 6: The simulation results for the light curve of GG Car for parameters set P. The observed magnitude is taken from OMC (blue circles) and ASAS (blue crosses), and folded to the time of the orbit according to the ephemeris in Porter et al. (2021). The observation results are binned into multiple bins, each including 1/15 of the orbit, symmetric on both sides. For each bin the average and the standard deviation are calculated. The error bars indicate 1\(\sigma\). The light curve follows the binary orbit. The tail and the spiral structure of the wind attenuate the light of the primary. The secondary is a negligible source. The sharp rise close to periastron passage (phases 0, 1, 2) is the result of the primary cutting the tail and being exposed to the observer. The resulting light curve from our simulation is given for different inclination angles and \(\omega\). We find that the orientation \(i=60^{\circ}\) and \(\omega=340^{\circ}\) gives the best results.
Figure 7: The result of our simulation for the RV of the absorbed projected velocity, assuming line of sight in the direction \(i=60^{\circ}\) and \(\omega=340^{\circ}\) (brown line). The results are plotted on top of the radial velocity profile of the He I 6678 line from Porter et al. (2021), taken by GJW (the color background with bluer colors represent absorption and yellower colors represent emission; precise colormaps were not provided). The simulation results follow the observed absorption component, showing minimum values close to apastron (phase 0.5, 1.5) and maximal values close to periastron passage (phase 0, 1, 2).
2019), post-AGB nebulae that involve a sudden anisotropic ejection of stellar gas by a very late AGB (or very early post-AGB) star during a common-envelope phase (e.g., Lee et al., 2009; Sahai et al., 2017; Soker, 2020; Zou et al., 2020; Bollen et al., 2022; Lopez-Cimara et al., 2022), young stellar objects (e.g., Shu et al., 1994; Ray & Ferreira, 2021) Luminous Blue Variables (e.g., Nota et al., 1995; Soker, 2001; Kashi & Soker, 2010), core collapse supernovae (e.g., Gilkis et al., 2019; Akashi & Soker, 2021; Soker, 2022; Akashi et al., 2023), and intermediate-luminosity optical transients (e.g., Soker & Kashi, 2016). For planetary nebulae there are also indications that impulsive ejection of gas creates a bipolar structure (e.g., Akashi & Soker, 2008), a similar idea to what we propose here for GG Car and similar B[e]SGs.
The evidence for bipolar structure around B[e]SGs implies that jets ejected from these stars were very likely involved in their formation. This, in turn, implies there should exist an accretion disc to form these jets, existing either continuously or intermittently. _Our simulation obtained such an accretion disc around the secondary_, which forms every orbit close to periastron passage, and therefore supports the observed bipolar structure of nebulae around B[e]SGs.
As noted earlier, some B[e]SGs show large-scale nebulae and ejecta, suggesting previous mass-loss events (e.g., Kraus et al., 2021). These might be the result of the expanding spiral arms that dilute and merge with those of previous cycles. The spectroscopic study of Porter et al. (2022) found that GG Car has two circumbinary rings. These rings are located further out, beyond the extent of the simulation grid. However, the outflow we find, in the shape of a spiral moving outward, could have formed such rings further out. This is likely if the spirals decelerate and accumulate in the radial direction, to form ring structures.
We suggest that the circumstellar disc around B[e] stars, or at least some B[e] stars, is in fact a cropped spiral created by the interaction of the stellar wind with the companion star. The results here demonstrate that accretion is an important process in massive binaries, and should be considered in every close binary system that has uneven stellar wind momentum, and especially in ones with eccentric orbit.
## 5 Summary and Conclusions
We used hydrodynamic simulations to show that the B[e]SG binary system GG Car experiences accretion onto the secondary close to periastron passage. By that we confirm the earlier suggestion for this process to occur in this system by Porter et al. (2021). We calculated the accretion rate onto the secondary star and found peaks in the accretion rate (Figure 4) as suggested by Porter et al. (2021). An accretion disc-like structure around the secondary is obtained shortly after periastron passage, and dissipates as the system approaches apastron (Figure 1). We also found that an asymmetric spiral structure is formed around the system.
We showed that the variation in the V light curve is not the result of an increase in luminosity from accretion, as the accretion power is not sufficient to account for it. Instead we propose that the primary is more luminous and is dimmed periodically, as the mass it lost is compressed into the accretion tail and the spiral structure, which absorbs its light (Figure 6). The implication is that the primary of GG Car is more luminous by \(\approx 25\%\) than previously derived, and consequently that the stellar masses are larger.
Accretion close to periastron passage is known to occur in other stars. One famous example is \(\eta\) Car, an LBV star that shares similar properties with GG Car. Close to periastron in the eccentric (\(e=0.9\)) and long (\(P=5.54\) yrs) orbit
Figure 8: The simulation results for the light curve of GG Car based on parameters set G. The observations are as in Figure 6. The resulting light curve from our simulation is given for different inclination angles and \(\omega\). The obscuration is periodic and follows the orbit. For \(i=90^{\circ}\) an eclipse is seen at phase \(\simeq 0.87\). We find that for none of the lines of sight is the obscuration sufficient to account for the decline in the light curve.
the star accretes mass from the equatorial region (Kashi & Soker, 2008; Kashi, 2017), and a disc is formed and stays for a few weeks, and accounts for multiple observations across the spectra (Kashi & Soker, 2007a, 2009b), including the variation in He lines (Kashi & Soker, 2007b), as in GG Car.
Porter et al. (2021) suggested that the fact that the blueshifted absorption component in GG Car becomes deeper around periastron passage likely indicates that more mass is being lost from the primary's wind at that time. They concluded that the primary mass-loss rate and the rate of mass being captured by the secondary strongly depend on the phase of the binary due to the eccentric orbit, and therefore we can expect that there will be changes in the brightness due to varying rates of mass transfer and accretion. In our simulation we kept the mass loss rate of the primary fixed, and did not enhance it near periastron. We agree with Porter et al. (2021) that the mass loss enhancement is reasonable. However, the mass accretion rate is modulated according to the orbit regardless of the primary mass loss rate, and this is indeed obtained for the two sets of parameters we used. Enhancing the primary mass loss rate close to periastron would only further enhance the accretion, but accretion exists even without the enhancement.
In the present simulation we did not modify the stellar atmosphere of either star as a result of accretion. Accretion at a very high rate can lead to a reduction in the effective temperature of the star and in turn weaken its wind, as suggested by Kashi & Soker (2009b) for accretion close to the periastron passage of \(\eta\) Carinae. This effect was later obtained in the simulations of Kashi (2017).
In this paper we tried two representative sets of parameters. We did not attempt to run the simulation while tuning the parameters, such as the stellar masses, radii, effective temperatures and eccentricity. We showed that one of these parameter sets (set P) reasonably reproduces photometric and spectroscopic observations. The parameter set that reproduced the observations has a lower primary wind velocity than expected from stellar evolution models, revealing an interesting problem that is beyond the scope of this work.
There are many spectroscopic observations for GG Car, and our simulation and the following analysis are limited and cannot reproduce all of them. However, we showed that the RV absorption of He I 6678 matches the results of our simulation (Figure 7). This line is emitted from the primary or its wind, very close to the stellar surface (Porter et al., 2021); it depends on the line of sight and is less sensitive to other parameters. Running more simulations over a grid of parameters can very likely improve the constraints on the stellar and binary parameters. Likewise, processing the results with a radiation transfer code can give a much better reproduction of spectral line observations than the relatively simple procedures we performed in this work.
Our simulations show that binary interaction can have a significant role in the formation of B[e]SGs and in shaping their circumstellar and circumbinary environment. Hydrodynamic simulations are an important tool for analysing massive binary systems with complex wind structures, and in particular B[e]SGs, as done here.
## Acknowledgements
We thank an anonymous referee for helpful comments. We acknowledge support from the R&D Authority at Ariel University. We acknowledge the Ariel HPC Center at Ariel University for providing computing resources that have contributed to the research results reported within this paper.
## Data Availability
OMC V-band photometry observations are available at [https://sdc.cab.inta-csic.es/omc/secure/formbusqueda.jsp](https://sdc.cab.inta-csic.es/omc/secure/formbusqueda.jsp).
ASAS V-band photometry observations are available at [http://www.astrouw.edu.pl/cgi-asas/asascgigetdata?](http://www.astrouw.edu.pl/cgi-asas/asascgigetdata?) 105559-6023.5,asas3.
This paper used Geneva code stellar evolution tracks from SYCLIST [https://www.unige.ch/sciences/astro/evolution/en/database/syclist/](https://www.unige.ch/sciences/astro/evolution/en/database/syclist/).
The data underlying this article will be shared on a reasonable request from the corresponding author.
|
2304.13989 | Cross-Domain Evaluation of POS Taggers: From Wall Street Journal to
Fandom Wiki | The Wall Street Journal section of the Penn Treebank has been the de-facto
standard for evaluating POS taggers for a long time, and accuracies over 97\%
have been reported. However, less is known about out-of-domain tagger
performance, especially with fine-grained label sets. Using data from Elder
Scrolls Fandom, a wiki about the \textit{Elder Scrolls} video game universe, we
create a modest dataset for qualitatively evaluating the cross-domain
performance of two POS taggers: the Stanford tagger (Toutanova et al. 2003) and
Bilty (Plank et al. 2016), both trained on WSJ. Our analyses show that
performance on tokens seen during training is almost as good as in-domain
performance, but accuracy on unknown tokens decreases from 90.37% to 78.37%
(Stanford) and 87.84\% to 80.41\% (Bilty) across domains. Both taggers struggle
with proper nouns and inconsistent capitalization. | Kia Kirstein Hansen, Rob van der Goot | 2023-04-27T07:24:29Z | http://arxiv.org/abs/2304.13989v1 | # Cross-Domain Evaluation of POS Taggers:
###### Abstract
The Wall Street Journal section of the Penn Treebank has been the de-facto standard for evaluating POS taggers for a long time, and accuracies over 97% have been reported. However, less is known about out-of-domain tagger performance, especially with fine-grained label sets. Using data from Elder Scrolls Fandom, a wiki about the _Elder Scrolls_ video game universe, we create a modest dataset for qualitatively evaluating the cross-domain performance of two POS taggers: the Stanford tagger (Toutanova et al., 2003) and Bilty (Plank et al., 2016), both trained on WSJ. Our analyses show that performance on tokens seen during training is almost as good as in-domain performance, but accuracy on unknown tokens decreases from 90.37% to 78.37% (Stanford) and 87.84% to 80.41% (Bilty) across domains. Both taggers struggle with proper nouns and inconsistent capitalization.1
Footnote 1: Data, predictions and code available at:
github.com/kkirsteinhansen/elder-scrolls-fandom
## 1 Introduction
The Wall Street Journal (WSJ) portion of the Penn Treebank Marcus et al. (1993), is a well-known and common benchmark for, among other tasks, POS tagging in English. The WSJ treebank consists of a collection of English newswire from the 1980s, and several existing POS taggers score over 97% in accuracy when evaluated in-domain. In this work, we investigate how well these taggers transfer to data from wikis; online platforms where the content is continuously generated by users as a collaborative effort. Through quantitative and qualitative analyses, we explore the types of errors taggers make when tagging such out-of-domain data, as well as the frequencies of these errors. For this purpose, we create a new, modest English dataset to represent the wiki domain: Elder Scrolls Fandom, or ESF. Seeing as the _Elder Scrolls_ is a video game series, this data is expected to be topically disjoint from WSJ.
Cross-domain evaluation of part-of-speech taggers has for the last decade largely been focused on social media data and shallow tag sets, and POS taggers trained on newswire have been shown to suffer a decrease in accuracy on Twitter data Gimpel et al. (2011); Derczynski et al. (2013), though this decrease is much more significant for unknown tokens than for tokens seen during training Derczynski et al. (2013). While extensive research has been conducted on such microblog data, little is known about out-of-domain performance and errors on other domains. We therefore contribute 1) a modest evaluation data set with fine-grained POS tags on Elder Scrolls wiki data 2) a qualitative evaluation of two different types of POS taggers: traditional machine learning and deep learning 3) an in-depth analysis of challenging cases in the Fandom wiki domain.
## 2 The Elder Scrolls Fandom Dataset
### Data Collection
In order to evaluate the newswire-trained POS taggers on out-of-domain data, we created the
Figure 1: Example ESF sentences with gold tags.
Elder Scrolls Fandom data set, or ESF. Fandom is a large online platform that includes wikis for video game and cinematic universes, the _Elder Scrolls_ being one such video game universe. The data is user-generated content of encyclopedic nature, though it also includes text passages that are sourced directly from the video games, e.g., quest descriptions and character dialogue. Some terms describing fictional creatures, ingredients, and locations exist only in the context of these video games. The data used for ESF is English and generally well-edited, but contains some typographic errors and terms that are out-of-vocabulary for newswire. This ESF dataset thus places itself somewhere between the clean newswire data and the noisy and topically diverse microblog data. We used the WikiExtractor2 to extract the elderscrollsfandomcom-20200222-history.xml.7z version of the wiki. Target sentences were sampled at random with context (previous and next sentence) included for disambiguation. Short sentences without a clear syntactic structure were skipped3. Tokenization was done using the Python NLTK module4 with manual post-correction.
Footnote 2: [https://github.com/attardi/wikiextractor/](https://github.com/attardi/wikiextractor/)
Footnote 3: E.g., _IE 1200_ and _2920, vol 05 - Second Seed_
Footnote 4: [https://www.nltk.org/](https://www.nltk.org/)
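For reference, a minimal sketch of the NLTK tokenization step is shown below (the actual ESF data additionally required manual post-correction, and the sentence here is only an example):

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # tokenizer models, if not already installed

sentence = "The Dragonborn must go to Filnjar."
tokens = word_tokenize(sentence)
print(tokens)  # ['The', 'Dragonborn', 'must', 'go', 'to', 'Filnjar', '.']
```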
### Annotation
POS annotations were done in accordance with the guidelines for the Penn Treebank (Santorini and MacIntyre, 1995). Tokens with typographical or orthographic errors were tagged with their presumed intended meaning, e.g., the token _got in (...) the Dragonborn must got* to Filnjar (...)_ was tagged as if it said _go_. Two annotators annotated 20 sentences for two rounds of 10, discussing disagreements after each round and ultimately reaching a Cohen's \(\kappa\) of 92.55 (agreeing on 152 out of 164 tokens), after which one annotator annotated the rest of the data for a total of 2084 tokens with 929 types. A data statement is given in Appendix B. One notable challenge in the ESF dataset is distinguishing between proper nouns and regular nouns: Capitalization is inconsistent, and many terms are unique to the _Elder Scrolls_ video game universe. In some cases, even extensive context exploration did not provide enough information to make a decisive distinction; consider, e.g., _Jazbay Grapes_. This expression is not consistently capitalized in the data, so it is unclear whether it is meant as a proper or common noun. Seeing as it refers to an alchemy ingredient and food item, we considered it semantically comparable to a fruit variety (e.g., _Chardonnay_/NNP_grape/NN), and thus tagged it as _Jazbay_/NNP_ (proper noun) _Grapes_/NNS (common noun).
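Inter-annotator agreement of this kind can be recomputed directly from the two parallel tag sequences; a sketch follows, assuming `tags_a` and `tags_b` hold the two annotators' tags for the same 164 tokens (here κ is on a 0-1 scale, so the reported 92.55 corresponds to 0.9255):

```python
from sklearn.metrics import cohen_kappa_score

def agreement_stats(tags_a, tags_b):
    """Raw agreement and Cohen's kappa between two annotators' POS tags."""
    raw = sum(a == b for a, b in zip(tags_a, tags_b)) / len(tags_a)   # e.g. 152/164
    kappa = cohen_kappa_score(tags_a, tags_b)
    return raw, kappa
```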
## 3 Experiments
### POS Taggers
We evaluated two taggers on ESF: The Stanford tagger (Toutanova et al., 2003) and Bilty (Plank et al., 2016)5, both of which have reported very high accuracies on the WSJ dataset (97.24% and 97.22%, respectively). They also represent two different architectural implementations: The Stanford tagger relies on a bidirectional dependency network with lexical features and character n-grams, and the Bilty relies on a bidirectional RNN with concatenated word and character embeddings. We trained both taggers on WSJ (OntoNotes 4.0 version with some adaptions, see Appendix A) and report performance both in- and out-of-domain. For fair comparison, we did not use word clusters or word embeddings, but we otherwise used the best-performing models from the original papers6.
Footnote 5: We did not evaluate a transformer-based POS tagger, as this gave us more control over (pre-)training data and saved compute.
Footnote 6: For Bilty, the best performance on WSJ was reported with 30 epochs and \(\sigma\) = 0.3, so these values were used in our experiments as well.
### Evaluation
The results, given in Table 1, are consistent with previous findings for POS taggers trained on WSJ, where "tagging performance degrades on out-of-domain data" (Gimpel et al., 2011). However, ac
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline \multicolumn{1}{c|}{Tgt} & \multicolumn{1}{c|}{Tagger} & All & Known & Unknown \\ \hline WSJ & Stanford & **0.9695** & **0.9714** & **0.9037** \\ WSJ & Bilty & 0.9677 & 0.9702 & 0.8784 \\ \hline ESF & Stanford & 0.9419 & 0.9630 & 0.7837 \\ ESF & Bilty & **0.9458** & **0.9647** & **0.8041** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy split into known (seen during training) and unknown (unseen) tokens (12% of ESF tokens and 2% of WSJ dev tokens).
curacy does not decrease by much for ESF compared to Twitter data, where accuracy dropped to 83.29% [8]. This is likely due to a greater degree of similarity between WSJ and ESF than WSJ and Twitter; unlike Twitter, ESF does not present the same affinity for "conversational nature, [...], lack of conventional orthography, and 140-character limit of each message" [10]. Performance mainly drops for out-of-vocabulary tokens, which makes up 12% of tokens in ESF and 3% of tokens in WSJ dev.
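The known/unknown split reported in Table 1 amounts to partitioning test tokens by membership in the training vocabulary; a sketch is given below (the membership check here is a plain string lookup; details such as case handling are our assumption):

```python
def split_accuracy(tokens, gold, pred, train_vocab):
    """Accuracy over all tokens, tokens seen in training, and unseen tokens."""
    def acc(rows):
        return sum(g == p for _, g, p in rows) / len(rows) if rows else float("nan")

    rows = list(zip(tokens, gold, pred))
    known   = [r for r in rows if r[0] in train_vocab]
    unknown = [r for r in rows if r[0] not in train_vocab]
    return acc(rows), acc(known), acc(unknown)
```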
## 4 Error Analysis
In addition to reporting the most frequent tag confusions (Table 2), we qualitatively evaluated the output of each tagger and identified the following error categories:
**Notoriously Difficult Distinctions** These are errors made on tokens that are notoriously difficult to classify, and where linguistic tests are needed to determine the part of speech in the given context [11]. Examples include \(\mathit{out}/\{\texttt{RB}|\texttt{RP}|\texttt{IN}\}\), \(\mathit{back}/\{\texttt{RB}|\texttt{RP}\}\), \(\mathit{willing}/\{\texttt{VBG}|\texttt{JJ}\}\), and \(\mathit{shattered}/\{\texttt{JJ}|\texttt{VBN}\}\).
**Tagging Conventions** These are errors thought to be a consequence of WSJ not adhering to the guidelines by Santorini and MacIntyre (1995). An example is the token \(\mathit{both}\), which should be tagged as CC when it occurs with \(\mathit{and}\). In some cases, it had been tagged as DT instead.
**Encoding Errors** These errors are a result of encoding issues and occur only with the Stanford tagger. Examples include dashes (--) being read as \(\hat{\mathit{O}}\mathit{C}\mathit{\bar{o}}\) or \(\hat{\mathit{O}}\mathit{C}\mathit{\bar{o}}\), and trailing points (...) being read as \(\hat{\mathit{O}}\mathit{C}^{\ast}\). The files had been encoded in UTF-8.
**Proper Noun Errors** These are NNP (S) confusion errors. Examples are given in Table 2. This category makes up 32.23% of the errors made by the Stanford tagger and 27.43% of the errors made by Bilty, though this number also includes any gold standard errors and cases where the taggers were expected to make errors. Consider, e.g., _Restoring the Guardians_; being the name of a quest, it should be tagged as _Restoring_/NNP_the/DT_Guardians/NNPS7, but both taggers have tagged _Restoring_ as VBG.
Footnote 7: Section 5.3 in Santorini and MacIntyre (1995).
**Errors of Quantity** These are errors based on singular and plural tag confusion, e.g., _folk_ in _for most folk (...)_ being tagged as NN rather than NNS.
**Source Text Errors** This category includes errors that likely stem from spelling errors or non-standard constructions in the source text, which complicates both the gold standard annotations and the machine annotations. Consider _but good_ in the sentence _those guards sold you out but good_; it is unclear whether _good_ is meant as an adverb (RB)8 or as an adjective (JJ)9.
Footnote 8: E.g., _those guards sold you out well/RB_.
Footnote 9: E.g., _those guards sold you out, but that’s good/JJ_.
**Gold Standard Errors** These are errors in the gold standard. For instance, _nearby_ was annotated as RB even though it was used as an adjective, _when_ had been tagged as IN even though it was used in a temporal sense, and _quite_ had been tagged as RB, but it preceded a determiner, making it a PDT.
**Unexplainable Errors** These are errors where the cause for the erroneous prediction cannot be inferred from the context. Examples include _sooner_ being tagged as RB (adverb) instead of RBR (adverb in the comparative), and _his_ being tagged as PRP\$ (possessive pronoun) even though it was used nominally (PRP).
### Errors on Known Tokens
Many of the errors made by both taggers are notoriously difficult distinctions. Each tagger had also made additional, individual errors in this category,
\begin{table}
\begin{tabular}{l l l l l|l l l} \hline \hline \multicolumn{3}{c}{Stanford} & \multicolumn{3}{c}{Bilty} \\ \(\hat{y}\) & \(y\) & \# & \% & \(\hat{y}\) & \(y\) & \# & \% \\ \hline
**NNP** & **NN** & **9** & **7.4** & **NNP** & **NN** & **10** & **8.9** \\ NNP & NNPS & 7 & 5.8 & JJ & NN & 6 & 5.3 \\ JJ & NN & 7 & 5.8 & **NNP** & **JJ** & **6** & **5.3** \\
**NNP** & JJ & 7 & 5.8 & VBN & JJ & 5 & 4.4 \\
**NNP** & **NNP** & **5** & **4.1** & NNP & NNPS & 4 & 3.5 \\
**NNP** & **NNS** & **4** & **3.3** & **NNP** & **NNP** & **4** & **3.5** \\ VBN & JJ & 4 & 3.3 & **NNP** & **NNS** & **4** & 3.5** \\ & & & & & NN & JJ & 4 & 3.5 \\ & & & & VBD & VBN & 4 & 3.5 \\ & & & & NN & NNS & 4 & 3.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Error counts of the most frequent error types (count \(>\) 3). \(\hat{y}\) are the predicted tags and \(y\) are the gold tags. Proper noun errors are in bold.
which emphasizes the difficulty of the task. The error analysis also revealed a few gold standard errors, as well as a number of proper noun errors; the latter is likely caused by incorrect capitalization in the source text. It should be noted, however, that in some cases, the correct part of speech (common noun vs. proper noun) can be considered disputable Manning (2011).
The Stanford tagger made more mistakes in the unexplainable errors category, e.g., tagging _will_/MD as VBP or NN, _details_/VBZ and _tasks_/VBZ as NNS, _thaw_/VB and _place_/VB as NN, and _initiate_/NN as VB despite _initiate_ being preceded by _an_/DT. Bilty did not make the same number of unexplainable errors as the Stanford tagger, though it did tag the token _have_ incorrectly twice: once as VBP rather than the correct VB10, and once as VB rather than VBP11, even though the tagger had correctly labeled the immediate surrounding tokens in both sentences. Bilty also had some issues with verbs that have the same form in the past tense and as a past participle, e.g., _locked_, _found_ and _called_, but only when they occurred as participles without a main verb (e.g., when modifying nouns)12. In these situations, Bilty incorrectly labeled the tokens as VBD instead of VBN. Bilty also incorrectly labeled _impressed_ as VBN rather than JJ, though this is likely due to inconsistent annotations in the training set.
Footnote 10: _(...) I do/VBP have/VB standards._
Footnote 11: _Have/VBP you heard about Goldenglow Meadery?_
Footnote 12: _Tharm had a powerful weapon called/VBN the Staff of Chaos (...)_
### Errors on Unknown Tokens
The majority of the errors made by both taggers on unknown tokens are proper noun errors. This is unsurprising, as the _Elder Scrolls_ universe introduces a number of fictional races and ethnicities that are rightfully capitalized, but often used as nouns (NN) or adjectives (JJ), e.g., _Orsimer_, _Nord_, and _Khajiit_. Additionally, some tokens have been capitalized even when they do not represent proper nouns, leading both taggers to incorrectly tag them as NNP, e.g., _Strength_ and _Endurance_. Both taggers also make errors of quantity, but they are not consistent in using, e.g., the suffix _-s_ as a determining factor; for instance, _Nikulas_ (the name of a character) was tagged as plural, but _Scrolls_ was tagged as singular. The taggers also struggled with terms from the _Elder Scrolls_ universe that have the same form in both singular and plural, e.g., _Falmer_, _Dunmer_, and _Skaal_. Interestingly, both taggers make additional, unique proper noun errors where they each tag proper nouns as common nouns despite correct capitalization. The Stanford tagger, however, makes almost double the number of these errors compared to Bilty (7 vs. 4).
### Experiments with Letter Case
Seeing as proper noun errors are a large source of errors for the unknown tokens, we attempted to increase robustness by lowercasing the data both for training and testing. This, however, turned out to considerably harm the detection of proper nouns for both taggers (Table 3), more so for Bilty, as indicated by the decrease in recall scores. Precision is not harmed as severely, but is still noticeably affected. This suggests that, even for datasets with inconsistent capitalization, capitalization aids the taggers more than it disrupts them.
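The proper-noun precision and recall of Table 3 can be computed by pooling NNP and NNPS into one class (our reading of "NNP(S)"); the lowercasing experiment itself only changes the input text, not the metric. A sketch:

```python
def nnp_precision_recall(gold, pred, proper=("NNP", "NNPS")):
    """Precision and recall for the pooled proper-noun class."""
    tp = sum(g in proper and p in proper for g, p in zip(gold, pred))
    fp = sum(g not in proper and p in proper for g, p in zip(gold, pred))
    fn = sum(g in proper and p not in proper for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```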
## 5 Conclusion
This study presents a new, modest wiki dataset, Elder Scrolls Fandom (ESF), for qualitative cross-domain evaluation of POS taggers. Our analyses show that the Stanford tagger Toutanova et al. (2003) and Bilty Plank et al. (2016), a BiLSTM, both suffer a significant decrease in accuracy on unknown ESF tokens when trained on Wall Street Journal, one of the primary challenges being distinguishing proper nouns and other parts of speech. Lowercasing the data decreases accuracy even further, and negatively affects precision and recall on proper nouns; this shows that capitalization aids the taggers more than it disrupts them, even when inconsistent. Bilty performs slightly better than the Stanford tagger out-of-domain, the largest difference in accuracy being on unknown tokens: 78.37% for Stanford vs. 80.41% for Bilty.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c|}{NNP(S) Prec.} & \multicolumn{2}{c}{NNP(S) Rec.} \\ & Orig. & Low. & Orig. & Low. & Orig. & Low. \\ \hline Stanford & 94.19 & 89.40 & 89.50 & 78.05 & 94.23 & 46.15 \\ Bilty & 94.58 & 88.33 & 90.87 & 76.04 & 95.67 & 35.10 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experiments with original casing (Orig.), and lowercased data (Low.) and its influence on proper noun (NNP+NNPS) precision (Prec.) and recall (Rec.).
## Acknowledgments
We would like to thank the NLPnorth and MaiNLP members as well as the anonymous reviewers for their feedback.
|
2306.14647 | Mono-to-stereo through parametric stereo generation | Generating a stereophonic presentation from a monophonic audio signal is a
challenging open task, especially if the goal is to obtain a realistic spatial
imaging with a specific panning of sound elements. In this work, we propose to
convert mono to stereo by means of predicting parametric stereo (PS) parameters
using both nearest neighbor and deep network approaches. In combination with
PS, we also propose to model the task with generative approaches, allowing to
synthesize multiple and equally-plausible stereo renditions from the same mono
signal. To achieve this, we consider both autoregressive and masked token
modelling approaches. We provide evidence that the proposed PS-based models
outperform a competitive classical decorrelation baseline and that, within a PS
prediction framework, modern generative models outshine equivalent
non-generative counterparts. Overall, our work positions both PS and generative
modelling as strong and appealing methodologies for mono-to-stereo upmixing. A
discussion of the limitations of these approaches is also provided. | Joan Serrà, Davide Scaini, Santiago Pascual, Daniel Arteaga, Jordi Pons, Jeroen Breebaart, Giulio Cengarle | 2023-06-26T12:33:29Z | http://arxiv.org/abs/2306.14647v1 | # Mono-to-stereo through Parametric Stereo Generation
###### Abstract
Generating a stereophonic presentation from a monophonic audio signal is a challenging open task, especially if the goal is to obtain a realistic spatial imaging with a specific panning of sound elements. In this work, we propose to convert mono to stereo by means of predicting parametric stereo (PS) parameters using both nearest neighbor and deep network approaches. In combination with PS, we also propose to model the task with generative approaches, allowing to synthesize multiple and equally-plausible stereo renditions from the same mono signal. To achieve this, we consider both autoregressive and masked token modelling approaches. We provide evidence that the proposed PS-based models outperform a competitive classical decorrelation baseline and that, within a PS prediction framework, modern generative models outshine equivalent non-generative counterparts. Overall, our work positions both PS and generative modelling as strong and appealing methodologies for mono-to-stereo upmixing. A discussion of the limitations of these approaches is also provided.
Dolby Laboratories
[email protected]
## 1 Introduction
Single-channel monophonic (mono) signals are found in multiple situations, such as historical recordings or current ones made with a single microphone (e.g., field recordings, amateur band rehearsals, etc.). Even recordings made with two or more microphones that are not spaced enough or that do not have enough directivity may be better treated by downmixing to mono (e.g., mobile phone recordings). Furthermore, many processing algorithms, including modern deep neural network algorithms, cannot yet or are simply not designed to handle more than one channel. Unlike these scenarios, the most common listening experiences, either though loudspeakers or headphones, involve two-channel stereophonic (stereo) signals. Hence the usefulness of mono to stereo upmixing.
Classical approaches to produce a pseudo-stereo effect from a mono signal are based on decorrelation. Initial approaches used time delays and complementary filters [1], although all-pass filters [2] are commonly used nowadays, together with multi-band processing to improve the effect [3, 4, 5]. Instead of multi-band, estimation of foreground/background time-frequency tiles can also be performed [6]. Decorrelation approaches, however, only provide a mild stereo effect, with limited width, and cannot spatially separate individual elements in the mix. To overcome the latter, researchers have considered source separation approaches [7, 8, 9]. The main idea is that, if individual elements or tracks are available, those can be panned to any location, producing a more realistic spatial image. Nevertheless, this approach presents several drawbacks: firstly, even the best-performing source separation algorithms produce artifacts [10], which can be highly audible in the stereo render; secondly, current separation algorithms are very restrictive in the number and types of elements they can separate [11], thus considerably limiting their application in real-world spatialization tasks; thirdly, after elements or tracks are separated, it remains to be seen how they can be automatically panned in a realistic manner (cf. [12]), which is the reason why separation-based approaches usually involve user intervention in the panning stage [7, 8, 9].
Music is a paradigmatic example where, apart from stereo capture, artists and engineers massively exploit the stereo image to serve a creative artistic intent. Instrument panning is a fundamental part of music mixing, and achieving the right balance requires musical sensibility as well as technical knowledge [13]. However, apart from some style conventions, the stereo image of a music mix is a highly subjective construct: given a set of input tracks, there are many plausible stereo renditions from which selecting the final mix is practically only a matter of artistic choice. Hence, we posit that this is a perfect ground for modern deep generative models [14]. However, to our surprise, we only found one work using deep neural networks for mono-to-stereo [15], with very limited generative capabilities.
In this work, we propose the use of machine learning techniques and parametric stereo (PS) decoding [16, 17] for converting mono to stereo. PS is a coding technique that allows to transmit a stereo signal through a mono signal plus side information that, with enough bit rate, can be used to recover an almost transparent version of the original stereo content. By leveraging machine learning techniques, we generate (or invent) plausible versions of PS parameters in situations where side information is not available. These parameters can then be used to decode an existing mono signal into a plausible stereo one. We propose two variants of PS generation: one based on a classical nearest neighbor approach [18] and another one based on deep generative modeling. For the latter, we consider both common autoregressive modeling [19] and more recent masked token modeling [20], and show that there can be noticeable
differences between the two. We use subjective testing to compare the proposed approaches and show that PS generation can produce results that are more appealing than considered competitive baselines. We also introduce two objective evaluation metrics and discuss the limitations of both PS and generative approaches for mono-to-stereo.
## 2 Parametric Stereo
PS exploits the perceptual cues that are more relevant to our spatial perception of sound, namely the fact that directional sources produce interaural level and phase (or time delay) differences, and the fact that diffuse sound fields manifest as decorrelated signals at the two ears. These cues effectively describe how a mono signal is mapped to the left and right stereo channels, and can be measured using three quantities or parameters [16, 17]: interchannel intensity differences (IID), interchannel time differences (or, equivalently, phase differences), and interchannel coherence or correlation (IC). PS parameters are computed in frequency bands, to reflect the frequency-dependent nature of the spatial properties of stereo content, and also on a frame-by-frame basis, to reflect the time-varying nature of frequency cues and spatial images. An important observation is that PS is capable of capturing spatial attributes that are perceptually relevant and re-instate those without changing signal levels, tonality, or other artifacts that may arise from methods that operate on audio signals directly. In this work, for compactness and ease of implementation, we choose to use the two-parameter approach by Breebaart et al. [17], which models IID and IC without interchannel phase differences, accepting that this two-parameter approach is not providing the best possible quality of PS coding. We now overview this PS coding strategy and introduce the main notation of the article.
### Encoding
Given two complex-valued spectrograms expressed as complex matrices \(\mathbf{X}\) and \(\mathbf{Y}\), where rows represent frequency bins and columns represent frames, we define the band-based cross-spectrogram function
\[\rho(\mathbf{X},\mathbf{Y})=\mathbf{B}\;(\mathbf{X}\odot\mathbf{Y}^{\ast}),\]
where \(\odot\) denotes elementwise multiplication, \({}^{\ast}\) denotes elementwise complex conjugate, and \(\mathbf{B}\) is a matrix with ones and zeros that is used to sum frequency bins according to a certain frequency band grouping (using matrix multiplication). In this work, we use the same spectrogram settings and banding as in [17]: frames of 4,096 samples for 44.1 kHz signals, 75% overlap, a Hann window, and 34 bands which are approximately distributed following equivalent rectangular bandwidths.
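A minimal NumPy sketch of the band-based cross-spectrogram above may help make the notation concrete; the band-edge layout passed to `banding_matrix` is an illustrative placeholder rather than the exact 34-band ERB-like grouping used in the paper.

```python
import numpy as np

def banding_matrix(n_bins, band_edges):
    """Binary matrix B (n_bands x n_bins) that sums FFT bins into bands.
    band_edges is a list of (start, stop) bin-index pairs, one per band."""
    B = np.zeros((len(band_edges), n_bins))
    for b, (lo, hi) in enumerate(band_edges):
        B[b, lo:hi] = 1.0
    return B

def cross_spectrogram(B, X, Y):
    """rho(X, Y) = B (X ⊙ Y*): elementwise product of the complex
    spectrograms (bins x frames), then summed into bands."""
    return B @ (X * np.conj(Y))
```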
Given the two complex spectrograms \(\mathbf{L}\) and \(\mathbf{R}\) corresponding to the left and right channels of a stereo signal, we can compute the IID using
\[\mathbf{P}^{\text{IID}}=10\log_{10}\left(\rho(\mathbf{L},\mathbf{L})\oslash\rho(\mathbf{R},\mathbf{R})\right),\]
where \(\oslash\) denotes elementwise division. The IC is similarly derived from the cross-spectrogram following
\[\mathbf{P}^{\text{IC}}=\text{Re}\left\{\rho(\mathbf{L},\mathbf{R})\right\} \oslash\sqrt{\rho(\mathbf{L},\mathbf{L})\odot\rho(\mathbf{R},\mathbf{R})},\]
where \(\text{Re}\{\}\) extracts the real part of each complex value and the square root is applied elementwise. Notice that the use of the real part instead of the absolute value allows to retain information on the relative phase of the two signals that would otherwise be lost. We finally quantize \(\mathbf{P}^{\text{IID}}\) and \(\mathbf{P}^{\text{IC}}\) by discretizing each matrix element. To do so, we use the same non-uniform quantization steps as in [17]: 31 steps for IID and 8 for IC. We denote the quantized versions as \(\mathbf{Q}^{\text{IID}}\) and \(\mathbf{Q}^{\text{IC}}\).
To facilitate subsequent operation, and to prevent potential prediction mismatches between IID and IC, we join both parameters and treat them as one. For \(\mathbf{P}^{\text{IID}}\) and \(\mathbf{P}^{\text{IC}}\), we concatenate them in the frequency axis and form a single matrix \(\mathbf{P}\). For \(\mathbf{Q}^{\text{IID}}_{i,j}\) and \(\mathbf{Q}^{\text{IC}}_{i,j}\), we fuse them elementwise into individual integers using the amount of IC quantization steps. This way, \(\mathbf{Q}_{i,j}=8\cdot\mathbf{Q}^{\text{IID}}_{i,j}+\mathbf{Q}^{\text{IC}}_{ i,j}\) (note that we can recover back \(\mathbf{Q}^{\text{IID}}_{i,j}\) and \(\mathbf{Q}^{\text{IC}}_{i,j}\) using the division and modulo operators).
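As an illustration of how the IID/IC extraction and the token fusion could look in practice, here is a hedged NumPy sketch; it assumes the banded auto- and cross-spectrograms are already available (e.g., from a routine like the one sketched above), and the actual non-uniform quantizers of [17] are omitted in favor of plain integer indices.

```python
import numpy as np

def ps_parameters(rho_ll, rho_rr, rho_lr, eps=1e-12):
    """IID (dB) and IC per band/frame from banded (cross-)spectrograms."""
    ll = np.real(rho_ll) + eps
    rr = np.real(rho_rr) + eps
    p_iid = 10.0 * np.log10(ll / rr)
    p_ic = np.real(rho_lr) / np.sqrt(ll * rr)
    return p_iid, p_ic

def fuse_tokens(q_iid, q_ic):
    """Fuse quantizer indices elementwise: Q = 8 * Q_IID + Q_IC."""
    return 8 * q_iid + q_ic

def split_tokens(q):
    """Recover (Q_IID, Q_IC) with integer division and modulo."""
    return q // 8, q % 8
```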
### Decoding
To decode the above PS encoding, we perform a mixing between the available mono signal and a decorrelated version of it. We decorrelate a mono signal \(\mathbf{S}\) by applying a cascade of 4 infinite impulse response all-pass filters and obtain \(\mathbf{S}^{\text{D}}\) (this all-pass filter is an enhanced version of the basic one proposed in [17] thanks to transient detection and preservation, which avoids time smearing). After that, we can decode the estimated left and right channels \(\hat{\mathbf{L}}\) and \(\hat{\mathbf{R}}\) by carefully mixing \(\mathbf{S}\) and \(\mathbf{S}^{\text{D}}\). We can do so with
\[\hat{\mathbf{L}} =\mathbf{M}^{\text{a}}\odot\mathbf{S}+\mathbf{M}^{\text{b}}\odot\mathbf{S}^{\text{D}},\] \[\hat{\mathbf{R}} =\mathbf{M}^{\text{c}}\odot\mathbf{S}+\mathbf{M}^{\text{d}}\odot\mathbf{S}^{\text{D}},\]
using mixing matrices \(\mathbf{M}\), which are computed from the coded PS parameters \(\mathbf{P}^{\text{IID}}\) and \(\mathbf{P}^{\text{IC}}\). The exact calculation of mixing matrices \(\mathbf{M}\) is straightforward to obtain by adapting to matrix notation the formulation in [17], to which we refer for further detail and explanation.
## 3 Parametric Stereo Generation
We now explain the proposed approaches for PS generation. All of them share the above encoding-decoding formulation, either using the quantized or unquantized versions. During training, stereo signals are used to compute input downmixes \(\mathbf{S}=(\mathbf{L}+\mathbf{R})/2\) and target PS parameters \(\mathbf{P}\) or \(\mathbf{Q}\) (hence the proposed approaches aim at producing \(\hat{\mathbf{P}}\) or \(\hat{\mathbf{Q}}\)). Note that, in the case of the generative models we consider, one has to additionally input contextual PS parameters in a teacher-forcing schema [21]. We also want to note that, since they are quite common practice, it is not in the scope of the current work to provide a detailed explanation of existing generative models (instead, we refer the interested reader to the cited references). In
all proposed approaches, we tune model hyperparameters by qualitative manual inspection in a preliminary analysis stage. PS specifications are predefined and correspond to the ones mentioned in Sec. 2. Neural network approaches use Pytorch's [22] defaults and are trained with Adam for 700 epochs using a batch size of 128 and a learning rate of \(10^{-4}\), with warmup cosine scheduling.
### Nearest neighbor
The first approach proposes to impose the PS parameters of existing, similar stereo fragments to individual mono frames using a nearest neighbor (NN) algorithm [18]. We call the approach PS-NN. The idea is to retrieve frame-based PS parameters using mono frame sequences, and to use the sequence of those retrieved parameters to decode the mono input. At training time, we randomly select a song, randomly extract an \(N=20\) frame spectrogram \(\mathbf{S}\) and its corresponding parameters \(\mathbf{P}\), and compute a key-value vector pair (we here use the magnitude spectrogram). The key vector is formed by framewise averaging the energy in each band,
\[\mathbf{k}=\frac{1}{N}\sum_{j=1}^{N}\mathbf{B}\,\mathbf{S}_{\cdot,j}, \tag{1}\]
and the value vector corresponds to the PS parameters of the last frame, \(\mathbf{v}=\mathbf{P}_{\cdot,N}\), which allows for a fully-causal schema. We repeat the process half a million times and store all pairs in a nearest neighbor structure. At test time, for every frame of the input mono signal, we compute an average as in Eq. 1, query the nearest neighbor structure, retrieve the \(\hat{\mathbf{v}}\) vector of the closest neighbor (using Euclidean distance), and assign it as the predicted PS parameter for that frame. This way, we obtain a sequence of estimated PS parameters \(\hat{\mathbf{P}}\).
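A possible realization of this retrieval step, assuming a scikit-learn index (an implementation detail not specified in the text), could look as follows; `keys` and `values` stand for the half-million training pairs described above.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_key(S_mag, B, n_frames=20):
    """Key vector: band energies averaged over the last n_frames columns
    of a magnitude spectrogram S_mag (bins x frames)."""
    return (B @ S_mag[:, -n_frames:]).mean(axis=1)

def fit_index(keys):
    """keys: (n_pairs, n_bands) array of training key vectors."""
    return NearestNeighbors(n_neighbors=1, metric="euclidean").fit(keys)

def retrieve_ps(index, values, key):
    """Return the PS-parameter vector of the closest training key."""
    _, idx = index.kneighbors(key[None, :])
    return values[idx[0, 0]]
```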
In preliminary analysis, we observed that PS-NN produced a high-rate 'wobbling' effect between left and right (that is, panning was rapidly switching from one channel to the other) and presented some temporal inconsistencies (that is, sources were unrealistically moving with time, even within one- or two-second windows). To counteract these effects, we implemented a two-step post-processing based on (i) switching the sign of \(\hat{\mathbf{P}}^{\text{IID}}_{\cdot,j}\) if the Euclidean distance to \(\hat{\mathbf{P}}^{\text{IID}}_{\cdot,j-1}\) was smaller, and (ii) applying an exponential smoothing on the columns of \(\hat{\mathbf{P}}\) with a factor of 0.95. This post-processing substantially reduced the aforementioned undesirable effects.
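The two post-processing steps could be sketched as below; exactly how the 0.95 smoothing factor weights past versus current frames is our assumption, as the text does not spell it out.

```python
import numpy as np

def postprocess(p_iid, p_ic, alpha=0.95):
    """(i) Flip the sign of an IID column when the flipped column is closer
    (Euclidean) to the previous one; (ii) exponentially smooth all columns."""
    p_iid = p_iid.copy()
    for j in range(1, p_iid.shape[1]):
        prev, cur = p_iid[:, j - 1], p_iid[:, j]
        if np.linalg.norm(-cur - prev) < np.linalg.norm(cur - prev):
            p_iid[:, j] = -cur
    P = np.concatenate([p_iid, p_ic], axis=0)
    for j in range(1, P.shape[1]):
        P[:, j] = alpha * P[:, j - 1] + (1.0 - alpha) * P[:, j]
    return P
```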
### Autoregressive
The second approach proposes to model PS parameters with a deep generative approach based on an autoregressive (AR) transformer [19]. We call the approach PS-AR. Our architecture is composed of 7 transformer encoder blocks of 512 channels, with 16 heads and a multilayer perceptron (MLP) expansion factor of 3. We use sinusoidal positional encoding at the input, and add a two-layer MLP with an expansion factor of 2 at the output to project to the final number of classes (which is 31\(\times\)8 tokens times 34 bands per frame, see Sec. 2.1). The input is formed by a projection of the mono spectrogram \(\mathbf{S}\) and the teacher-forcing information \(\mathbf{Q}\) into a 512-channel activation \(\mathbf{H}\),
\[\mathbf{H}=\phi(\mathbf{S})+\sum_{i=1}^{B}\xi_{i}(\mathbf{Q}_{i,\cdot}), \tag{2}\]
where \(\phi\) is a two-layer MLP with an expansion factor of 2, \(B=34\) is the number of bands, and \(\xi_{i}\) is a learnable per-band token embedding (which includes the mask token, see below). We train the model with weighted categorical cross-entropy, using the weight
\[w=1+\lambda\sigma\left(\left[\mathbf{P}^{\text{IID}}\right]_{\pm\epsilon}\right)+\sigma(\mathbf{P}^{\text{IC}}), \tag{3}\]
calculated independently for every element in the batch. In Eq. 3, \(\sigma(\mathbf{X})\) corresponds to the elementwise standard deviation of \(\mathbf{X}\), \(\lambda=0.15\) compensates for different magnitudes, \([\ ]_{\pm\epsilon}\) corresponds to the clipping operation, and \(\epsilon=20\) is a threshold to take into account the little perceptual relevance of IIDs larger than 20 dB [23]. In preliminary analysis, we observed that using \(w\) qualitatively improved results, as it shall promote focus on wider stereo images and more difficult cases.
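A small sketch of this loss weight (Eq. 3), written with NumPy for one batch element; the separation of the IID and IC parts into two arrays is assumed bookkeeping.

```python
import numpy as np

def loss_weight(p_iid, p_ic, lam=0.15, eps_db=20.0):
    """w = 1 + lam * std(clip(P_IID, +-eps)) + std(P_IC)."""
    return 1.0 + lam * np.std(np.clip(p_iid, -eps_db, eps_db)) + np.std(p_ic)
```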
PS-AR follows a PixelSNAIL recursive approach [24], starting with the prediction of lower frequency bands, then higher frequency bands, and moving into the next frame once all bands are predicted. To efficiently exploit the past context, all input sequences have full-sequence teacher-forcing except for the upper frequency bands of the last frame, which are masked consecutively and uniformly at random during training [24]. At test time, we sample recursively, following the same masking strategy and using a temperature hyperparameter \(\tau=0.9\). In addition, we employ classifier-free guidance [25] with a hyperparameter \(\gamma=0.25\). For that, we use the approach in [26], which modifies the conditional logits \(\mathbf{U}^{\text{cond}}\) with unconditional ones \(\mathbf{U}^{\text{uncond}}\) such that
\[\mathbf{U}=(1+\gamma)\mathbf{U}^{\text{cond}}-\gamma\mathbf{U}^{\text{uncond }}. \tag{4}\]
To have both a conditional and an unconditional model within the same architecture, following common practice, we randomly replace \(\phi(\mathbf{S})\) in Eq. 2 by a learnable dropout token 10% of the time.
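In code, the guidance step of Eq. 4 and a temperature-controlled draw could be sketched as follows (PyTorch, matching the framework mentioned in Sec. 3); the tensor shapes are assumptions for illustration.

```python
import torch

def guided_logits(logits_cond, logits_uncond, gamma=0.25):
    """Classifier-free guidance: U = (1 + gamma) * U_cond - gamma * U_uncond."""
    return (1.0 + gamma) * logits_cond - gamma * logits_uncond

def sample_bands(logits, temperature=0.9):
    """Draw one token per band from logits of shape (n_bands, n_classes)."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```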
### Masked token modeling
The third approach proposes to model PS parameters with a deep generative approach based on masked token modeling (MTM) [20]. We call the approach PS-MTM. The architecture, loss, inputs, and outputs of the model are the same as in PS-AR, including the cross-entropy weights (Eq. 3) and classifier-free guidance (Eq. 4). The only difference is the masking approach and the sampling procedure, which implies different hyperparameters for the testing stage (we use \(\tau=4.5\) and \(\gamma=0.75\), but now the temperature \(\tau\) has a different meaning as explained below).
MTM generates patch representations \(\mathbf{Q}\) with quantized elements \(\mathbf{Q}_{i,j}\), which are dubbed tokens (in our case the matrix \(\mathbf{Q}\) has dimensions \(B\times N\), with \(N\) being the number of considered audio frames; the number of possible token values for \(\mathbf{Q}_{i,j}\) is 31\(\times\)8, as defined in Sec. 2.1). During training, the teacher-forcing input \(\mathbf{Q}\) is masked uniformly at random, and only the ground truth elements corresponding to the masked positions are used to compute the cross-entropy loss at the output. The number of elements to mask is also selected at random following a cosine schedule [20] (this specifically includes the case where all patch elements are masked). During sampling, patch representations are formed with 50% overlap, using no masking for the first half of the patch, similar to [26].
MTM sampling is an iterative process that achieves orders of magnitude speedups compared to autoregressive modeling (in our case PS-MTM uses 20 steps for a 3 s hop, while PS-AR requires \(B=34\) steps for just a single audio frame of a few milliseconds). MTM iteratively samples a masked patch, performs predictions with classifier-free guidance [26], chooses the predictions with the highest logit score for the next iteration (they will become unmasked and fixed), and reduces the percent of masked tokens following the same scheduling as in training until no masked elements remain [20]. Differently from training, the masking used in sampling is not random, but based on logit scores (lowest ones become masked), and noise is added to logit scores to promote diversity [20, 26]. In our case, we employ Gaussian noise with zero mean and a standard deviation \(\tau\), which becomes our redefined temperature parameter.
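The sampling loop could be sketched roughly as below; `model` is a stand-in for the guided transformer returning per-position logits, and the per-band/per-frame patch layout, the 50% overlap handling, and the exact schedule constants are simplified assumptions.

```python
import math
import torch

def mtm_sample(model, cond, seq_len, n_vocab, n_steps=20, tau=4.5, mask_id=-1):
    """Iterative masked-token sampling with a cosine unmasking schedule."""
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long)
    for step in range(n_steps):
        logits = model(tokens, cond)                 # (seq_len, n_vocab), after guidance
        probs = torch.softmax(logits, dim=-1)
        sampled = torch.multinomial(probs, 1).squeeze(-1)
        conf = probs.gather(-1, sampled[:, None]).squeeze(-1).log()
        conf = conf + tau * torch.randn_like(conf)   # noise promotes diversity
        keep = tokens != mask_id
        conf[keep] = float("inf")                    # never re-mask fixed tokens
        candidate = torch.where(keep, tokens, sampled)
        # cosine schedule: fraction of positions that stays masked after this step
        n_masked = int(seq_len * math.cos(math.pi / 2 * (step + 1) / n_steps))
        candidate[torch.argsort(conf)[:n_masked]] = mask_id
        tokens = candidate
    return tokens
```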
## 4 Evaluation
To train and evaluate all approaches we use a collection of professionally-recorded stereo music tracks at 44.1 kHz. We consider 419,954 tracks for training and 10 k for evaluation, and randomly extract a 10 s chunk from each track. During training, we sample 6 s patches from those and perform data augmentation using a random gain and also randomly switching left and right channels.
### Baselines: regression and decorrelation
In addition to the original stereo and its mono downmix, we consider two additional baselines to compare with the previous approaches. The first baseline corresponds to an ablation of the deep generative approaches, and tries to answer the question of whether a generative component is needed or convenient for the task. Thus, the baseline consists of a neural network with the exact same configuration as PS-AR or PS-MTM, but substituting the generative part by standard regression with mean-squared error [18]. We term this baseline PS-Reg, and note that it could be considered an enhanced modern version of the approach of Chun et al. [15], using PS.
It is interesting to mention that, in preliminary analysis, we observed that PS-Reg accurately estimated IC values, but consistently failed to predict IIDs. The predicted IIDs had minimal deviation from zero, which can be attributed to the probability distribution function of IID values being centered around zero with equally plausible deviations to the right and to the left. This was an early indication that the one-to-many mapping of IID prediction cannot be correctly handled by regression methods, and that the task would be better served by a generative approach.
The second baseline we consider corresponds to a variant of classical decorrelation approaches. Here, the decorrelation is implemented by means of an all-pass filter network enhanced by (i) detection and preservation of transients, and (ii) a frequency-dependent mix between original and decorrelated signals to achieve a frequency-dependent IC. We term this baseline Decorr, and we note that it could be considered an improved modern version of the approaches [1, 2, 3, 4, 5, 6].
### Objective measures
To the best of our knowledge, there are no objective measurements for plausible stereo renderings nor suitable PS prediction scores. Due to the highly creative/subjective nature of the task, common error measurements may not be appropriate. Therefore, as a way of measuring progress, we propose to use a couple of metrics inspired from the literature on generative modeling (cf. [14]). The first metric we consider is the minimum error on a large sample basis, \(E_{\min}\). Given a large sample of generated PS parameters (\(K=128\) for a single audio excerpt), \(E_{\min}\) chooses the minimum error with respect to the ground truth:
\[E_{\min}=\min_{k}\left[\sum_{i,j}\delta\left(\textbf{P}_{i,j},\hat{\textbf{P} }_{i,j}^{(k)}\right)\right],\]
where \(\delta\) is a suitable error function. The idea is that if we allow the model to generate many samples for every input, in the limit of very large \(K\) one of them should come close to the ground truth. For PS parameters, we use absolute errors, weight the IID to compensate magnitudes with IC, and take into account some perceptual relevance for IID as in Sec. 3.2 and Eq. 3:
\[\delta(x,y)=\begin{cases}\lambda\,|[x]_{\pm\epsilon}-[y]_{\pm\epsilon}|&\text{ for IID},\\ |x-y|&\text{for IC}.\end{cases}\]
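For completeness, a sketch of \(E_{\min}\) under the assumption that IID and IC are stacked along the first axis (IID rows first), as in Sec. 2.1:

```python
import numpy as np

def delta_error(P, P_hat, lam=0.15, eps_db=20.0, n_bands=34):
    """Summed per-element error: clipped, weighted |.| for IID rows, plain |.| for IC rows."""
    iid, ic = P[:n_bands], P[n_bands:]
    iid_hat, ic_hat = P_hat[:n_bands], P_hat[n_bands:]
    e_iid = lam * np.abs(np.clip(iid, -eps_db, eps_db) - np.clip(iid_hat, -eps_db, eps_db))
    return e_iid.sum() + np.abs(ic - ic_hat).sum()

def e_min(P, P_hat_samples):
    """Minimum error over the K generated parameter sets of one excerpt."""
    return min(delta_error(P, P_hat) for P_hat in P_hat_samples)
```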
The second metric we consider is the Frechet distance on the PS parameter space, \(D_{\text{F}}\). Given a pool of PS parameters \(\mathbf{\mathcal{P}}\) and a \(K\) times larger pool of generated parameters \(\hat{\mathbf{\mathcal{P}}}\), assuming Gaussian distributions, \(D_{\text{F}}\) is computed as
\[D_{\text{F}}=\Big{|}\mu(\mathbf{\mathcal{P}})-\mu(\hat{\mathbf{\mathcal{ P}}})\Big{|}^{2}+\\ +\text{Tr}\left\{\sigma(\mathbf{\mathcal{P}})+\sigma(\hat{\mathbf{\mathcal{ P}}})-2\sqrt{\sigma(\mathbf{\mathcal{P}})\sigma(\hat{\mathbf{\mathcal{P}}})}\right\},\]
where \(\text{Tr}\{\}\) denotes the matrix trace and \(\mu\) and \(\sigma\) correspond to the mean vector and the covariance matrix over frames, respectively. The Frechet distance has become a standard measure in generative modeling where, instead
of the PS parameters used here, activations of pre-trained classification networks are used. We will see that it is also able to provide some informative cues in our task (Sec. 5).
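A standard NumPy/SciPy sketch of this distance, with the PS parameters arranged as (frames × parameters) pools and the Gaussian assumption stated above:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(pool_real, pool_gen):
    """Frechet distance between two pools of PS-parameter frames,
    each an array of shape (n_frames, n_params)."""
    mu_r, mu_g = pool_real.mean(axis=0), pool_gen.mean(axis=0)
    cov_r = np.cov(pool_real, rowvar=False)
    cov_g = np.cov(pool_gen, rowvar=False)
    cov_sqrt = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(cov_sqrt):        # numerical noise can yield tiny imaginary parts
        cov_sqrt = cov_sqrt.real
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r + cov_g - 2.0 * cov_sqrt))
```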
### Subjective evaluation
Given the creative/subjective nature of the task, the best way to measure performance is through subjective testing. In this study, we ran a preference test with 24 listeners on 7 song excerpts of 10 s from the test set. To select those excerpts, we ranked the test excerpts based on \(w\) (Eq. 3) and randomly selected them from the top quartile. When doing so, we manually verified that the selected excerpts covered distinct musical genres and ensured that a PS-decoded version did not exhibit significant coding degradation (this way, we prime the listener to focus on the stereo image instead of potential artifacts introduced by our implementation of PS, Sec. 2).
The test consisted of providing a rating between 0 and 100 for 7 approaches: the three proposed ones, the two baselines, the mono downmix, and the original stereo signal (professional mix, non-coded). Mono and stereo signals provide us with intuitive bounds for the analysis of preference, and also serve us to discard non-expert listeners. Indeed, we found that the task is quite hard for non-experts, who provided many inconsistent ratings when asked to evaluate an appropriate balance between the width and the clarity of the mix. We used the most obvious of those inconsistencies to discard listeners from the test, namely the fact that they rated mono (input) over stereo (professional mix) on one or more occasions. Half of the users (12) did not exhibit such an inconsistency and were considered reliable enough to derive conclusions from their ratings. To compensate for differences in subjective scales, we normalized excerpt preference tuples between 0 and 1 (that is, we normalized the ratings for the 7 approaches independently per audio excerpt and listener). To measure statistical significance, we used pairwise Wilcoxon signed-rank tests and applied the Holm-Bonferroni adjustment for multiple testing with \(p=0.05\). The Wilcoxon signed-rank test is appropriate for our case as it is non-parametric and designed for matched samples.
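A sketch of this significance analysis with SciPy; the per-approach input arrays are assumed to hold matched (listener, excerpt) normalized ratings in the same order.

```python
import numpy as np
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_significance(ratings, alpha=0.05):
    """ratings: dict {approach: 1-D array of matched scores}.
    Returns {(a, b): bool} after a Holm-Bonferroni adjustment."""
    pairs = list(combinations(sorted(ratings), 2))
    pvals = np.array([wilcoxon(ratings[a], ratings[b]).pvalue for a, b in pairs])
    order = np.argsort(pvals)
    significant, failed = {}, False
    for rank, i in enumerate(order):
        if not failed and pvals[i] <= alpha / (len(pairs) - rank):
            significant[pairs[i]] = True
        else:
            failed = True                 # once one test fails, all later ones fail
            significant[pairs[i]] = False
    return significant
```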
## 5 Results
In Fig. 1 we depict the average listener preference for each item and approach. Initially, we see that the pattern differs depending on the test item. For some items, the proposed approaches are preferred over the baselines (e.g., Electro1, Jazz1, and Latin3) while, for some other items, differences between approaches are less clear (e.g., Rock1 and Soul2). All approaches seem to be preferred above the mono signal, except for baseline approaches with Electro1. Noticeably, in some situations, preference for some of the proposed approaches even overlaps with the original stereo (e.g., Electro1, Latin3, and Soul2). The case of Soul2 shows an example where considered approaches are almost as preferred as the original stereo, whereas the case of Jazz1 shows an example where considered approaches are still far from the professional mix.
Despite the different preferences on individual excerpts, upon further inspection we see that a clear pattern emerges when considering all items: proposed approaches rank better than mono and the considered baselines (Fig. 1, right). In Table 1 we confirm that, on average, PS-AR is preferred over the baseline approaches and that, in turn, PS-NN and PS-MTM are preferred over PS-AR. In Table 2, we report statistically significant differences between PS-NN/PS-MTM and the baseline approaches, but not between PS-AR and the baseline approaches (and neither between PS-AR and PS-NN/PS-MTM nor between PS-NN and PS-MTM). Overall, the results show that a generative approach to PS prediction can become a compelling system for mono-to-stereo. The performance of PS-NN is a nice surprise that was not predicted by the objective metrics, which otherwise seem to correlate with listener preference (Table 1; perhaps PS-NN does not follow the trend because it is not a generative approach).
Besides quality, another aspect worth considering is speed. In Table 3 we observe that PS-AR, as anticipated, is orders of magnitude slower than the other approaches, to the point of making it impractical for real-world operation.
Figure 1: Preference results for the items included in the subjective test (Sec. 4.3). Markers indicate average values and vertical bars indicate the 95% confidence interval associated with them.
Decorr, PS-Reg, and PS-NN are faster than real-time on CPU and PS-MTM is not. However, one should note that with PS-MTM we can easily trade off sampling iterations at the expense of some quality reduction (see [20, 26]). PS-NN may dramatically improve speed if we consider the use of fast nearest neighbor search algorithms or even hash tables, which make this approach very interesting for real-world deployment (note we deliberately made PS-NN comparable in size to the other approaches, see Table 3).
## 6 Discussion
Despite the good results obtained above, the subjective test reveals that, for some of the considered excerpts, there is still a gap between professional stereo mixes and the proposed approaches. We hypothesize that this gap is due to (i) limitations of the considered PS encoding, and (ii) the difficulty of the task itself. Regarding (i), we suspect that part of the low subjective scores of PS-based approaches is due to the audio distortions and tonal artifacts introduced by the PS decoding. Thus, we hypothesize that using a commercial implementation of PS coding (or perhaps even learning end-to-end the coding operation) could yield better results. Besides, we think that the fact that PS is defined in a banded domain poses a challenge to PS generation approaches, namely that individual bands are panned but approaches do not have an explicit notion of instrument or 'entity'. Indeed, we sometimes observe individual entities being panned into two different positions simultaneously (e.g., for the same instrument, we may get some frequencies panned to the left and some to the right, which is an uncommon stylistic decision). A potential solution to this problem could be to add better (or more) inputs to the models, together with more capacity, with the hope that they achieve a better understanding of what is a source before panning it. Along this line, it would be perhaps interesting to include some techniques used in the context of source separation with neural network models [11]. Regarding (ii), another issue we sometimes observe is with the temporal consistency of panning decisions, with an instrument appearing predominantly in one channel but then moving (without much artistic criterion) to the other channel after 10 or 20 s. Handling temporal consistency is a transversal problem across all generative models, typically handled by brute force (that is, more receptive field and/or larger models) or by some form of hierarchical or recurrent processing. Nonetheless, it is still an open issue, especially in the case of really long sequences like audio and music.
In addition to the limitations inherent to the technology, there are also some shortcomings in the test methodology. The subjective tests were conducted using headphones, whereas stereo images are typically created and mixed in a studio using professional loudspeaker monitoring. This implies that when critically evaluating the proposed approaches on a professional setup, additional subtleties might be discernible. Another methodological challenge was that often users had difficulty in evaluating multiple test excerpts according to the stated evaluation criteria. A potentially contributing factor to it was the absence of a standardized test methodology for multiple preference testing without a reference.
## 7 Conclusion
In this work we study methods to convert from mono to stereo. Our proposal entails (i) the use of PS for mono to stereo upmixing and (ii) the synthesis of PS parameters with three machine learning methods. We also introduce (iii) the use of modern generative approaches to the task and propose two variants of them. We additionally (iv) overview and adapt an existing PS methodology and (v) propose two tentative objective metrics to evaluate stereo renderings. The three proposed approaches outperform the classical and the deep neural network baselines we consider, and two of such approaches stand out with a statistically significant difference in the subjective test.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Approach & \(E_{\min}\downarrow\) & \(D_{\text{F}}\downarrow\) & Preference \(\uparrow\) \\ \hline \hline Mono & 0.104 & 20.89 & 0.090 \(\pm\) 0.042 \\ PS-Reg & 0.069 & 8.11 & 0.451 \(\pm\) 0.066 \\ Decorr & 0.093 & 8.32 & 0.457 \(\pm\) 0.064 \\ \hline PS-AR & 0.074 & 0.62 & 0.527 \(\pm\) 0.060 \\ PS-NN & 0.089 & 3.08 & 0.582 \(\pm\) 0.057 \\ PS-MTM & 0.068 & 0.59 & 0.608 \(\pm\) 0.050 \\ \hline Stereo & 0.000 & 0.03 & 0.908 \(\pm\) 0.042 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for the objective (\(E_{\min}\), \(D_{\text{F}}\)) and subjective (Preference \(\pm\) 95% confidence interval) evaluations.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & PS-Reg & Decorr & PS-AR & PS-NN & PS-MTM & Stereo \\ \hline Mono & ✓ & ✓ & ✓ & ✓ & ✓ \\ PS-Reg & & ✗ & ✓ & ✓ & ✓ \\ Decorr & & ✗ & ✓ & ✓ & ✓ \\ PS-AR & & & ✗ & ✗ & ✓ \\ PS-NN & & & & ✗ & ✓ \\ PS-MTM & & & & & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pairwise statistical significance for the case of all test items (12 subjects times 7 excerpts, see Sec. 4.3). The obtained \(p\)-value threshold is 0.0053.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Approach & Learnable & \multicolumn{2}{c}{RTF \(\downarrow\)} \\ & parameters & CPU & GPU \\ \hline \hline Decorr & 0 & 0.25 & n/a \\ PS-Reg & 30.1 M & 0.32 & 0.21 \\ PS-NN & 34.0 M\({}^{\dagger}\) & 0.82 & n/a \\ PS-MTM & 34.5 M & 5.81 & 0.33 \\ PS-AR & 34.5 M & 255.87 & 8.38 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Number of learnable parameters and average real-time factor (RTF). Superscript \({}^{\dagger}\) indicates an estimation of 0.5 M key-value pairs with \(B=34\) bands (Sec. 3.1). RTFs are measured on a Xeon(R) 2.20 GHz CPU and on a GeForce GTX 1080-Ti GPU.
## 8 Acknowledgments
We thank all the participants of the listening test for their input and Gautam Bhattacharya and Samuel Narvaez for preliminary discussions on the topic.
|
2308.12942 | Excitons in twisted AA' hexagonal boron nitride bilayers | The twisted hexagonal boron nitride (hBN) bilayer has demonstrated
exceptional properties, particularly the existence of electronic flat bands
without needing a magic angle, suggesting strong excitonic effects. Therefore,
a systematic approach is presented to study the excitonic properties of twisted
AA' hBN using the Bethe-Salpeter equation based on single-particle
tight-binding wave functions. These are provided by a one-particle Hamiltonian
that is parameterized to describe the main features of {\it ab initio}
calculations. The Bethe-Salpeter equation is then solved in the so-called
excitonic transition representation, which significantly reduces the problem
dimensionality by exploiting the system's symmetries. Consequently, the
excitonic energies and the excitonic wave functions are obtained from the
direct diagonalization of the effective two-particle Hamiltonian of the
Bethe-Salpeter equation. We have studied rotation angles as low as
$7.34^{\circ}$. The model allows the study of commensurate and incommensurate
moir\'e patterns at much lower computational cost than the {\it ab initio}
version of the Bethe-Salpeter equation. Here, using the model and effective
screening of the Keldysh type, we could obtain the absorption spectra and
characterize the excitonic properties of twisted hBN bilayers for different
rotation angles, demonstrating how this property affects the excitonic energies
and localizations of their wavefunctions. | Pedro Roman-Taboada, Estefania Obregon-Castillo, Andrés R. Botello-Mendez, Cecilia Noguez | 2023-08-24T17:30:45Z | http://arxiv.org/abs/2308.12942v1 | # Excitons in twisted AA' hexagonal boron nitride bilayers
###### Abstract
The twisted hexagonal boron nitride (hBN) bilayer has demonstrated exceptional properties, particularly the existence of electronic flat bands without needing a magic angle, suggesting strong excitonic effects. Therefore, a systematic approach is presented to study the excitonic properties of twisted AA' hBN using the Bethe-Salpeter equation based on single-particle tight-binding wave functions. These are provided by a one-particle Hamiltonian that is parameterized to describe the main features of _ab initio_ calculations. The Bethe-Salpeter equation is then solved in the so-called excitonic transition representation, which significantly reduces the problem dimensionality by exploiting the system's symmetries. Consequently, the excitonic energies and the excitonic wave functions are obtained from the direct diagonalization of the effective two-particle Hamiltonian of the Bethe-Salpeter equation. We have studied rotation angles as low as \(7.34^{\circ}\). The model allows the study of commensurate and incommensurate moire patterns at much lower computational cost than the _ab initio_ version of the Bethe-Salpeter equation. Here, using the model and effective screening of the Keldysh type, we could obtain the absorption spectra and characterize the excitonic properties of twisted hBN bilayers for different rotation angles, demonstrating how this property affects the excitonic energies and localizations of their wavefunctions.
excitons, Bethe-Salpeter equation, many-body, optics, twisted-hBN
## I Introduction
Two-dimensional (2D) materials and their heterostructures are ideal candidates for many technological applications [1; 2; 3; 4]. In particular, they are suitable for nanophotonics applications [5; 6; 7]. These kinds of materials can give rise to strong light-matter interactions through a myriad of dipole-type polaritonic excitations, such as infrared-active phonons [8], excitons in 2D semiconductors [9; 10; 11], and plasmons in doped 2D materials [12]. In 2D semiconductors, the appearance of strongly bound excitons opens the possibility of realizing efficient energy transfer by driving charge excitons via applied electric fields [13; 14]. Furthermore, they are used in solar cells [15] and photodetectors [16]. In this context, a 2D hexagonal boron nitride (hBN) monolayer is a semiconductor with a wide band gap, which exhibits strong correlation effects. For instance, excitons are characterized by binding energies of about 2 eV, which makes hBN an excellent candidate for applications in optoelectronic devices in the deep-ultraviolet region [17; 18]. On the other hand, heterostructures made out of 2D hBN monolayers are also of great interest. For example, by twisting an hBN bilayer, one can obtain flat bands without the necessity of magic angles [19; 20] due to an enhancement of the correlation effects, thus, improving its excitonic properties.
The Bethe-Salpeter equation is the most common approach to study excitonic phenomena in condensed matter systems. It is a four-point equation that allows the calculation of the two-particle correlation function and describes the propagation of two particles within the Green's function formalism [21]. Its \(\mathbf{k}\)-space representation is often used to deal with periodic systems [22; 23]. Unfortunately, the _ab initio_ form of the Bethe-Salpeter equation can be extremely cumbersome and hard to solve even for periodic systems with few atoms per unit cell. One example is the commensurate twisted hBN bilayer, whose excitonic properties are impossible to compute by this means even for its smallest unit cell. This translates into the need for sophisticated numerical approaches and huge computational resources [24; 25; 26; 27].
Typically, one first computes the quasiparticle band structures fed into the Bethe-Salpeter equation using a combination of Density Functional Theory (DFT) and many-body perturbation theory [28; 29; 30] (_e.g._, GW approximation) and time-dependent DFT [31; 32; 33]. Instead of performing these expensive band structure calculations, one can directly employ a continuum or a tight-binding model to get them, significantly reducing the complexity of the problem. The continuum model works well for twisted systems with extensive unit cells or, equivalently, minimal rotation
angles [34; 35]. In this case, an effective continuum Hamiltonian can be written in \(\mathbf{k}\)-space that is often expanded in Bloch states, forming a momentum lattice that is then truncated [36]. This approach allows fast and precise calculation of the band structure of the twisted systems within a reduced range of energies. However, it is still being determined if the excitonic properties are correctly described since the model needs to include a detailed local atomic description. This latter is crucial for the correct computation of the dielectric function. Despite this, the approach has shown to be a good approximation for the case of non-twisted AA' and AB hBN bilayers but might fail to predict their optical response satisfactorily [30].
Using single-particle tight-binding states as input for the Bethe-Salpeter equation in real space has proven very successful [37; 38; 39; 40; 41]. Here, we employ a parameterized single-particle tight-binding model to include quasiparticle corrections. When this approach is applied to periodic systems, their symmetries are exploited to reduce the problem's dimensionality remarkably. This latter is possible because the number of electron-hole pairs needed to solve the Bethe-Salpeter equation can be hugely reduced by considering only non-equivalent hole sites [39]. Then, one can use the excitonic transition basis, built on single-particle wave functions, to write down the effective two-particle Hamiltonian of the Bethe-Salpeter equation.
Taking all this into account, the aim of this paper is twofold. First, we present a systematic approach to studying the excitonic properties of twisted AA' hBN bilayers using the Bethe-Salpeter equation based on adequate tight-binding models. Using moderate computational resources, the model allows us to analyze systems with unit cells with thousands of atoms. For the sake of simplicity, we describe the Bethe-Salpeter kernel, _i.e._, the direct Coulomb interaction and the exchange effects, by a model potential or, in other words, an effective screening. This approximation is only valid when the model potential is relatively smooth. To solve the Bethe-Salpeter equation, we write it down into the basis of excitonic transitions, which are constructed using the tight-binding single-particle wave functions. Second, we apply the model to twisted AA' hBN bilayers for several rotation angles to study their excitonic properties. As a remark, even though we have employed our model to hBN bilayers of the AA' type, its application to other vertical stacking systems is straightforward. Also, considering other 2D crystalline structures is possible as long as suitable tight-binding models and a smooth electron-hole interaction model potential are available.
This work is organized as follows. Section II describes the single-particle model that defines
the electronic properties of non-twisted and twisted AA' hBN bilayers. Section III is devoted to presenting the details of the two-particle model. All the approximations used here are listed, and the explicit form of the real space representation of the Bethe-Salpeter equation is shown. Section IV presents the results obtained for commensurate twisted AA' hBN bilayers, and the discussion. The conclusions are set down in section V. Finally, most of the technical information is given in the appendix.
## II Single-particle model
We define the tight-binding Hamiltonian used to describe the one-particle electronic properties of a non-twisted AA' hBN bilayer. To do so, we will assume that the electronic properties of an hBN bilayer at low energies are well described, within the one-particle approximation, by a \(p_{z}\) tight-binding Hamiltonian, \(H^{\rm 1P}\). In the second quantization formalism, \(H^{\rm 1P}\) takes the following form,
\[\begin{split} H^{\rm 1P}=&-t_{\parallel}\sum_{\langle i,j\rangle,\alpha}\left(c_{\alpha,i}^{\dagger}c_{\alpha,j}+\text{h.c.}\right)\\ &-t_{\perp}\sum_{i,j}^{M}\left(c_{1,i}^{\dagger}c_{2,j}+\text{h.c. }\right)\\ &+\sum_{i}^{M/2}\left[\Delta_{B}\left(c_{i}^{B}\right)^{\dagger} c_{i}^{B}+\Delta_{N}\left(c_{i}^{N}\right)^{\dagger}c_{i}^{N}\right],\end{split} \tag{1}\]
where \(\langle i,j\rangle\) indicates summation over nearest and next-nearest neighbors, \(c_{\alpha,i}\) is the annihilation operator for a \(p_{z}\) electron at the site \(i\) within the \(\alpha\) layer, with \(\alpha=1,2\). The operator \(c_{i}^{B}\) (\(c_{i}^{N}\)) annihilates an electron at the \(i\)-th boron (nitrogen) site. Therefore, \(\Delta_{N}\) and \(\Delta_{B}\) are the on-site energies of nitrogen and boron atoms, respectively. Finally, \(M\) is the number of atoms in the system, and \(t_{\parallel}\) is the intralayer hopping parameter, _i.e._, the interaction between atoms within the same layer. At the same time, \(t_{\perp}\) stands for the interlayer hopping parameter, _i.e._, the interaction between atoms at different layers.
Here, intralayer hopping parameters are considered constant, although interactions up to second-nearest neighbors are included. As a result, we have three types of interactions, namely, boron-nitrogen (\(t_{\parallel}^{\rm BN}\)), nitrogen-nitrogen (\(t_{\parallel}^{\rm NN}\)), and boron-boron (\(t_{\parallel}^{\rm BB}\)) interactions. In contrast, to correctly reproduce the electronic properties of hBN bilayers, the interlayer hopping parameters
must follow \(V_{pp}\) interactions,
\[\begin{split} t_{\perp}(r)&=n^{2}V_{pp\sigma}e^{-\beta _{1}(r-c_{0})/c_{0}}\\ &+(1-n^{2})V_{pp\pi}e^{-\beta_{2}(r-c_{0})/c_{0}},\end{split} \tag{2}\]
where \(n=\mathbf{r}\cdot\mathbf{e}_{z}/r\) is the direction cosine of the vector \(\mathbf{r}\) that joins two atoms at different layers and \(r=|\mathbf{r}|\). The parameters \(V_{pp\pi}\), \(V_{pp\sigma}\), \(\beta_{1}\), and \(\beta_{2}\) are fit to _ab initio_ calculations. As for the intralayer hopping parameters, we have three types of interlayer hopping parameters: \(t_{\perp}^{\text{BN}}\), \(t_{\perp}^{\text{NN}}\), and \(t_{\perp}^{\text{BB}}\). Additionally, all interactions are zero for distances bigger than a cutoff radius, \(r_{\text{cutoff}}=7\) Å. The best-fit values of all these parameters are listed in Table 1. In the appendix, a detailed explanation of how the parameters were tested for different hBN bilayers and the errors associated with the parameters, listed in Table 2, can be found.
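A sketch of the distance-dependent interlayer hopping of Eq. (2) is given below; the interlayer spacing `c0` is not quoted in this section, so the value used here is an assumed typical AA' hBN distance, and the Table 1 parameters would be passed in per atom-pair type.

```python
import numpy as np

def t_perp(r_vec, v_pp_sigma, v_pp_pi, beta1, beta2, c0=3.33, r_cutoff=7.0):
    """Interlayer hopping (eV) for a bond vector r_vec (angstrom) between layers."""
    r = np.linalg.norm(r_vec)
    if r > r_cutoff:                      # interactions vanish beyond the cutoff radius
        return 0.0
    n = r_vec[2] / r                      # direction cosine with the z axis
    return (n**2 * v_pp_sigma * np.exp(-beta1 * (r - c0) / c0)
            + (1.0 - n**2) * v_pp_pi * np.exp(-beta2 * (r - c0) / c0))

# e.g., a boron-nitrogen pair using the Table 1 values:
# t_perp(np.array([0.0, 0.0, 3.33]), v_pp_sigma=0.398, v_pp_pi=-1.932,
#        beta1=3.739, beta2=6.292)
```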
### Non-twisted AA' hBN bilayer
To validate the model, we computed the band structure and the density of states (DOS) of a non-twisted AA' hBN bilayer. It was done by directly diagonalizing the periodic version of equation (1). For this purpose, we used a linear combination of localized atomic orbitals, chosen as Bloch functions. The band structure for a \((\mathbf{\Gamma},\mathbf{K},\mathbf{M},\mathbf{\Gamma})\)\(\mathbf{k}\)-path and the corresponding DOS are shown in Fig. 1. It can be seen that the tight-binding model (red lines) has an indirect band gap going from some point near the \(\mathbf{K}\)-point at the valence band to the \(\mathbf{K}\)-point at the conduction band, whose size is approximately 7.87 eV. This latter value was obtained after adjusting the parameters to reproduce the band gap at the \(\mathbf{\Gamma}\)-point provided by GW calculations [40; 41].
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & BN & NN & BB \\ \hline \(V_{pp\pi}\) [eV] & -1.932 & -1.162 & -1.323 \\ \hline \(V_{pp\sigma}\) [eV] & 0.398 & 0.153 & 0.817 \\ \hline \(\beta_{1}\) & 3.739 & 11.91 & 4.156 \\ \hline \(\beta_{2}\) & 6.292 & 8.317 & 5.940 \\ \hline \(t_{\parallel}\) [eV] & -2.725 & 0.222 & 0.019 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used here to describe the one-particle electronic structure of AA’ hBN bilayers.
Besides that, this model qualitatively reproduces the main features found for AA' hBN bilayer in DFT calculations (blue lines in Fig. 1, near the \(\mathbf{K}\)-point), as long as a scissor operation is performed to reproduce the GW band gap correctly. The corresponding normalized DOS for our tight-binding model is presented in the right panel of Fig. 1. Once the model has been validated, we study the case of twisted AA' hBN bilayers.
### Twisted AA' hBN bilayer
Each layer of the bilayer system has a unit cell made of two atoms, one boron and one nitrogen, defined with the following unit vectors,
\[\begin{split}\mathbf{a_{1}}&=a(1,0)\\ \mathbf{a_{2}}&=a\left(\sqrt{3},1\right)/2.\end{split} \tag{3}\]
Figure 1: Right panel. Single-particle tight-binding (red lines) and DFT (blue lines) electronic band structure of a non-twisted hBN bilayer of type AA’. The tight-biding model reproduces the indirect band gap around the \(\mathbf{K}\)-point. Although this model is not in quantitative agreement with DFT calculations, it qualitatively reproduces their features at low energies, _i.e._, near the \(\mathbf{K}\) point. Left panel. The corresponding normalized DOS for the tight-binding model.
The superlattice of a commensurate twisted hBN bilayer can be determined by its unit vectors, which are established via two integers, \(m_{1}\) and \(m_{2}\), as usual [42; 43],
\[\begin{split}\mathbf{L_{1}}&=m_{1}\mathbf{a}_{1}+m_{2}\mathbf{a}_ {2}\\ \mathbf{L_{2}}&=-m_{2}\mathbf{a}_{1}+(m_{1}+m_{2})\mathbf{a}_{2}. \end{split} \tag{4}\]
For example, the twisted AA' hBN bilayer with \((m_{1},m_{2})=(2,1)\) is shown in Fig. 2. The rotation angle between layers can be obtained as follows,
\[\cos{(\theta)}=\frac{m_{1}^{2}+4m_{1}m_{2}+m_{2}^{2}}{2(m_{1}^{2}+m_{1}m_{2}+m_{2}^{2})}. \tag{5}\]
In this work, we have studied four pairs of indices \(m_{1}\) and \(m_{2}\), namely, \((2,1)\), \((3,2)\), \((4,3)\), and \((5,4)\). These indices are equivalent to rotation angles given by \(\theta=21.79^{\circ}\), \(13.17^{\circ}\), \(9.43^{\circ}\), and \(7.34^{\circ}\). Although all the calculations were performed using periodic boundary conditions from here on, it is essential to mention that our model can also be applied to incommensurate twisted hBN bilayers.
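The commensuration geometry can be checked with a few lines of NumPy; the lattice constant `a` below is an assumed illustrative value, and the in-plane vector convention follows Eq. (3).

```python
import numpy as np

def twist_angle(m1, m2):
    """Rotation angle (degrees) of the (m1, m2) commensurate bilayer, Eq. (5)."""
    c = (m1**2 + 4*m1*m2 + m2**2) / (2.0 * (m1**2 + m1*m2 + m2**2))
    return np.degrees(np.arccos(c))

def superlattice_vectors(m1, m2, a=2.5):
    """Superlattice vectors L1, L2 of Eq. (4) from the monolayer vectors of Eq. (3)."""
    a1 = a * np.array([1.0, 0.0])
    a2 = a * np.array([np.sqrt(3.0), 1.0]) / 2.0
    return m1 * a1 + m2 * a2, -m2 * a1 + (m1 + m2) * a2

# twist_angle(2, 1) ~ 21.79, twist_angle(3, 2) ~ 13.17,
# twist_angle(4, 3) ~ 9.43,  twist_angle(5, 4) ~ 7.34 (degrees)
```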
The _ab initio_ calculation shows that flat bands can be obtained without a magic angle by twisting AA' hBN bilayers [20]. Flat bands emerge at large rotation angles, for example, at
Figure 2: The lattice structure of twisted hBN bilayer for \((m_{1},m_{2})=(2,1)\). Solid blue and dotted violet lines represent the first and second layers, respectively. Here, the second layer is rotated with respect to the first one. The unit vectors of layer 1 (layer 2) are denoted by \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) (\(\mathbf{a}_{1}^{\prime}\) and \(\mathbf{a}_{2}^{\prime}\)), while the unit vectors of the superlattice (\(\mathbf{L}_{1}\) and \(\mathbf{L}_{2}\)) are given by \(\mathbf{L}_{1}=m_{1}\mathbf{a}_{1}+m_{2}\mathbf{a}_{2}\) and \(\mathbf{L}_{2}=-m_{2}\mathbf{a}_{1}+(m_{1}+m_{2})\mathbf{a}_{2}\).
Figure 3: Panel a). Single-particle band structures of a twisted AA’ hBN bilayer with \((m_{1},m_{2})=(2,1)\) for the left panel and \((m_{1},m_{2})=(3,2)\) for the right one. Such band structures were obtained from our one-particle tight-binding model. In the left panel, note how the valence and conduction bands begin to flatten near the \(\mathbf{K}\)-point, being more pronounced on the right panel, where truly flat bands appear. This effect agrees with _ab initio_ calculations previously reported [20]. Panel b). In this plot, we display a zoom of the single-particle DOS near the conduction and valence band edge for several rotation angles. In this way, it can be observed how sharp peaks emerge inside the band gap of a non-twisted AA’ hBN bilayer as the rotation angle decreases.
\(13.17^{\circ}\). Our simple model qualitatively reproduces this feature of twisted AA' hBN bilayers. In Fig. 3 (a), we show the band structure of a twisted hBN bilayer for two pairs of \((m_{1},m_{2})\) indices, namely \((m_{1},m_{2})=(2,1)\) and \((3,2)\). Although the band structure for the \((m_{1},m_{2})=(2,1)\) case does not show any flat band (see left panel of Fig. 3 (a)), note how the states at the \(\mathbf{K}\)-point begin to flatten. In fact, by decreasing the rotation angle to \(\theta=13.17^{\circ}\), real flat bands emerge, as seen in the right panel of Fig. 3(a), since they are isolated from the other bands by about 0.01 eV. For rotation angles (\(\theta\)) smaller than \(13.17^{\circ}\), the width of the flat bands diminishes. This feature can be better appreciated by plotting the DOS for various angles, as is done in Fig. 3(b). Note how sharp peaks emerge within the band gap of a typical AA' hBN bilayer as the rotation angle decreases (see Fig. 3 (b)). For angles smaller than \(21.79^{\circ}\), these peaks are a signature of flat bands near the conduction and valence band edges.
After having presented and proven the validity of our model, we proceed to its application to study the excitonic properties of twisted AA' hBN bilayer. This study is done in the next section, where we briefly review the Bethe-Salpeter equation and introduce our approach to treating excitonic effects on periodic 2D systems.
## III Bethe-Salpeter equation
### Reciprocal space representation
In condensed matter, the Bethe-Salpeter equation (BSE) describes bound states in a two-body particle system that, in the \(\mathbf{k}\)-space, is often written as an effective eigenvalue problem for electron-hole pairs [39; 40],
\[\begin{split}&\left(E_{\mathbf{k}c}^{\rm 1P}-E_{\mathbf{k}v}^{\rm 1P} \right)\Psi_{\mathbf{k}vc}^{\lambda}\\ &+\sum_{\mathbf{k}^{\prime}v^{\prime}c^{\prime}}\left\langle\mathbf{k}vc \right|K_{\rm eh}\left|\mathbf{k}^{\prime}v^{\prime}c^{\prime}\right\rangle\Psi_{ \mathbf{k}^{\prime}v^{\prime}c^{\prime}}^{\lambda}=E^{\lambda}\Psi_{\mathbf{k}vc}^{ \lambda},\end{split} \tag{6}\]
where \(E_{\mathbf{k}c}^{\rm 1P}\) and \(E_{\mathbf{k}v}^{\rm 1P}\) are the single-particle conduction and valence band energies, respectively. The electron-hole interaction kernel is denoted by \(K_{\rm eh}\), and
\[\left|\mathbf{k}n_{1}n_{2}\right\rangle=\left|\mathbf{k}n_{1}\right\rangle\otimes\left| \mathbf{k}n_{2}\right\rangle \tag{7}\]
with \(\left|\mathbf{k}n_{1}\right\rangle\) being a single-particle Bloch function obtained from the one-particle Hamiltonian (1). Additionally, \(E^{\lambda}\) stands for the \(\lambda\)-th excitonic energy and \(\Psi_{\mathbf{k}vc}^{\lambda}\) are the expansion coefficients of
the \(\lambda\)-th excitonic state \(\left|\Psi^{\lambda}\right\rangle\) in terms of electron-hole excitations in the reciprocal space, given by \(\left|\Psi^{\lambda}\right\rangle=\sum_{vc\mathbf{k}}\Psi^{\lambda}_{\mathbf{k}vc}\left|\mathbf{k}vc\right\rangle\). Here
\[\left|\mathbf{k}vc\right\rangle=a^{\dagger}_{c\mathbf{k}}a_{v\mathbf{k}}\left|\emptyset \right\rangle, \tag{8}\]
with \(\left|\emptyset\right\rangle\) denoting the vacuum state. For simplicity, the spin of the electrons and holes is not considered, _i. e._, only singlet states were studied. This assumption can be justified since triplet states are dark when the spin-orbit coupling is not strong enough to induce spin-flips, as is the case for ideal hBN systems [44]. Additionally, only vertical transitions for the electron-hole pairs have been considered, leading to a vanishing wave vector of its center of mass, \(\mathbf{Q}=0\) in \(\mathbf{k}_{e}=\mathbf{k}_{h}+\mathbf{Q}=\mathbf{k}\). Under these conditions, the \(\lambda\)-th six-dimensional excitonic wave function can be expressed in terms of Bloch functions and excitonic weights,
\[\Psi^{\lambda}(\mathbf{r}_{e},\mathbf{r}_{h})=\sum_{vc\mathbf{k}}\Psi^{\lambda}_{\mathbf{kv}c }\varphi_{c\mathbf{k}}(\mathbf{r}_{e})\varphi^{*}_{v\mathbf{k}}(\mathbf{r}_{h}), \tag{9}\]
where \(\mathbf{r}_{e}\) and \(\mathbf{r}_{h}\) are the positions of the electron and hole, respectively.
### Real space representation
The conduction (valence) states near the band gap are concentrated in boron (nitrogen) atoms for AA' hBN bilayers [39; 20]. Under these circumstances, one can approximate equation (8) as,
\[\left|\mathbf{k}vc\right\rangle \approx a^{\dagger}_{B\mathbf{k}}a_{N\mathbf{k}}\left|\emptyset\right\rangle \tag{10}\] \[\approx\frac{1}{M}\sum_{\mathbf{\beta},\mathbf{\alpha}}a^{\dagger}_{B\bm {\alpha}}a_{N\mathbf{\beta}}e^{i\mathbf{k}\cdot(\mathbf{\alpha}-\mathbf{\beta})}\left|\emptyset \right\rangle.\]
Here, the sum runs over electron (\(\mathbf{\alpha}\)) and hole (\(\mathbf{\beta}\)) positions, while \(N\) and \(B\) are nitrogen and boron atoms, respectively. This latter sum can be further simplified by noting that for electron-hole pairs, it is enough to know their relative position, namely, \(\mathbf{R}=\mathbf{\alpha}-\mathbf{\beta}\). Hence, the sum in equation (10) can be rewritten as follows,
\[\left|\mathbf{k}vc\right\rangle \approx\frac{1}{M}\sum_{\mathbf{\beta},\mathbf{R}}a^{\dagger}_{N+\mathbf{R}, \mathbf{\beta}}a_{N\mathbf{\beta}}e^{i\mathbf{k}\cdot\mathbf{R}}\left|\emptyset\right\rangle \tag{11}\] \[\equiv\frac{1}{\sqrt{M}}\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\left| \mathbf{R}vc\right\rangle,\]
where we have defined \(\left|\mathbf{R}vc\right\rangle\) as
\[\left|\mathbf{R}vc\right\rangle=\frac{1}{\sqrt{M}}\sum_{\mathbf{\beta}}a^{\dagger}_{ N+\mathbf{R},\mathbf{\beta}}a_{N\mathbf{\beta}}\left|\emptyset\right\rangle. \tag{12}\]
The states \(\left|\mathbf{R}vc\right\rangle\) are the elementary excitonic states in real space. Equivalently, \(\left|\mathbf{R}vc\right\rangle\) can be seen as the Bloch state describing the motion of an electron-hole pair of size \(\mathbf{R}\).
Let us say a few words about the advantages of this type of basis. First, since the system under study (AA' hBN bilayers) is periodic, it is unnecessary to consider all the electron-hole pairs. By symmetry properties, examining only non-equivalent hole positions within the bilayer is enough, significantly reducing the problem's dimensionality. Second, by restricting electrons to be only at boron sites, the number of electron-hole pairs is decreased again. This assumption is based on previous results reported in Ref. [39], where authors found that near the gap, the valence (conduction) Bloch states are concentrated on nitrogen (boron) atoms, which has been corroborated in Refs. [40; 41].
Finally, considering up to first-order effects and neglecting exchange ones, the electron-hole interaction kernel of the BSE is diagonal. Although these approximations seem very drastic, their application to hBN bilayers results in good agreement with more complex and cumbersome models, such as Bethe-Salpeter calculations based on the GW approximation.
We have closely followed the work presented in reference [39]. However, we improve this model to adequately describe the excitonic properties of twisted AA' hBN bilayers. First, we allow the electrons to be at both boron and nitrogen sites. By doing this, we expect to correctly consider the complex atomic environment which characterizes twisted bilayer systems. Second, in Ref. [39], the kinetic part of the real-space BSE is obtained by first-order perturbation theory, which leads to a next-nearest neighbor approximation in the excitonic transition space. In this work, we follow a different approach, detailed below, that allows us to consider all types of interactions at once, albeit at a higher computational cost. Finally, instead of expanding the BSE Hamiltonian with a set of one-particle tight-binding \(s\)-orbitals, we use a set of \(p_{z}\) orbitals, which has been demonstrated to better reproduce the results of DFT calculations for non-twisted and twisted AA' hBN bilayers.
We now obtain the explicit representation of the BSE in real space. We substitute equation (11) into equation (6) to do so. After some manipulations, we obtain,
\[\begin{split} E^{\lambda}\Psi^{\lambda}_{\mathbf{R}vc}& =\sum_{\mathbf{R}^{\prime}}\left\langle\mathbf{R}vc\right|H^{1\text{P}} \left|\mathbf{R}^{\prime}v^{\prime}c^{\prime}\right\rangle\Psi^{\lambda}_{\mathbf{R}^ {\prime}v^{\prime}c^{\prime}}\\ &+\left\langle\mathbf{R}vc\right|K_{\text{eh}}\left|\mathbf{R}vc\right \rangle\Psi^{\lambda}_{\mathbf{R}vc}.\end{split} \tag{13}\]
To consider all possible interactions included in the kinetic energy term of the previous equation,
we compute it as follows,
\[\left\langle\mathbf{R}\right|H^{\rm 1P}\left|\mathbf{R^{\prime}}\right\rangle=\sum_{vc}(E_{ c}-E_{v})\left\langle\mathbf{R}|vc\right\rangle\left\langle vc|\mathbf{R^{\prime}} \right\rangle. \tag{14}\]
We have omitted the \(vc\) label in the excitonic states \(\left|\mathbf{R}vc\right\rangle\equiv\left|\mathbf{R}\right\rangle\) for simplicity. On the other hand, \(\left|vc\right\rangle=\left|v\right\rangle\otimes\left|c\right\rangle\), where \(\left|n\right\rangle\) with \(n=c,\,v\), is an eigenstate of \(H^{\rm 1P}\). In deriving equation (14), we have assumed that the \(\left|vc\right\rangle\) states are complete and that the \(H^{\rm 1P}\) can be split into two subspaces as \(H^{\rm 1P}\approx H_{c}\otimes\mathbf{1}_{v}-\mathbf{1}_{c}\otimes H_{v}\), which is reasonable for 2D semiconductors with a wide bandgap [39].
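As an illustration of how the kinetic term of equation (14) can be assembled in practice, the sketch below contracts the single-particle transition energies with precomputed overlaps \(\langle\mathbf{R}|vc\rangle\); the array names and shapes are our own assumptions for illustration and are not part of the released code.

```python
import numpy as np

def kinetic_block(E_v, E_c, A):
    """Kinetic part of the real-space BSE, Eq. (14):
    H_kin[R, R'] = sum_{v,c} (E_c - E_v) <R|vc> <vc|R'>.
    E_v : (n_v,) valence eigenvalues of H^1P
    E_c : (n_c,) conduction eigenvalues of H^1P
    A   : (n_v, n_c, n_R) complex overlaps A[v, c, R] = <R|vc>
    """
    dE = E_c[None, :] - E_v[:, None]          # (n_v, n_c) transition energies
    W = dE[:, :, None] * A                    # weight each pair by (E_c - E_v)
    # contract over the valence/conduction pair indices
    return np.einsum('vcr,vcs->rs', W, A.conj())
```

The double sum over valence and conduction states in this contraction is what produces the \((M/2)^{2}\) terms per matrix element quoted below.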
For the electron-hole interaction kernel, we neglect the exchange contributions. In this approximation and the basis \(\left|\mathbf{R}\right\rangle\), the kernel, up to first-order, becomes diagonal. By following reference [39], we also assume the kernel can be substituted by a model potential, particularly a potential of the 2D Keldysh type [45]:
\[V(r)=\frac{e}{4\pi\epsilon_{0}r_{0}}\frac{\pi}{2}\left[H_{0}\left(\frac{r}{r_ {0}}\right)-Y_{0}\left(\frac{r}{r_{0}}\right)\right], \tag{15}\]
where \(r_{0}\) is the screening length, and \(H_{0}\) and \(Y_{0}\) are the Struve function and the Bessel function of the second kind, respectively. As a reminder, this type of interaction behaves as a screened \(1/r\) Coulomb potential at long range. At the same time, it has a weak logarithmic divergence at short distances, where the screening distance, \(r_{0}\), determines the crossover. Thus, the matrix elements of the Bethe-Salpeter kernel, in the excitonic transition basis, can be written as,
\[\left\langle\mathbf{R}\right|K_{\rm eh}\left|\mathbf{R}\right\rangle=V(\mathbf{R}). \tag{16}\]
Since we are dealing with a bilayer system, two screening lengths are defined, namely \(r_{0}^{\rm IP}\) and \(r_{0}^{\rm IL}\). Here, \(r_{0}^{\rm IP}\) corresponds to the screening length of the Keldysh potential for interactions in a single layer, while \(r_{0}^{\rm IL}\) stands for the screening length for interactions among different layers. These parameters were adjusted to reproduce the excitonic energies of a non-twisted AA' hBN bilayer correctly. The best fitting values we have found are \(r_{0}^{\rm IP}=0.87\) nm and \(r_{0}^{\rm IL}=0.91\) nm.
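For concreteness, the Keldysh potential of equation (15) can be evaluated directly with the special functions available in SciPy. The minimal sketch below assumes the energy form of the prefactor, \(e^{2}/4\pi\epsilon_{0}\approx 14.40\) eV Å, so that the result is in eV; the function name and units are illustrative choices of ours.

```python
import numpy as np
from scipy.special import struve, y0

KE2 = 14.3996  # assumed prefactor e^2/(4*pi*eps_0) in eV*Angstrom

def keldysh(r, r0):
    """2D Keldysh potential of Eq. (15), in eV; r and r0 in Angstrom."""
    x = np.asarray(r, dtype=float) / r0
    return (KE2 / r0) * (np.pi / 2.0) * (struve(0, x) - y0(x))

# Intralayer and interlayer screening lengths fitted in the text (0.87 nm, 0.91 nm)
V_intra = keldysh(np.array([2.5, 5.0, 10.0]), r0=8.7)
V_inter = keldysh(np.array([3.3, 5.0, 10.0]), r0=9.1)
```

The logarithmic divergence at \(r\to 0\) and the screened \(1/r\) behavior at large \(r\) discussed above are both reproduced by this expression.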
In the excitonic transition basis, the \(\lambda\)-th excitonic wave function can be readily written as,
\[\begin{split}\Psi^{\lambda}(\mathbf{r}_{e},\mathbf{r}_{h})=\sum_{\mathbf{R}} \sum_{\mathbf{\beta}}\Psi^{\lambda}_{\mathbf{R}}\,\varphi_{e}(\mathbf{r}_{e}-\mathbf{\beta}- \mathbf{R})\times\\ \varphi^{*}_{h}(\mathbf{r}_{h}-\mathbf{\beta}),\end{split} \tag{17}\]
where \(\mathbf{r}_{h}\) is the position of the hole, \(\mathbf{r}_{e}\) is the position of the electron, \(\mathbf{\beta}\) runs over the position of the non-equivalent holes (nitrogen atoms), and \(\Psi_{\mathbf{R}}\) are the excitonic weights obtained from the eigenvectors of the effective two-particle Hamiltonian in equation (13).
This work mainly aims to apply the BSE (13) to twisted AA' hBN bilayers. For that, we feed the BSE (13) with the eigenvalues and eigenvectors of the one-particle Hamiltonian \(H^{\rm 1P}\), defined in equation (1). Before applying our model to twisted systems, we have verified that our model correctly reproduces the results from other authors [30; 39; 40; 41].
The path chosen to obtain the excitonic properties of AA' hBN bilayers is stepwise. The first step is building the periodic version of equation (1) for a given pair of \(m_{1}\) and \(m_{2}\) indices. Then, in the second step, we obtain the matrix representation of the real-space version of the BSE (13), taking into account as many holes as there are non-equivalent positions. Remember that we suppose that holes can only live on nitrogen atoms. For example, a non-twisted AA' hBN bilayer will have only two non-equivalent holes or nitrogen atoms, one per layer. However, the system's symmetry decreases as the rotation angle decreases. In other words, the unit cell becomes larger, giving rise to more non-equivalent sites for nitrogen atoms.
At this point, how the effective eigenvalue problem scales with the considered number of holes must be clarified. For simplicity, let us consider the case of a non-twisted hBN bilayer. As we have mentioned before, the number of non-equivalent holes (\(n_{\rm h}\)) for this system is two. Since the electron can be at any atomic site within the bilayer, we have \(n_{h}M\) electron-hole pairs, where \(M\) is the total number of atoms in the bilayer. Therefore, the matrix representation of equation (13) has dimension \(n_{\rm h}M\times n_{\rm h}M\). However, due to the sum over valence and conduction states appearing in the kinetic part of the equation (13), we need to sum \((M/2)^{2}\) items per matrix element. In this way, the building of the Bethe-Salpeter effective two-particle Hamiltonian scales as \(n_{\rm h}^{2}M^{4}/4\). As the prefactor \(n_{\rm h}\) grows, more computational resources are needed. For the rotation angles studied here, the number of non-equivalent holes is \(n_{h}=3\), 7, 13, and 21, corresponding to \(\theta=21.79^{\circ}\), \(13.17^{\circ}\), \(9.43^{\circ}\), and \(7.34^{\circ}\), respectively.
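A quick back-of-the-envelope estimate makes this scaling concrete. The snippet below simply evaluates the matrix dimension \(n_{h}M\) and the approximate number of terms \(n_{h}^{2}M^{4}/4\) for the systems studied here (atom counts taken from Appendix A); it is only an illustration of how the cost grows.

```python
# (theta in degrees) -> (n_h, number of atoms M in the supercell used)
cases = {21.79: (3, 700), 13.17: (7, 1900), 9.43: (13, 1332), 7.34: (21, 2196)}
for theta, (n_h, M) in sorted(cases.items(), reverse=True):
    dim = n_h * M                   # dimension of the BSE matrix
    terms = n_h**2 * M**4 / 4       # terms summed while building it
    print(f"theta = {theta:5.2f} deg: dim = {dim:6d}, build cost ~ {terms:.1e} terms")
```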
Once the matrix representation of equation (13) has been built, the third and final step is its numerical diagonalization. This last computation provides us with the excitonic energies and wave functions of the system in consideration. One of the main advantages of the real-space representation is that the excitonic wave functions can be straightforwardly plotted. The following section presents a detailed analysis of the excitonic energies and wave functions. Note that our
method allows us to study systems with around two thousand atoms in a few seconds, which is impossible when employing higher-level theories [46].
## IV Excitonic wave functions of twisted hBN
This section presents the application of the abovementioned model defined for commensurate twisted AA' hBN bilayers. Since excitonic effects play an important role in the optical spectrum of 2D semiconductors [9; 10; 11], including hBN, the characterization of the excitonic states is an essential ingredient for technological applications. Accordingly, we study the symmetry properties of excitonic wave functions obtained from the effective two-particle Hamiltonian in equation (13) for different rotation angles, \(\theta\). Before that, it is convenient to understand how the excitonic wave function can be obtained from the matrix representation, in the \(|\mathbf{R}\rangle\) basis, of equation (13). Any eigenvector of this matrix, with eigenvalue number \(\lambda\), can be written as \(n_{h}\) blocks of size \(M\), which means, \(\mathbf{\Psi}_{\mathbf{R}}^{\lambda}=\left(\mathbf{\Phi}_{h_{1}}^{\lambda},\mathbf{\Phi}_{h_{2}}^{\lambda},\ldots,\mathbf{\Phi}_{h_{n_{h}}}^{\lambda}\right)\). Each of these blocks, here represented by the numbers \(\mathbf{\Phi}_{h_{\alpha}}^{\lambda}\equiv\left\{\mathbf{\Phi}_{{\mathbf{r}}_{h_{\alpha}},{\mathbf{r}}_{e_{1}}}^{\lambda},\ldots,\mathbf{\Phi}_{{\mathbf{r}}_{h_{\alpha}},{\mathbf{r}}_{e_{M}}}^{\lambda}\right\}\), stands for all the electron-hole pairs which can be formed with a fixed hole \(h_{\alpha}\) located at \({\mathbf{r}}_{h_{\alpha}}\) (with \(\alpha=1,\ldots,n_{h}\)) and an electron located at \({\mathbf{r}}_{e_{\beta}}\), with \(\beta\) running over all the positions in the system, in other words, \(\beta=1,\ldots,M\). As a result, the total excitonic wave function, equation (17), is obtained by adding up these \(n_{h}\) parts. Remarkably, on the \(|\mathbf{R}\rangle\) basis, the excitonic wave function can easily be plotted in real space, enabling the characterization of its symmetry properties.
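A minimal sketch of this bookkeeping is given below: the eigenvector is reshaped into its \(n_{h}\) hole blocks, from which both the per-hole contributions (the pie chart of Fig. 4) and the real-space density \(|\Phi|^{2}\) plotted in Fig. 7 follow. The function name and the assumption that the blocks are stored contiguously are ours.

```python
import numpy as np

def split_exciton_eigenvector(psi, n_h, M):
    """Split a BSE eigenvector Psi_R^lambda of length n_h*M into its n_h hole
    blocks Phi_{h_alpha}^lambda, each of length M."""
    blocks = np.asarray(psi).reshape(n_h, M)      # blocks[alpha, beta] = Phi^lam_{r_h_alpha, r_e_beta}
    weights = np.sum(np.abs(blocks)**2, axis=1)   # contribution of each non-equivalent hole
    density = np.abs(blocks)**2                   # |Phi|^2 at each electron (atomic) position
    return blocks, weights / weights.sum(), density
```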
For illustrative purposes, in Fig. 4, a schematic representation of the second (\(\lambda=2\)) eigenvector of a twisted AA' hBN bilayer with \((m_{1},m_{2})=(2,1)\) is displayed. On the left-hand side of the figure, we show a particular case of \(\mathbf{\Psi}_{\mathbf{R}}^{\lambda}\) for \(n_{h}=3\) and \(\lambda=2\). As can be seen, that eigenvector is formed by three blocks, each having \(M\) items. It is important to remark that even though all the blocks have the same size, they do not equally contribute to the total excitonic wave function. To show this, we plot a pie chart on the right side of Fig. 4, wherein the different colors represent the contributions due to different holes. Each non-equivalent hole contributes differently to the excitonic wavefunction. The latter can be understood by noting that the atomic environment dramatically affects the potential that acts over the exciton. Therefore, its contribution to the excitonic wave function will be unique. Note how the complexity of \(\mathbf{\Psi}_{\mathbf{R}}^{\lambda}\) grows as the number of non-equivalent positions for the holes increases. In particular, for a twisted AA' bilayer with
\((m_{1},m_{2})=(5,4)\), we have that \(n_{h}=21\), as seen in Fig. 5. Even for this seemingly small number of atoms and a supercell of size \(3\times 3\), the computational resources needed for the diagonalization of the matrix representation of equation (13) are considerable. For us, studying angles smaller than \(\theta=6^{\circ}\) is out of reach because the number of Hamiltonian matrix elements will be \(\approx 10^{9}\) or larger, demanding a massive amount of RAM and CPU time exceeding several months.
### Excitonic wavefunction for \(\theta=7.34^{\circ}\)
For convenience, we have chosen to present only one case, specifically, \((m_{1},m_{2})=(5,4)\), which corresponds to a rotation angle of \(\theta=7.34^{\circ}\). We have selected this system because it is the smallest angle we can handle. Besides that, its main features are very similar to the ones that appear at other angles. Before showing the excitonic wave functions, it is interesting to look at the intralayer or interlayer character of the excitonic states of this particular case. With this aim, we plotted the projected two-particle DOS, over different layers, of the excitonic energies of a \((5,4)\)-twisted hBN bilayer in Fig. 6. It is noteworthy that this density of states was calculated
Figure 4: Schematic representation of the second (\(\lambda=2\)) eigenvector of the effective two-particle Hamiltonian of the BSE obtained for a twisted AA’ hBN bilayer with \((m_{1},m_{2})=(2,1)\). Its mathematical form is shown on the left of the figure. The full circle represents the total excitonic wave function, while each color represents the contribution of specific holes to it. The three non-equivalent holes for this case are displayed in Fig. 5, where the atomic environment of the holes is completely different from each other, leading to different contributions to the entire excitonic wavefunction.
Figure 5: Panel a). Non-equivalent position for nitrogen atoms (labeled by red solid stars) of a twisted AA’ hBN bilayer with \((m_{1},m_{2})=(2,1)\). Here and in panel b), atoms belonging to layer 1 (layer 2) are denoted by solid black (solid blue) circles. Panel b). Non-equivalent positions for nitrogen atoms of a twisted AA’ hBN bilayer with \((m_{1},m_{2})=(5,4)\). The non-equivalent positions, twenty-one for this particular case, are labeled by solid red stars.
using the excitonic energies weighted by the different contributions of the excitonic wavefunctions in real space considering a distance cutoff of 7 Å. We plotted the DOS projected over the first (second) layer in solid red (solid blue) lines in the lower panel of such a figure.
The difference between the two projections is exhibited as a solid green line in the upper panel. By inspecting the lower panel of Fig. 6, one can elucidate the interlayer or intralayer nature of a given excitonic state. Excitonic states with a strong intralayer character reside almost entirely at a single layer. Thus, the projected DOS must be more significant in one of the two layers. On the contrary, interlayer excitonic states are distributed among the two layers of the system. The latter leads to a projected DOS having similar contributions on both layers. For a \((5,4)\) twisted AA' hBN bilayer, the excitonic states with the lowest energies (from 6 eV to approximately 6.25 eV) have a strong intralayer character, whereas, for energies greater than 6.25 eV, they exhibit a robust interlayer nature, see Fig. 6. It has to be noted that due to the symmetry reduction imposed
Figure 6: Projected excitonic DOS of a twisted AA’ hBN bilayer with \((m_{1},m_{2})=(5,4)\) is shown. In the lower panel, solid red (blue) lines indicate the DOS projected over the first (second) layer. In the upper panel, the difference between the projected DOS over the first layer and the second one is presented as a solid green line. Intralayer states are characterized by having a very large projection at a single layer. On the other hand, interlayer states have similar projections over the two layers.
by the rotation of the layers, all the excitons, even the ones appearing at the lowest excitonic energies, become non-degenerate. The latter contrasts with the untwisted case, where most low-energy and bright excitons are doubly degenerate. This behavior is typical, at least, for the \(\theta\) angles we have studied.
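The layer-projected excitonic DOS of Fig. 6 can be assembled along the following lines, assuming one has already computed, for each excitonic state, the fraction of its wave function residing on a given layer; the Gaussian broadening and the function name are illustrative choices.

```python
import numpy as np

def projected_exciton_dos(E_exc, layer_weight, energy_grid, sigma=0.01):
    """Excitonic DOS projected on one layer: each excitonic energy E_exc[lam]
    contributes a Gaussian of width sigma, weighted by layer_weight[lam],
    the fraction of |Psi^lam|^2 located on that layer (energies in eV)."""
    E = np.asarray(E_exc)
    w = np.asarray(layer_weight)
    gauss = np.exp(-0.5 * ((energy_grid[:, None] - E[None, :]) / sigma)**2)
    gauss /= sigma * np.sqrt(2.0 * np.pi)
    return gauss @ w
```

States with a weight close to unity on one layer then appear only in that layer's projection (intralayer character), while weights near one half produce similar projections on both layers (interlayer character), as discussed above.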
We present two excitonic wave functions obtained for a \((5,4)\)-twisted AA' hBN bilayer. For demonstrative purposes, we have chosen an intralayer and an interlayer excitonic state to be analyzed. The \(\Phi_{h_{\beta}}^{\lambda}\) part of the excitonic wave functions for \((\beta,\lambda)=(6,24)\) and \((1,56)\) is shown in the corresponding a) and b) panels of Fig. 7. Note that the excitonic wave function of the panel a) is strongly localized at a single layer, the one where the hole is fixed (hence its intralayer nature) and that it resembles the symmetry of a single component of the first exciton appearing in a normal hBN bilayer, which is also of an intralayer nature (see, for example, reference [39]). As a
Figure 7: Excitonic wave functions of a \((5,4)\)-twisted AA’ hBN bilayer for two excitonic energies, with eigenvalue numbers \(\lambda=23\) and \(\lambda=55\) for the left, a), and right, b), panels, respectively. Each of these panels shows the excitonic wave function \(\Phi_{h_{\beta}}^{\lambda}\). Here we have chosen \(\beta=5\) [for a) panel] and \(0\) [for b) panel], projected on layer 1, on layer 2, and on both layers. Also, a lateral view is included to better appreciate the intralayer or interlayer character of the excitonic state. Solid circles indicate the site of the wave function in real space. Colors represent the square of the wave function amplitude, namely, \(\left|\Phi_{h_{\beta}}^{\lambda}\right|^{2}\) at each atomic position. Finally, the specific position of the hole is indicated by a solid red star.
result, the symmetry of this exciton is reduced compared to its degenerate version. The exciton presented in panel a) of Fig. 7 has the same symmetries as the antisymmetric component of the ground state exciton of a non-twisted AA' hBN bilayer.
On the other hand, the higher energy exciton, shown in panel b) of Fig. 7, has a greater spatial extension than the one appearing in panel a) on the same figure. As easily verified, this exciton has an interlayer character since most of its wave function is located at the layer with no holes. In addition, observe that this exciton's structure is more complex than the one of panel a). A more detailed study of the symmetry properties of these excitons is of great interest but is out of this work's scope and will be done elsewhere.
### Absorption spectrum for \(\theta=7.34^{\circ}\)
In order to know which of these excitonic states are bright or dark, one has to look at the absorption spectrum of the system, given by the imaginary part of the macroscopic dielectric function [47], which in the many-body perturbation case and within the dipole approximation becomes [48],
\[\epsilon(\omega)=\frac{8\pi^{2}e^{2}}{V}\sum_{\lambda}\left|\sum_{\mathbf{R}}\Psi _{\mathbf{R}}^{\lambda}\,d_{\mathbf{R}}\right|^{2}\delta(\omega-E^{\lambda}) \tag{18}\]
with the dipole element matrix given by
\[d_{\mathbf{R}}=\frac{i\hbar}{m}\mathbf{n}\cdot\frac{\left\langle\mathbf{R}cv\right|\mathbf{p} \left|\mathbf{R^{\prime}}cv\right\rangle}{E^{\lambda}},\]
where \(e\) is the electron charge and \(m\) its mass, \(V\) is the volume of the unit cell, \(E^{\lambda}\) are the excitonic energies, \(\Psi_{\mathbf{R}}^{\lambda}\) are the excitonic weights of the \(\lambda\)-th eigenvector, \(\mathbf{n}\) is the direction of the electric field, and \(\mathbf{p}\) is the momentum operator.
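A minimal numerical evaluation of equation (18) is sketched below, with the delta function replaced by a normalized Gaussian of width 5 meV as in Fig. 8; prefactors are kept in arbitrary units and the array layout is an assumption made for illustration.

```python
import numpy as np

def absorption(E_exc, Psi, d_R, omega, eta=0.005, V=1.0):
    """Imaginary part of the dielectric function, Eq. (18), in arbitrary units.
    E_exc : (n_lam,) excitonic energies in eV
    Psi   : (n_lam, n_R) excitonic weights Psi_R^lambda
    d_R   : (n_R,) dipole matrix elements d_R
    omega : photon energy grid in eV; eta is the Gaussian broadening."""
    osc = np.abs(Psi @ d_R)**2                                   # |sum_R Psi_R^lam d_R|^2
    delta = np.exp(-0.5 * ((omega[:, None] - E_exc[None, :]) / eta)**2)
    delta /= eta * np.sqrt(2.0 * np.pi)
    return (8.0 * np.pi**2 / V) * (delta @ osc)
```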
The quantity \(\epsilon(\omega)\) shows a peak whenever an optical transition is allowed; therefore, the excitons at that energy shall be bright. Figure 8 displays the absorption spectra for the three rotation angles studied, \(\theta=0^{\circ}\), \(21.78^{\circ}\), and \(7.34^{\circ}\). Our model agrees with the binding energies previously obtained from first-principles calculations for the non-twisted hBN bilayer [40; 41]. For this case, \(\theta=0^{\circ}\), the spectrum shows a few very well-defined peaks at specific energies, the most intense at about 6.25 eV, or, equivalently, at an excitonic binding energy of \(-1.62\) eV. In particular, this peak is very narrow compared to those that emerged in twisted bilayers. This exciton's
characteristics might be attributed to the fact that the unit cell is the smallest and the most symmetric, so that the non-twisted hBN bilayer has more degenerate eigenstates. Therefore, a precise energy source should be used to observe such an exciton. It is important to mention that in previous works that used time-dependent DFT, see for example [32, 33], a shift of the excitonic energies is observed for a non-twisted hBN bilayer when compared to GW-BSE calculations. In [32], the authors claimed that the red-shift of the excitonic energies they have obtained could come from the small vacuum region they used in their calculations or from the fact that they did not use a 2D Coulomb truncation method. Additionally, the authors concluded, among other things, that time-dependent DFT underestimates the excitonic binding energy compared to many-body perturbation theory techniques. In reference [33], a blue-shift of the excitonic energies is observed by using time-dependent DFT together with screened range-separated hybrid functionals. This blue-shift of the excitonic peaks is, as the authors claimed, similar in nature to the one observed in [34].
Figure 8: Absorption spectra for three different rotation angles, \(\theta\), were obtained from the numerical evaluation of equation (18) using a broadening of 5 meV. As can be seen, our model qualitatively reproduces the binding energies of the _ab initio_ calculations. See, for example, Refs. [40, 41]. Note how the rotated cases exhibit a bright exciton inside the band gap of the non-twisted bilayer.
A similar shift was also reported in [31]. However important this could be, it is out of the reach of the present work and is not discussed here. On the other hand, the absorption spectra for bilayers with \(\theta\neq 0^{\circ}\) exhibit more and broader peaks, where the more intense ones are at lower energies than the untwisted counterpart. Notice that for smaller angles, wider excitonic energy windows are observed. Also, as the angle decreases, the unit cell is larger, and the eigenstates are less degenerate, giving rise to an energy dispersion of them. Therefore, excitonic peaks are observed that become wider as the angle diminishes. For these cases, the excitons can be excited by a range of energies given by the width of the excitonic peak. The effect is even greater for the angle \(\theta=7.34^{\circ}\), for which the dominant peaks appear even at lower energies when compared to the \(\theta=0^{\circ}\) and \(\theta=21.78^{\circ}\) cases. This latter could be a consequence of the dispersionless states that appear in the band structure of the twisted hBN bilayer. Due to the low kinetic energy of these states, interaction effects are stronger, enhancing the excitonic properties of the system. Since the moire potential can be strong near the hole, the exciton could be trapped, giving rise to well-localized excitons. The localization of the excitons can be studied readily within this formalism. However, its discussion is out of the scope of this work and will be presented elsewhere.
Ultimately, let us discuss the applicability and limitations of our model. The main limitations of our model are the approximations that were made, which include the optical limit approximation (\(\mathbf{Q}=0\)), the neglect of quasiparticle and exchange effects, the use of single-particle wave functions that do not have the symmetry of the systems, the use of a model potential instead of a self-consistent one, and the supposition that holes are only located at nitrogen atoms. Even though these approximations work well for non-twisted hBN bilayers, their validity for twisted systems must be clarified. However, we expect our model to be good for semiconductors with a wide band gap whose holes and electrons tend to be in different sublattices, suppressing the exchange effects. On the other hand, quasiparticle corrections can be included via the single-particle Hamiltonian by fitting it to the GW calculation of the untwisted system as a first approximation. Also, a fully symmetrized set of wave functions can be chosen as the basis on which the one-particle Hamiltonian is to be represented. Notably, our approach can be used in several 2D systems without significant modifications, which makes it useful for systematically studying their optical properties, at least under the above approximations.
## V Conclusions
To summarize, we have introduced a tight-binding model to solve the Bethe-Salpeter equation (BSE) using an effective potential for the electron-hole interaction. This approach was used for studying the excitonic properties of 2D twisted bilayers and, as an example, we have studied the case of twisted AA' hBN bilayers. Our method solves the BSE in real space using a basis of excitonic transitions. To build such a basis, we used a parameterized one-particle \(p_{z}\) tight-binding Hamiltonian, adjusted to correctly reproduce the band gap at the \(\mathbf{\Gamma}\) point obtained from GW calculations. The main advantage of this approach is that the number of electron-hole pairs needed to solve the BSE can be highly diminished by considering the system symmetries. This approach is possible because the number of holes forming the electron-hole pairs can be set to be the number of non-equivalent nitrogen atoms, where we have supposed the hole states are mainly localized. To further simplify the problem, we have approximated the Bethe-Salpeter kernel by a model potential of the 2D Keldysh type, which is a good approximation for 2D semiconductors with a wide band gap.
Thus, we obtain the excitonic properties of twisted hBN bilayers with rotation angles down to \(\theta=7.34^{\circ}\), which is far beyond the limits of the _ab initio_ version of the BSE in reciprocal space. The results obtained from it are expected to be within the range of the experimental error, allowing a qualitative description even with the several approximations of our model. Furthermore, the approximations we have introduced can be lifted to improve the model due to its modular structure, albeit at a much higher computational cost. As we have shown, the excitonic properties of the hBN bilayer can be tuned by controlling the angle between layers, opening the possibility of novel applications in far-ultraviolet optoelectronic devices. We hope our work motivates more studies in this direction since it allows the study of large moire patterns with relatively moderate computational resources.
Additionally, we have made our implementation of the model in Python code available under a free software license (see the _Tight-binding version of the Bethe-Salpeter equation for 2D semiconductors_ repository on GitHub) for anyone who wants to use it. Besides that, we claim our model can be straightforwardly applied to other 2D semiconductors similar to hBN as long as suitable parameterized single-particle tight-binding Hamiltonians and a smooth 2D model potential are available, thus opening the possibility of systematically studying their excitonic
properties.
###### Acknowledgements.
The authors acknowledge support by the projects PAPIIT IN104622 and IN110023, and by CONACYT Grants No. A1-S-14407 and No. 1564464. P.R.T. also acknowledges the CONACYT for the postdoctoral grant from project No. A1-S-14407. E.O.C. acknowledges financial support from the UNAM via the PRIDIF2021 grant.
## Appendix A Details on the fitting of the tight-binding parameters
We have used the Pybinding [49] and Pymatgen [50] packages to construct the single-particle Hamiltonian. The algorithm used to build the twisted AA' hBN bilayer follows the implementation introduced in the references [43; 51]. For single-particle band structure calculations, we
Figure 9: Log scale of the matrix elements of the tight-binding like Hamiltonian, as a function of distance, obtained by the fitting of the Slater-Koster functions to DFT calculations for a (3,2) twisted hBN bilayer. Note that the matrix elements are quite small for distances greater than 7 Å.
have used \(M_{\mathbf{k}}=50\) \(\mathbf{k}\)-points. To obtain the one-particle DOS, we have used a histogram algorithm with 600 bins and a Gaussian envelope adjusted to best reproduce the histogram data. As mentioned in the main text, all interlayer interactions were set to zero for distances larger than a radial cutoff of 7 Å. To fully justify this assumption, a few words are needed. Our tight-binding model was fitted to DFT calculations for both the regular hBN bilayer and the twisted ones. This was done by following the procedure described next. First, we have performed DFT calculations for twisted hBN bilayers with indices (2,1), (3,2), and (4,3). Then, we have extracted a tight-binding Hamiltonian by projecting the plane-wave wavefunctions into maximally localized Wannier functions using Wannier90. We have taken into account only \(p_{z}\) orbitals. Since these orbitals are orthogonal to the rest, the disentanglement procedure was fairly simple. After this, a model was obtained by treating the resulting tight-binding-like Hamiltonian, particularly by truncating it to 7 Å interactions, and then by fitting the corresponding Slater-Koster functions. For instance, for the (3,2) twisted hBN bilayer system, the resulting maximally localized wannierized Hamiltonian was found to have the matrix elements shown in figure 9. Therein, one can see that the Hamiltonian elements are very small beyond a few Å. For this reason, we have truncated the analysis to 7 Å.
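To illustrate the last step, a distance-dependent fit of the interlayer \(p_{z}\) hoppings can be set up as sketched below. The exponential-decay Slater-Koster form and all names are assumptions made for illustration; the actual functional form and fitting ranges behind Table 2 are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def sk_pz(dist, cos2, Vsigma, Vpi, beta1, beta2, d0=3.33):
    """Assumed Slater-Koster-like form for p_z-p_z interlayer hoppings (eV):
    t = Vsigma*exp(-beta1*(d-d0))*cos^2 + Vpi*exp(-beta2*(d-d0))*(1-cos^2),
    where cos is the out-of-plane direction cosine of the bond and d0 is
    roughly the interlayer distance in Angstrom."""
    return (Vsigma * np.exp(-beta1 * (dist - d0)) * cos2
            + Vpi * np.exp(-beta2 * (dist - d0)) * (1.0 - cos2))

# dist, cos2, t_wannier would be arrays extracted from the Wannier90 Hamiltonian:
# popt, pcov = curve_fit(lambda x, *p: sk_pz(x[0], x[1], *p),
#                        (dist, cos2), t_wannier, p0=[0.3, -0.3, 2.0, 2.0])
# perr = np.sqrt(np.diag(pcov))   # one-standard-deviation errors, as quoted in Table 2
```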
Since the Wannier centers are localized fairly close to the atomic positions, a simple script can be used to separate the different contributions of the various types of interactions; typical results can be seen in figure 10. We have approximated all the intralayer hopping parameters by using
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{**Interactions**} & \multicolumn{4}{c|}{**Errors**} \\ \cline{2-5} & \(V_{pp\sigma}\) & \(V_{pp\pi}\) & \(\beta_{1}\) & \(\beta_{2}\) \\ \hline Intralayer BB & - & 0.044 & - & - \\ \hline Intralayer NN & - & 0.155 & - & - \\ \hline Intralayer BN & - & 0.007 & - & - \\ \hline Interlayer BB & 0.022 & 0.189 & 0.513 & 0.575 \\ \hline Interlayer NN & 0.038 & 0.573 & 12.272 & 0.863 \\ \hline Interlayer BN & 0.040 & 0.055 & 0.253 & 0.124 \\ \hline \end{tabular}
\end{table}
Table 2: Errors of the hopping parameters of the \(p_{z}\) tight-binding model at one standard deviation for a (3,2) twisted hBN bilayer.
an \(s\)-type model; as can be seen in figure 11, this type of model correctly reproduces the DFT calculations. On the other hand, interlayer hopping parameters are more complex. From figure 12, it can be clearly seen that a model using only \(s\) orbitals does not correctly describe their distance dependence (see the solid violet circles in that figure); as a matter of fact, a \(p_{z}\) model is needed to correctly reproduce the DFT results. The corresponding fits of these hopping parameters are shown in figures 13, 14, and 15 for BN, BB, and NN interactions, respectively. Although the agreement between DFT results and the \(p_{z}\)-type tight-binding Hamiltonian is quite good, a certain amount of variation is observed in the previous figures. To quantify the goodness of such a fitting, we can compute the errors up to one standard deviation for each twisted hBN bilayer. The resulting errors for the three types of interlayer interactions, for a (3,2) twisted hBN bilayer, are summarized in table 2.
These results show that this type of model is adequate to study rotated systems.
Concerning two-particle calculations, a \(5\times 5\) supercell was used to compute the excitonic properties of twisted AA' hBN bilayers with \((m_{1},m_{2})=(2,1)\) and \((3,2)\), whereas a \(3\times 3\) supercell was used for twisted systems having \((m_{1},m_{2})=(4,3)\) and \((5,4)\). The number of atoms for \((m_{1},m_{2})=(2,1)\), \((3,2)\), \((4,3)\), and \((5,4)\) are \(M=700\), \(1900\), \(1332\), and \(2196\), respectively. We have used the Spglib library [52] via Pymatgen to compute the non-equivalent positions of the twisted systems studied here. For the 2D Keldysh potential, we have used a radial cutoff (\(r_{c}\)) of twice the size of the unit cell of the twisted system. To be specific, \(r_{c}=1.3\) nm, \(2.1\) nm, \(3.0\) nm, and \(3.8\) nm for rotation angles of \(\theta=21.79^{\circ}\), \(13.17^{\circ}\), \(9.43^{\circ}\), and \(7.34^{\circ}\), respectively. Finally, the excitonic energies and weights of the two-particle wave functions were obtained by direct diagonalization of the effective two-particle Hamiltonian of equation (13), which was built using our algorithm.
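As an illustration of this symmetry step, the non-equivalent nitrogen (hole) sites can be obtained through pymatgen's Spglib interface roughly as follows; the sketch assumes a periodic pymatgen Structure of the twisted bilayer has already been built and is not part of the released implementation.

```python
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

def nonequivalent_nitrogens(structure, symprec=1e-3):
    """Return the number and fractional coordinates of symmetry-inequivalent
    nitrogen sites of a periodic pymatgen Structure (the allowed hole positions)."""
    sym = SpacegroupAnalyzer(structure, symprec=symprec).get_symmetrized_structure()
    n_groups = [g for g in sym.equivalent_sites if g[0].specie.symbol == "N"]
    return len(n_groups), [g[0].frac_coords for g in n_groups]
```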
|
2310.07863 | A rotating accretion disk around MWC297, a young B1.5 Ve star | High resolution spectra with iSHELL on IRTF in the K and M band of the young,
heavily accreting B 1.5e star MWC297 show numerous double-peaked CO lines.
These CO lines originate in an inclined gaseous disk in Keplerian rotation.
MWC297 is the only early B star known to show a Keplerian disk in CO. Analysis
of the spectra show that 12CO 1 - 0 is optically thick for the low excitation
lines. Even the 13CO 1 - 0 and 12CO 2 - 1 have somewhat optically thick lines
at low J levels. We find that the CO emission in the disk can be fitted with CO
being in a narrow ring at a radius of 12 AU, with a temperature of 1500 K, and
a CO column density of 1.6 e18 cm-2. This model underestimates the line
strength of high J lines, indicating that they are excited by fluorescence. The
CO overtone lines have a similar temperature, The 13CO lines are much brighter
than expected from interstellar isotope ratios. The 13CO lines are wider than
the 12CO ones suggesting different excitation conditions. The same is true for
12CO 2 - 1. We see strong absorption in 12CO and 13CO 1 - 0 at low J levels,
which is due to two two cold foreground clouds. These clouds, one with a
temperature of 8.3 K and a column density of 6.7 e17 cm-2, and the other one
colder and with lower column density, can fully account for the observed
extinction toward MWC297. | Goran Sandell, William Vacca | 2023-10-11T20:08:09Z | http://arxiv.org/abs/2310.07863v1 | # A rotating accretion disk around MWC 297, a young B1.5 Ve star
###### Abstract
High resolution spectra with iSHELL on IRTF in the K and M band of the young, heavily accreting B 1.5e star MWC 297 show numerous double-peaked CO lines. These CO lines originate in an inclined gaseous disk in Keplerian rotation. MWC 297 is the only early B star known to show a Keplerian disk in CO. Analysis of the spectra shows that \({}^{12}\)CO 1 - 0 is optically thick for the low excitation lines. Even the \({}^{13}\)CO 1 - 0 and \({}^{12}\)CO 2 - 1 have somewhat optically thick lines at low J levels. We find that the CO emission in the disk can be fitted with CO being in a narrow ring at a radius of 12 AU, with a temperature of 1500 K, and a CO column density of \(1.6\times 10^{18}\) cm\({}^{-2}\). This model underestimates the line strength of high J lines, indicating that they are excited by fluorescence. The CO overtone lines have a similar temperature. The \({}^{13}\)CO lines are much brighter than expected from interstellar isotope ratios. The \({}^{13}\)CO lines are wider than the \({}^{12}\)CO ones, suggesting different excitation conditions. The same is true for \({}^{12}\)CO 2 - 1. We see strong absorption in \({}^{12}\)CO and \({}^{13}\)CO 1 - 0 at low J levels, which is due to two cold foreground clouds. These clouds, one with a temperature of 8.3 K and a column density of \(6.7\times 10^{17}\) cm\({}^{-2}\), and the other one colder and with lower column density, can fully account for the observed extinction toward MWC 297.
Star formation (1569); Star forming regions (1565); Massive stars (732); Circumstellar disks (235); Infrared astronomy (786)
## 1 Introduction
Most young, low mass stars are surrounded by circumstellar disks. The same is also true for intermediate mass stars up to a spectral type of \(\sim\)B8 (Williams and Cieza, 2011). Direct detections of disks around early B or O stars are rare. This does not mean that high mass stars do not have accretion disks; rather it indicates that disks are much more short-lived around high mass stars so that by the time these stars become optically visible they have already dispersed their accretion disks. Furthermore, high mass stars are generally more distant than low mass stars, making it harder to spatially resolve their accretion disks. Even O stars with stellar masses of 20 - 30 \(\,{\rm M}_{\odot}\) appear to have disks in their heavy accretion phase (see e.g., Ilee et al., 2013; Beltran and de Wit, 2016; Zapata et al., 2019; Sandell et al., 2020; Moscadelli et al., 2021). However, such stars are difficult to study because they are always in clusters and deeply embedded.
Most of the evidence for disks around high mass stars comes from deeply embedded young objects observed with interferometric means, mostly in the millimeter/submillimeter regime with ALMA and NOEMA and other array telescopes. There are also some detections at optical/infrared wavelengths (Beltran and de Wit, 2016). There has been some success in looking for Keplerian rotation in accretion disks using the rovibrational CO lines at 2.3 \(\mu\)m and at 4.5 - 5.2 \(\mu\)m. CO overtone emission (\(v\) = 2 - 0 and \(v\) = 3 - 1) was first detected in the BN object (mass \(\sim\) 10\(\,{\rm M}_{\odot}\)) (Scoville et al., 1979), and in the early B-star MWC 349 A (Kraus et al., 2000). The latter has a mass of \(\sim\) 30 \(\,{\rm M}_{\odot}\), although a recent paper by Kraus et al. (2020) argues that it is a B[e] supergiant. There have been a number of detections of
bandhead emission toward massive young stellar objects (Ishii et al., 2001; Blum et al., 2004; Bik and Thi, 2004; Bik, Kaper and Waters, 2006; Wheelwright et al., 2010; Cooper et al., 2013; Ilee et al., 2013; Pomohaci et al., 2017). For these objects the stellar mass is usually deduced from the observed bolometric luminosity. This can be highly uncertain because the luminosity includes emission from the accretion process and possibly from nearby objects, resulting in an overestimate of the stellar mass; see, e.g., W 33 A, for which Pomohaci et al. (2017) determine a mass of 17.2 \(\,{\rm M}_{\odot}\), while Navarete et al. (2021) find a mass of 10 \(\rm{M}_{\odot}\). Yet it is clear that some of these are indeed very young high mass stars. The CO fundamental (\(v\) = 1 - 0) rovibrational lines, located in the M band, 4.5 - 5.2 \(\mu\)m, have been detected in low mass stars (Najita, Carr and Mathieu, 2003), HAEBE stars, and transitional disk objects (Brittain et al., 2003; Blake and Boogert, 2004; Salyk et al., 2009; van der Plas et al., 2015; Doppmann et al., 2017; Banzatti et al., 2022), but not in high-mass stars (M \(>\) 8 \(\,{\rm M}_{\odot}\)), except for MWC 297, the subject of this study.
MWC 297 is a bright, nearby HAEBe star with a spectral type B1.5 Ve (Drew et al., 1997) and a mass of \(\sim\) 10 \(\rm{M}_{\odot}\), located at a distance of 418 pc (Gaia DR3, Riello et al., 2021). It has been the subject of several studies with ESO's Very Large Telescope Interferometer (VLTI) (Acke et al., 2008; Weigelt et al., 2011; Lazareff et al., 2017; Kluska et al., 2020). Most of these studies suggest that the star is surrounded by an almost face-on disk. The star, however, drives an ionized outflow, with spatially separated outflow lobes, and therefore cannot be face-on (Sandell, Weintraub and Hamidouche, 2011). Mid-infrared imaging with FORCAST on SOFIA (Vacca and Sandell, 2022) shows that hot dust traces the outflow lobes at wavelengths \(>\)\(\mu\)m. Simple geometrical modeling of the mid-infrared morphology constrains the disk inclination rather well and gives an inclination angle i = 55\({}^{\circ}\).
Here we present and discuss high resolution M band data obtained with the iSHELL instrument at the IRTF. The spectrum reveals double-peaked CO emission lines, which we have modeled with a rotating Keplerian disk to determine the properties of the emission region. We also discuss and analyze the absorption spectra from the cold foreground cloud, which are seen in low \(v\) = 1 - 0 P and R transitions. We note that MWC 297 was included in a large survey of planet forming disks observed with the same instrument and the same wavelength range, and these data have recently been published (Banzatti et al., 2022). While Banzatti et al. (2022) did classify the double peaked CO profiles in MWC 297 as originating in a Keplerian accretion disk, they provide very few details. Here we present a more comprehensive analysis of both the CO overtone emission at 2.3 \(\mu\)m and the rovibrational spectra in the M band.
## 2 iSHELL observations and data reduction
Observations of MWC 297 were obtained at the NASA Infrared Telescope Facility (IRTF) on Mauna Kea on 2020 Sep 30 (UT) and on 2022 April 26 (UT) with iSHELL, the facility near-infrared high resolution cross-dispersed spectrograph (Rayner et al., 2022). The M band observations used the M1 setting of iSHELL both in 2020 and 2022. This mode yields spectra spanning the wavelength range 4.5 - 5.2 \(\mu\)m over 16 spectral orders. The observations were acquired in "pair mode", in which the object was observed at two separate positions along the 15\({}^{\prime\prime}\)-long slit. The slit width was set to 0\(\farcs\)375, which yields a nominal resolving power of 88,000 for the spectra. At the distance of MWC 297, 418 pc, the iSHELL 0\(\farcs\)375 slit spans \(\sim\) 160 AU. Twenty individual exposures of MWC 297, each lasting 28 s, were obtained with the M1 setting. In 2020 the slit was set to the parallactic angle (46.0\({}^{\circ}\)) during the observations. In 2022 the observations were done using two slit positions, 197\({}^{\circ}\) and 17\({}^{\circ}\), which were coadded in the post-reduction. Long observations of an A0 V star, used as a "telluric standard" to correct for absorption due to the Earth's atmosphere and to flux calibrate the target spectra, were obtained immediately prior to the observations of MWC 297. In 2022 we also observed MWC 297 in the K band. These observations were done in the K3 setting, providing a spectral coverage from 2.256 to 2.55 \(\mu\)m. A set of internal flat fields and arc frames were obtained immediately after the observations of MWC 297 for flat fielding and wavelength calibration purposes.
The data were reduced using Spextool (Cushing, Vacca and Rayner, 2004), the IDL-based package developed for the reduction of SpeX and iShell data. The Spextool package performs non-linearity corrections, flat fielding, image pair subtraction, aperture definition, optimal extraction, and wavelength calibration. The sets of spectra resulting from the individual exposures were median combined and then corrected for telluric absorption and flux calibrated using the extracted A0 V telluric standard spectra and the technique and software described by Vacca, Cushing and Rayner (2003). The spectra from the individual orders were then spliced together
by matching the flux levels in the overlapping wavelength regions, and regions of poor atmospheric transmission were removed. However, for the September 2020 data the final M band spectrum of the A0 V star had relatively poor S/N (on the order of 30), and this limited the final S/N of the spectrum of MWC 297, which is considerably brighter in the M band. (The reduced spectrum of MWC 297 has a S/N of several hundred.) Therefore, an alternate telluric correction was derived using the _xtellcor_model_ software developed by Boogert (2022) and then applied to the reduced spectrum of MWC 297. Although this was effective at correcting the telluric absorption, the resulting spectrum was still uncalibrated. Therefore, we fit a polynomial to the continuum of both the A0V-corrected spectrum and the model-corrected spectrum, and multiplied the model-corrected spectrum by the wavelength-dependent ratio of the polynomials. This yielded a telluric-corrected and flux-calibrated spectrum of MWC 297. The S/N varies across the spectrum but is on the order of several hundred across the entire M1 wavelength range. For the 2022 data we used a brighter A0 V star, which could directly be used for flux calibration. Direct comparison of the two calibrated data sets shows that the calibration agrees within 10% for both data sets, and we have therefore added together the two data sets, after correcting both spectra for Doppler shifts due to the relative motion of the Earth and the source, and extracted flux densities for all lines not affected by telluric absorption. In 2020 the Doppler shift for MWC 297 was -11.5 km s\({}^{-1}\), which severely affected CO lines with J values \(\lesssim\) 15, while it was +41.2 km s\({}^{-1}\) in 2022, which largely shifted all CO lines away from telluric absorption features.
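The recalibration step described above (transferring the A0 V flux calibration onto the model-corrected spectrum via the ratio of two continuum polynomials) can be sketched as follows; the polynomial degree, array names, and continuum mask are illustrative assumptions, not the actual Spextool or _xtellcor_model_ interfaces.

```python
import numpy as np

def rescale_to_flux(wave, flux_model_corr, flux_a0v_corr, deg=3, cont_mask=None):
    """Multiply the model-telluric-corrected spectrum by the wavelength-dependent
    ratio of low-order polynomials fitted to the continua of the A0V-corrected
    and model-corrected spectra, yielding a flux-calibrated spectrum."""
    if cont_mask is None:
        cont_mask = np.ones_like(wave, dtype=bool)   # ideally: line-free, low-telluric regions
    p_a0v = np.polyfit(wave[cont_mask], flux_a0v_corr[cont_mask], deg)
    p_mod = np.polyfit(wave[cont_mask], flux_model_corr[cont_mask], deg)
    return flux_model_corr * np.polyval(p_a0v, wave) / np.polyval(p_mod, wave)
```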
## 3 A Short Overview of What Has Been Learned From Studies of Individual CO
CO vibrational overtone emission was first detected in the inner disks of both high and low mass objects, but it is not as ubiquitous as the CO fundamental (\(\Delta\upsilon=1\)) transitions at 4.6 \(\mu\)m. The detection rate of overtone emission in unbiased surveys is about 25%. In part this is due to the relatively low A-values, which require high column densities of hot gas for the lines to be detectable, but other factors play a role as well. Modeling by Ilee et al. (2018) suggests that moderate accretion rates produce the most prominent and hence detectable CO first overtone emission. If the accretion rate is too high (\(\gtrsim 10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\)), it results in large dust sublimation radii, a larger contribution of hot dust to the K band continuum, and therefore a lower CO to continuum ratio. Low accretion rates (\(\lesssim 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\)), however, result in smaller dust sublimation radii, a smaller CO emitting area, and therefore a lower CO to continuum ratio. Although CO overtone emission is detectable in MWC 297, which has an accretion rate of 3 \(\times\) 10\({}^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\), the first overtone band is barely visible in high resolution spectra, suggesting that the gas is relatively cold, or about 1000 K. The bands from higher vibrational levels, however, are filled with double peaked lines. This will be discussed later in the paper.
The rovibrational CO line emission at 4.5 - 5.2 \(\mu\)m is, as already mentioned, much more ubiquitous and is seen in a variety of objects, ranging from Transition Objects to Classical T Tauri and HAEBE stars, but contrary to CO overtone emission it is not seen in high-mass stars except for MWC 297 (Banzatti et al., 2022, and this paper) and perhaps in BN (Scoville et al., 1983). Banzatti et al. (2022) give an up to date list of all the high resolution K and M-band CO surveys done over the last 20 years, so there is no reason for us to repeat it here. Instead we summarize the major findings resulting from these studies and complementary theoretical modeling.
The rovibrational CO emission at 4.7 \(\mu\)m is very common in protoplanetary disks and arises in the inner 0.1 - 2 AU region in disks around low mass stars (Najita et al., 2007) and up to 30 AU or more in HAEBE stars (van der Plas et al., 2015; Banzatti et al., 2018, 2022). The line shapes vary (Bast et al., 2011; Brown et al., 2013; Banzatti and Pontoppidan, 2015; Banzatti et al., 2022). They can be narrow and single peaked, narrow with a broad pedestal, broad but single peaked, or pure double peaked Keplerian profiles. Some stars also show the CO in absorption, which in most cases originates from colder foreground gas. Double peaked profiles, which are expected to be a true disk signature, are surprisingly uncommon (Bast et al., 2011), although in the large survey by Banzatti et al. (2022) the detection rate of double peaked lines was quite high, \(\sim\) 50%.
Banzatti et al. (2022) define the line profiles in two broad groups depending on line shape: triangular and double peaked lines. In their classification the single peaked narrow lines with a broad pedestal and the broad, but single peaked, lines all fall into the triangular category. These lines are often somewhat asymmetric with the blue-shifted side being stronger (Herczeg et al., 2011; Brown et al., 2013). Banzatti et al. (2022) define a line shape parameter S as S = FW10%/FW75%, where FW stands for Full Width at 10 and 75%, respectively. Lines with a shape value \(\gtrsim\) 2.5 are triangular, while lines with S \(\lesssim\) 2.5 are generally double peaked Keplerian or narrow single peaked lines. The latter are believed to represent Keplerian disk emission seen in face-on disks.
The single peaked narrow lines with a broad pedestal, i.e., triangular shaped lines, cannot be explained by Keplerian motion. The broad pedestal may still be Keplerian, but the narrow component appears to originate in a slow moving disk wind (Bast et al., 2011; Brown et al., 2013). Pontoppidan et al. (2011) found from spectroastrometry of narrow lines with a broad pedestal that the narrow line emission is too compact and asymmetric to originate in a Keplerian disk. The broad single peaked lines are also believed to be a combination of Keplerian disk rotation and a disk wind, but the disks are seen at higher inclination angles, producing broader lines (Brown et al., 2013).
Line shape alone does not really discriminate where the CO lines are formed, whether they originate from the disk or the outflow or both. Spectroastrometry (Pontoppidan et al., 2011; Brittain, Carr & Najita, 2018) can achieve subarcsecond resolution and hence put spatial constraints on the emitting region, which has been used to discriminate between Keplerian and wind models. Another powerful constraint is the strength of higher vibrational levels, which have been detected up to \(\upsilon\) = 6 - 5 in HAEBE stars (Brittain et al., 2007; Jensen et al., 2021; Banzatti et al., 2022). For double-peaked lines, the work by Banzatti et al. (2018, 2022) and Bosman et al. (2019) suggests that one can identify three emitting regions whose \(\upsilon\)2/\(\upsilon\)1 ratio depends on the location of their emitting radii, \(R_{\rm CO}\), compared to the dust sublimation radius, \(R_{\rm subl}\). Regions 1a and 1b both have a high vibrational ratio \(\upsilon\)2/\(\upsilon\)1 = 0.2 -.0, but come from very different regions in the disk, although both are largely dust free. Region 1a is inside \(R_{\rm subl}\), while 1b is inside a dust cavity observed by millimeter interferometry. Region 1b therefore has lower rotational temperatures and narrower line widths, and it appears that UV fluorescence largely excites the higher vibrational levels in this region (Bosman et al., 2019). In region 2 the vibrational ratio \(\upsilon\)2/\(\upsilon\)1 = 0.01 - 0.2, and the CO is emitted from regions outside the dust rim.
## 4 Analysis and Results
The iSHELL spectrum exhibits a large number of double-peaked emission lines from rovibrational transitions of CO. The \(\upsilon\) = 1 - 0 CO lines up to rotational level P(47) and higher vibrational levels up to \(\upsilon\) = 5 - 4 are clearly seen. Emission from \({}^{13}\)CO is also seen, in both the \(\upsilon\) = 1 - 0 and \(\upsilon\) = 2 - 1 transitions. The profiles of the lowest rotational transitions exhibit strong narrow absorption lines, due to absorption from a cold foreground cloud, which cuts out a substantial fraction of the emission. Three H lines (Pf \(\beta\), Hu \(\epsilon\), and Hu \(\delta\)) are also seen in emission. The K band iSHELL spectrum also shows a large number of double peaked lines, all from CO overtone bandhead emission, see Figure 1.
It is clear that the double peaked \({}^{12}\)CO profiles must originate in a rotating accretion disk. To enable a better comparison between \({}^{12}\)CO and \({}^{13}\)CO lines we generated median line profiles of the \({}^{12}\)CO \(\upsilon\) = 1 - 0, \({}^{13}\)CO 1 - 0, \({}^{12}\)CO 2 - 1, and \({}^{12}\)CO 2 - 0 emission lines from the spectrum after subtracting a low-order polynomial continuum. Since we see no difference between low and high J lines, we created median line profiles of all lines unaffected by absorption or blending. For \({}^{12}\)CO 1 - 0 we used R(14), R(9), R(6), P(8), P(9), P(12), P(14), P(26), P(34), P(37), P(38), P(45), and P(47). For \({}^{13}\)CO 1 - 0 we used R(29), R(24), R(22), R(21), R(19), R(16), R(15), R(13), R(12), R(10), R(9), R(3), P(7), P(9), P(11), P(12), P(16), P(17), and P(21). The \({}^{12}\)CO 2 - 1 transitions were R(27), R(21), R(20), R(17), R(14), R(13), R(12), R(9), R(1), P(1), P(2), P(6), P(8), P(15), P(21), P(25), P(26), P(28), P(32), and P(33), while the transitions for \({}^{12}\)CO 2 - 0 were R(6), R(7), R(11), R(12), R(14), R(17), R(18), R(20), R(21), R(22), R(23), and R(24). These were re-interpolated and centered at the theoretical line center, see Figure 2. From this plot it is easy to see that the \({}^{13}\)CO 1 - 0 lines are broader, and the flanks of the \({}^{13}\)CO lines also appear somewhat steeper. The \({}^{12}\)CO 2 - 1 lines are also broader than the 1 - 0 lines. The overtone \({}^{12}\)CO 2 - 0 lines are even broader than any of the rovibrational lines. The separation between the line peaks is also wider for both \({}^{13}\)CO 1 - 0 and \({}^{12}\)CO 2 - 1 compared to the \({}^{12}\)CO 1 - 0 lines, indicating that these lines arise further inward in the disk than the \({}^{12}\)CO 1 - 0 emitting gas. In Table 1 we give the measured FWHMs and peak separations for all the lines shown in Fig. 2.
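The construction of these median profiles (continuum subtraction, shifting each transition to its rest velocity, interpolation onto a common grid, and taking the median) can be sketched as follows; the velocity grid, window sizes, and function names are illustrative choices rather than the exact procedure used.

```python
import numpy as np

C_KMS = 299792.458

def median_profile(wave, flux, line_centers_um, vgrid=np.arange(-80.0, 80.5, 0.5)):
    """Stack individual CO lines on a common velocity grid and take the median,
    after subtracting a local linear continuum around each line.
    Assumes `wave` (microns) is monotonically increasing."""
    stack = []
    for lam0 in line_centers_um:
        v = C_KMS * (wave - lam0) / lam0
        sel = np.abs(v) < 120.0
        wing = sel & (np.abs(v) > 80.0)                  # local continuum from the wings
        cont = np.polyval(np.polyfit(v[wing], flux[wing], 1), v[sel])
        stack.append(np.interp(vgrid, v[sel], flux[sel] - cont))
    return vgrid, np.median(np.array(stack), axis=0)
```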
At the 10% level the line widths are similar, or \(\sim\) 65 km s\({}^{-1}\), suggesting that the inner radius is similar for all of them. The FW10% may be slightly larger for the \({}^{12}\)CO 2 - 0 lines. We measure a linewidth of 68 km s\({}^{-1}\). It is not clear whether this difference is real or simply due to measurement inaccuracy, especially since Banzatti et al. (2022) quote a FW10% of 68.9 km s\({}^{-1}\). They also quote a slightly larger FWHM of 46.8 km s\({}^{-1}\) for \({}^{12}\)CO 1 - 0 than what we find, 45.3 km s\({}^{-1}\). We determine a V\({}_{lsr}\) = +6.7 \(\pm\) 2.2 km s\({}^{-1}\) for the emission lines. This is a weighted mean of the September 2020 and April 2022 data sets. Banzatti et al. (2022) find a center velocity of + 7 km s\({}^{-1}\) (converted to LSR), which agrees very well with what we find.
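A quick consistency check relates these widths to Keplerian radii. Assuming the half-width traces the projected orbital speed at the characteristic emitting radius, a 10 \(M_{\odot}\) central star, and an inclination of 55\({}^{\circ}\), the FWHM of the \({}^{12}\)CO 1 - 0 profile corresponds to a radius of roughly 12 AU and the FW10% to an inner radius of roughly 6 AU; the snippet below is only this back-of-the-envelope estimate, not the full modeling described in the next paragraphs.

```python
import numpy as np

GM_SUN = 1.32712e11    # G * M_sun in km^3 s^-2
AU_KM = 1.495979e8     # 1 AU in km

def kepler_radius(v_half_kms, mstar_msun=10.0, incl_deg=55.0):
    """Radius (AU) where the projected Keplerian speed equals v_half_kms:
    R = G M / (v_half / sin i)^2."""
    v_kep = v_half_kms / np.sin(np.radians(incl_deg))
    return GM_SUN * mstar_msun / v_kep**2 / AU_KM

print(kepler_radius(45.3 / 2.0))   # ~12 AU from the 12CO 1-0 FWHM
print(kepler_radius(65.0 / 2.0))   # ~6 AU from the FW10%
```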
Inspection of the K and M band spectra yields conflicting information regarding the CO emitting gas. The relative strengths of the \({}^{12}\)CO lines across the M band suggest that the gas is optically thick. In addition, the \({}^{13}\)CO emission lines are much stronger relative to those of \({}^{12}\)CO than expected given the normal isotopic ratio and optically thin gas. However, the clearly double-peaked profiles suggest the emission cannot be completely optically thick.
In order to determine the properties of the CO emitting gas, we constructed models of the emission regions under the assumption that the gas was in Keplerian orbit around a 10 \(M_{\odot}\) central source. We considered two configurations: a uniform (i.e., constant temperature and surface density) ring and a thin disk whose temperature and surface density varied with radial distance. We adopted an inclination of 55\({}^{\circ}\) derived by Vacca & Sandell (2022). The models assume the gas is in LTE and are similar to those constructed by Kraus et al. (2000) and others. We generated a synthetic spectrum of the M band emission for each value of temperature between 900 and 2000 K in steps of 100 K and log CO surface density between 15.0 and 22.0 in steps of 0.2. We assumed a source distance of 418 pc and the standard isotopic ratio for \({}^{12}\)CO/\({}^{13}\)CO of 77 (Wilson and Rood, 1994). For the ring models we adopted a radius of 12 AU in order to match the mean width of the emission lines, and a ring width of 1 AU. For the disk models we adopted a power law index of 0.4 for the temperature and 1.5 for
\begin{table}
\begin{tabular}{l c c} \hline \hline Profile & FWHM & Peak separation \\ & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] \\ \({}^{12}\)CO \(v\) = 1–0 & 45.3 & 29 \\ \({}^{12}\)CO \(v\) = 2–1 & 48.2 & 32 \\ \({}^{13}\)CO \(v\) = 1–0 & 48.2 & 32 \\ \({}^{12}\)CO \(v\) = 2–0 & 54.7 & 37 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Line profile parameters for median averaged emission lines. The FWHM values have measurement errors of \(\sim\) 0.5 km s\({}^{-1}\), the errors for the peak separations are \(\sim\) 1 km s\({}^{-1}\).
Figure 1: The CO overtone bands observed with iSHELL in the K band showing a large number of double peaked lines. The 2 – 0 bandhead at 2.29 \(\mu\)m is very weak.
Figure 3: Rotational diagram with the observed data points plotted with large dots. The line bands are identified by their colors in the bottom left part of the plot. For almost all of the data, the error bars are smaller than the dot size.
Figure 2: Median line profiles of \({}^{12}\)CO \(v\) = 1 – 0 and 2 – 1 and \({}^{13}\)CO 1 – 0 **and \({}^{12}\)CO 2 – 0** emission lines. The dashed blue line is the \({}^{12}\)CO 2 – 1 profile scaled by 2.25, the dashed red line is the \({}^{13}\)CO 1 – 0 profile scaled by 3, and the dashed orange line is the scaled \({}^{12}\)CO 2 – 0 profile. The scaled profiles demonstrate the differences in widths compared to the \({}^{12}\)CO 1 – 0 profile.
the surface density and radial steps of 0.5 AU between an inner radius of 5 AU and an outer radius of 25 AU.
We then reddened each model by \(A_{V}=8.1\) mag, convolved it with a Gaussian whose width is given by the instrumental resolution (\(R\sim 88,000\) in the M band), and resampled it onto the wavelength grid of the observations. A \(\chi^{2}\) comparison between the observed \({}^{12}\)CO line spectra and the model spectrum was then computed. For the ring model we computed the best fit scaling of the model to the spectroscopic data, which corresponds to the best fit width of the ring. The emission line strengths were also compared with the values measured from the observed spectra on the standard rotational diagram (Figure 4).
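The double-peaked shape produced by such a configuration can be illustrated with the minimal sketch below, which only computes the velocity profile of a thin ring in Keplerian rotation seen at the adopted inclination; the full model additionally includes the LTE level populations, line opacities, reddening, instrumental convolution, and the \(\chi^{2}\) grid over temperature and column density described above, none of which are reproduced here.

```python
import numpy as np

GM_SUN = 1.32712e11    # G * M_sun in km^3 s^-2
AU_KM = 1.495979e8     # 1 AU in km

def ring_profile(v_kms, r_au=12.0, mstar_msun=10.0, incl_deg=55.0, v_local=3.0):
    """Normalized line profile of a thin Keplerian ring: each azimuthal element
    contributes at a line-of-sight velocity v_kep*sin(i)*cos(phi), locally
    broadened by a Gaussian of width v_local (assumed thermal/turbulent width)."""
    v_kep = np.sqrt(GM_SUN * mstar_msun / (r_au * AU_KM))
    phi = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
    v_los = v_kep * np.sin(np.radians(incl_deg)) * np.cos(phi)
    prof = np.exp(-0.5 * ((v_kms[:, None] - v_los[None, :]) / v_local)**2).sum(axis=1)
    return prof / prof.max()

v = np.linspace(-60.0, 60.0, 481)
profile = ring_profile(v)   # double-peaked, with a full width of ~45 km/s for a 12 AU ring
```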
Somewhat surprisingly, we found that no disk models with our assumed parameters were able to provide good matches to the observed spectra. A thin ring model with a temperature of 1500 K and a surface density of \(\log N_{CO}=18.2\) provided a reasonable, although not perfect, fit to the data. It was immediately clear, however, that the \({}^{13}\)CO emission was far weaker in the model than in the actual data by a factor of 6 - 7. This was also reflected in an offset between the points for the \({}^{13}\)CO emission line strengths from the model and that of the data on the rotational diagram, although the slope of the points for the \({}^{13}\)CO emission line strengths from the model matched that of the data (which indicates that the derived temperature was approximately correct). This result, in addition to the differences seen in the line widths of the \({}^{12}\)CO and \({}^{13}\)CO emission profiles, suggests that the \({}^{12}\)CO and \({}^{13}\)CO emission arises from different parts of the disk. Therefore, we generated a pure \({}^{13}\)CO emission model in which the \({}^{13}\)CO gas temperature was the same as that for the \({}^{12}\)CO gas, but the abundance was enhanced by a factor of 6, which gave a good fit to the \({}^{13}\)CO 1 - 0 spectra. Figure 5 shows the observed spectra overplotted with the best fit model without any additional scaling for \({}^{13}\)CO. The corresponding line strengths are shown on the rotation diagram in Figure 4.
Despite the excellent agreement between the model and the line strengths seen in Figure 5, it is clear that the best-fit model is somewhat unsatisfactory. A thin ring of CO emission seems physically unrealistic. Furthermore, although the model provides a reasonable match to the overall line strengths, it does not provide a good match to several prominent lines in the spectrum. Many of the \({}^{12}\)CO 2 – 1 lines in the model are much stronger than in the data. In addition, the line profiles of many of the low-lying \({}^{12}\)CO 1 – 0 transitions are not well reproduced by the model.
We have not done a detailed analysis of the \(K\)-band overtone spectrum, but the 2 – 0 bandhead emission is very weak and the 3 – 1 bandhead is almost missing, indicating that the CO gas is quite cold. The best fit model from the analysis of the rovibrational CO lines provides an adequate fit to the data.
### Absorption lines
The rovibrational fundamental CO lines in the low P and R transitions show narrow absorption lines cutting out most of the emission from the accretion disk in the center of the emission lines in the \(M\) band spectra. Absorption is also seen in the lowest R and P transitions of \({}^{13}\)CO in both data sets. This suggests that the emission is being absorbed by a cold foreground cloud. This cold foreground cloud is likely to contribute to most of the extinction toward MWC 297, which is estimated to be \(\sim 8.1\) mag (Vacca & Sandell, 2022).
The lowest transitions of the CO absorption lines in the 2020 data set are strongly affected by telluric absorption, which makes it difficult to estimate the true extent of the absorption. Therefore, our analysis focussed on the April spectra, for which the relative velocity between the intrinsic absorption and the telluric absorption is larger. In order to extract the absorption lines we generated a mean profile for the \(v=1\) - 0 \({}^{12}\)CO emission
Figure 4: The ring model for a rotational temperature of 1500 K and a density of \(1.6\times 10^{18}\) cm\({}^{-2}\) plotted with solid lines on the data points plotted in Figure 3. The model matches the 1 – 0 data quite well except at the highest energies, where it underestimates the line strength. It also underestimates the line strength at the higher vibrational states, suggesting that the excitation of these lines is dominated by fluorescence. The model overestimates the 2 – 1 data, most likely because its emitting area is smaller than that of 1 – 0. The model calculations show that the \({}^{13}\)CO abundance needs to be increased by a factor of 6 - 7 compared to normal interstellar abundances.
Figure 5: The ring model for T = 1500 K and a log density of 18.2 cm\({}^{-2}\) in red overlaid on the observed M band spectrum. \({}^{13}\)CO needs to be scaled by 6 to match the line profiles. The line transitions are marked, and the rovibrational bands they belong to are indicated at the top for \({}^{12}\)CO and at the bottom for \({}^{13}\)CO 1 – 0. Regions of strong telluric absorption have been blanked out.
lines and then scaled that to match the observed profiles of the low J transitions that have the foreground absorption (see Figure 6). Examination of the CO absorption lines indicates that the lowest transitions reach the zero intensity level and appear much broader than the higher transitions, suggesting that they are optically thick. The higher J transitions agree well in velocity with the \({}^{13}\)CO absorption lines, while there is a clear shift in velocity at the lowest levels. We illustrate this in Figure 7, which shows three \({}^{12}\)CO \(v\) = 1 - 0 lines and three of the \({}^{13}\)CO lines in the source spectrum, which has been shifted to 0 km/s. It can be seen from this figure that the weaker absorption lines are blue-shifted relative to the disk emission and that the lines are asymmetric, with a red tail (Figure 7). For the optically thin \({}^{12}\)CO and \({}^{13}\)CO lines we find a velocity of -0.6 \(\pm\) 0.5 km s\({}^{-1}\). With increasing optical depth the blue-shift of the lines decreases and the red tail increases. At the lowest J levels the CO is very optically thick, the profiles look flat topped, and the center of the line is shifted to the red. This suggests that there are two absorption components to these lines, with the red-shifted component contributing substantially to the width of the lowest and most optically thick transitions. The radial velocity of MWC 297, 6.7 \(\pm\) 2.2 km s\({}^{-1}\), which we determined from the center of the emission lines, is in good agreement with the velocity of the molecular cloud in which it was born. Sandell et al. (2023) determine a velocity of \(\sim\) 7.2 km s\({}^{-1}\) for the molecular cloud from \({}^{13}\)CO(6-5) observations with APEX. This agrees well with \({}^{13}\)CO(3-2) and C\({}^{18}\)O(3-2) observations. These lines are unaffected by the cold foreground cloud, while all the emission in the low J \({}^{12}\)CO lines is completely absorbed (see also Manoj et al., 2007). Only emission from CO(4-3) and higher rotational levels is unaffected by the cold foreground cloud.
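The extraction shown in Figure 6 amounts to scaling the mean emission template to each low-J line and subtracting it, leaving the absorption on a flat zero baseline. A minimal illustration of that idea (all arrays below are synthetic stand-ins, not the iSHELL data):

```python
import numpy as np

def extract_absorption(observed, template, fit_mask):
    """Least-squares scale of the emission template over the line wings (fit_mask),
    then subtract; the residual is the foreground absorption profile."""
    scale = np.sum(observed[fit_mask] * template[fit_mask]) / np.sum(template[fit_mask] ** 2)
    return observed - scale * template, scale

v = np.linspace(-80, 80, 321)                                # velocity grid, km/s
template = np.exp(-0.5 * (v / 25.0) ** 2)                    # toy emission-line template
observed = 1.3 * template - 0.6 * np.exp(-0.5 * ((v - 1.0) / 3.0) ** 2)  # low-J line with narrow absorption

wings = np.abs(v) > 15.0                                     # fit the scale away from the absorption core
absorption, scale = extract_absorption(observed, template, wings)
print(f"template scale = {scale:.2f}, depth of extracted absorption = {absorption.min():.2f}")
```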
To determine the temperature and column density of the absorbing gas we performed a curve of growth analysis similar to that of Goto et al. (2003). We solved for column density, temperature, and the line width using an iterative least-squares fitting routine. However, the low excitation CO lines are very optically thick and also include the red-shifted component, which we cannot separate at the iSHELL resolution. We therefore first analyzed the weaker, and presumably more optically thin, \({}^{13}\)CO lines. Using all \({}^{13}\)CO lines in the analysis resulted in large densities (2 \(\times\) 10\({}^{18}\) cm\({}^{-2}\)) and very low temperatures (\(\sim\)4 K). While these parameters provided a reasonable match to the observed equivalent widths (EWs), the model did not reproduce the line profiles or the depths. The fit is skewed to high densities in order to reproduce the observed EWs, which are clearly dominated by the second red absorption component for the highest optical depth lines. As can be seen in Figure 7, even the \({}^{13}\)CO lines exhibit a red wing, which is contributing to the EW. We therefore restricted the EW analysis to the two lowest opacity lines, for which the red wing appears to have the smallest contribution, and derived a column density of \(\sim\) 6.5 \(\times\) 10\({}^{17}\) cm\({}^{-2}\) and a temperature of \(\sim\) 8.5 K. We then combined the two weakest CO lines with the two weakest \({}^{13}\)CO lines in the fit and obtained a solution that matched both the EWs and the line profiles. We kept the velocity width equal to 3.5 km s\({}^{-1}\) for this fit, which is in agreement with our resolution (R \(\sim\) 88,000). This resulted in a CO column density, N(CO) = (6.7 \(\pm\) 0.4) \(\times\) 10\({}^{17}\) cm\({}^{-2}\), and a temperature T = 8.25 \(\pm\) 0.10 K. No substantial residuals are seen when we subtract this model from the spectrum of the 4 lines. If we assume a CO abundance of 10\({}^{-4}\), that all the hydrogen is molecular, and adopt \(N(H_{2})/A_{V}=0.94\times\) 10\({}^{21}\) molecules cm\({}^{-2}\) (Bohlin et al., 1978), then this absorption component indicates an extinction of 6.3 mag. This, however, is a lower limit to the total foreground extinction since the red-shifted absorption component is completely opaque at the lowest CO transitions and also detected in \({}^{13}\)CO. It is likely that it is responsible for an additional extinction of 1 - 2 magnitudes. Therefore the cold foreground cloud readily explains the observed foreground extinction of 8.1 mag.
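The iterative least-squares step can be illustrated with a generic curve-of-growth toy model in which each line has a Gaussian opacity profile and the equivalent width saturates once the central optical depth exceeds unity. The conversion from central opacity to N(CO) and T through the molecular constants is omitted, and the opacity ratio between the two lines is an arbitrary placeholder, so this is only a sketch of the fitting machinery, not the analysis performed here:

```python
import numpy as np
from scipy.optimize import least_squares

def equiv_width(tau0, b_kms, v=np.linspace(-30, 30, 2001)):
    """Equivalent width (km/s) of a line with Gaussian opacity profile
    tau(v) = tau0 * exp(-v^2 / b^2); grows only logarithmically once tau0 >> 1."""
    tau = tau0 * np.exp(-(v / b_kms) ** 2)
    dv = v[1] - v[0]
    return np.sum(1.0 - np.exp(-tau)) * dv

# Illustrative "observed" EWs (km/s) of two low-opacity lines with uncertainties.
ew_obs = np.array([1.8, 0.9])
ew_err = np.array([0.1, 0.1])
# The two central opacities are tied by a fixed ratio (placeholder value of 2; in the
# real analysis this ratio follows from the temperature and the CO level populations).
ratio = 2.0

def residuals(p):
    tau0, b = p
    model = np.array([equiv_width(tau0, b), equiv_width(tau0 / ratio, b)])
    return (model - ew_obs) / ew_err

fit = least_squares(residuals, x0=[1.0, 3.5], bounds=([0.01, 1.0], [100.0, 10.0]))
print("best-fit central opacity and b (km/s):", fit.x)
```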
## 5 Discussion
Generally half of all solar mass stars have lost their disks after 3 Myr with an overall disk life time of 6 Myr
Figure 6: An example illustrating the extraction of absorption line profiles. The observed emission line, in this case the \({}^{12}\)CO \(v\) = 1 – 0 R(4) is plotted in black, the scaled template profile is plotted in red and the resulting absorption profile is green. The baseline around the absorption profile (green) is quite flat and centered at 0.
(Haisch, Lada & Lada, 2001). Some stars still have disks after 10 Myr (see e.g. Espaillat et al., 2008). However, disk dispersion timescales for high-mass stars are much shorter. By the time O and B stars become optically visible, they have long since dispersed their disks. It is therefore not surprising that hardly any disks have been detected around visible early B stars. The two previously known examples are the early B star MWC 349 A and LkH\(\alpha\) 101. The latter is a B1 Ve star that illuminates an H ii region. Infrared interferometry shows that it is surrounded by an almost face-on disk with an inner hole and a dust radius of \(\sim\) 30 AU (Tuthill et al., 2002). MWC 297 is now a third example. It has long been suspected to have an accretion disk, but the disk size and inclination have been uncertain. MWC 297 is a much younger star than MWC 349 A or LkH\(\alpha\) 101, as it is still heavily accreting, with an age of the order of 2.8 \(\times\) 10\({}^{5}\) yr (Fairlamb et al., 2015; Vioque et al., 2018).
The rovibrational CO lines as well as the CO overtone bands show clear double-peaked profiles, confirming that we see an inclined circumstellar disk around MWC 297, which is in Keplerian rotation. MWC 297 does not really fit in the classification scheme by Meeus et al. (2001) since it is much younger than typical HAeBe stars, but if one follows the Meeus classification it would be in group I (Guzmán-Díaz et al., 2021). For this reason Acke & van den Ancker (2004) added a group III for deeply embedded and accretion dominated HAeBe stars. Neither does MWC 297 readily fit into the classification of CO spectra by Banzatti et al. (2018, 2022); Bosman et al. (2019). It has double-peaked profiles and the ratio of vibrational levels \(v2/v1=0.45\) places it in region 1, but since there is no cold dust in the disk (Vacca & Sandell, 2022), we cannot definitely say whether it belongs to region 1a or 1b. However, since UV fluorescence dominates the excitation of the \(\upsilon=\) 2 - 1 and 3 - 2 bands and even appears to dominate the high excitation levels of CO 1 - 0, it should probably go into region 1b. We also note there are only five HAeBe stars that have overtone band-head CO emission (Banzatti et al., 2022), yet the band-head emission was easily detected in MWC 297. Usually the overtone emission comes from hot, dense CO gas in the innermost part of the disk (Ilee et al., 2013, 2014) with gas temperatures of \(\sim\) 3000 K and column densities of 10\({}^{21}\) cm\({}^{-2}\). The overtone emission in MWC 297 is much colder, \(\sim\)1500 K, an order of magnitude less dense, and extends up to a radius of 10 AU.
It was clear from the modeling of the CO 1 - 0 lines that the line strengths cannot be fit with a single temperature model. There is very little change in line strength
Figure 8: iSHELL spectrum showing the \(\upsilon=1-0\)\({}^{12}\)CO lines in absorption. The wavelength region is split into two for clarity. The transitions R(3), R(2), R(1) and R(0) are at the top, P(1), P(2), P(3) and P(4) on the bottom, left to right. Regions of strong telluric absorption have been blanked out. The fainter double-peaked emission lines are from higher rotational transitions. The strong hydrogen lines Pf \(\beta\) and Hu \(\epsilon\) are also seen. The model fit to the absorption lines is overlaid in red.
Figure 7: \({}^{12}\)CO and \({}^{13}\)CO \(\upsilon=\) 1 – 0 absorption lines. The \({}^{12}\)CO profiles are plotted with solid lines and \({}^{13}\)CO with dashed lines. The lines are identified in the bottom left corner of the image. The velocity of the emission lines, centered at 0 km s\({}^{-1}\), is marked with a gray vertical line. The optically thick \({}^{12}\)CO \(\upsilon=\) 1 – 0 R(0) line appears flat topped, much broader, and shifted in velocity relative to \({}^{12}\)CO 1 – 0 R(4) and \({}^{13}\)CO 1 – 0 R(2), which are optically thin. This is because there is a second, colder red-shifted cloud component, which is optically thick in the \({}^{12}\)CO 1 – 0 R(0) line. It is also seen in the \({}^{13}\)CO 1 – 0 line as a strong red-shifted wing. The 0 km s\({}^{-1}\) velocity corresponds to a V\({}_{lsr}\) = 6.7 km s\({}^{-1}\).
between low and high excitation lines and they all have the characteristic double-peaked profile from a disk in Keplerian rotation. An isothermal fit to all the rotational lines results in very high optical depths for the lowest transitions and makes the line profiles flat-topped, which is not what is observed. It is clear that the lines can only be moderately optically thick, since they all show a clear U-shaped profile. Neither can a single temperature model explain the strong \(\upsilon\) = 2 - 1 and 3 - 2 transitions, which for the 2 - 1 lines are almost half the line strengths of the 1 - 0 lines. This was already noticed by Brittain et al. (2003), who argued that the strong \(\upsilon\) = 2 - 1 and 3 - 2 transitions in the isolated HAeBe star HD 141569 must be excited by UV fluorescence, see also Brittain et al. (2007). Recent modeling of HD 141569, which is a well studied HAeBe star with vibrational levels detected up to \(\upsilon\) = 7 - 6, confirms that fluorescence is needed to explain the observed rovibrational lines (Jensen et al., 2021). We expect MWC 297 to have strong UV radiation, both from the fact that it is an early B star and because it is heavily accreting. However, due to the high foreground extinction, 8.1 mag, the star is not visible in the UV. Therefore we have not attempted to include fluorescence in our model, although it is clear that UV fluorescence is needed to explain the emission in the higher vibrational bands and is even necessary for the high excitation 1 - 0 lines. The same may be true for other HAeBe disks as well.
Because MWC 297 is very bright in the infrared, it has been the target of many interferometric studies. These studies show a well resolved inner disk in the continuum with a size of 2 - 4 AU (Malbet et al., 2007; Kraus et al., 2008; Acke et al., 2008; Weigelt et al., 2011; Hone et al., 2017; Kluska et al., 2020), with the line emission (Br\(\gamma\)) being 2 - 3 times more extended (Malbet et al., 2007; Kraus et al., 2008; Weigelt et al., 2011; Hone et al., 2017). Acke et al. (2008) also observed MWC 297 in the mid-infrared. Modeling of the mid-infrared data required a two-component model, and they found a smaller inner disk with a size of 4 AU and a more extended disk with a size of 17 AU (40 mas). This is rather similar to the size of the disk we see in CO. Neither do these studies find any inner gap (Acke et al., 2008; Kluska et al., 2020). What is also noteworthy is that the size of the continuum region is well within the dust destruction radius, which for MWC 297 is \(\sim\) 5 AU1. As pointed out by Vacca & Sandell (2022) the Br\(\gamma\) emission is extremely optically thick, and it therefore appears that the continuum comes from an optically thick gaseous region. From the observed line width of the CO lines at the 10% level, \(\sim\) 65.2 km s\({}^{-1}\), we find the inner CO radius \(\sim\) 5.6 AU, i.e. roughly at the same radius as the dust destruction radius.
Footnote 1: Kluska et al. (2020) quote a dust destruction radius of 8 AU, but that is because they use a stellar luminosity of 4 \(\times\) 10\({}^{4}\)\(\rm\,L_{\odot}\), which is far too high for a B1.5 V star, see Vacca & Sandell (2022).
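The quoted inner radius follows directly from the Keplerian relation \(v_{\rm obs}=\sqrt{GM_{*}/r}\,\sin i\) applied to half of the full width at the 10% level, using the stellar mass of 10 M\({}_{\odot}\) and the inclination of 55\({}^{\circ}\) adopted elsewhere in this paper; a quick numerical check:

```python
import numpy as np

G     = 6.674e-11           # m^3 kg^-1 s^-2
M_sun = 1.989e30            # kg
AU    = 1.496e11            # m

M_star = 10.0 * M_sun       # MWC 297
incl   = np.radians(55.0)   # disk inclination
v_10   = 65.2e3 / 2.0       # half of the full width at 10% intensity, m/s

# Projected Keplerian speed: v_obs = sqrt(G M / r) * sin(i)  =>  r = G M sin^2(i) / v_obs^2
r_in = G * M_star * np.sin(incl)**2 / v_10**2
print(f"inner CO radius ~ {r_in / AU:.1f} AU")   # ~5.6 AU
```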
A normal isotope ratio [\({}^{12}\)C/\({}^{13}\)C] of 77 greatly underestimates the strength of the observed \({}^{13}\)CO line intensities. We find that a relative \({}^{12}\)CO to \({}^{13}\)CO abundance of \(\sim\) 6 provides a good fit to the data. This does not mean that the disk has anomalous isotope ratios, but rather that \({}^{13}\)CO is not in LTE and coexisting with \({}^{12}\)CO. As we have seen, the FWHM of \({}^{13}\)CO 1 - 0 differs by more than 5% from that of \({}^{12}\)CO 1 - 0, see Section 4. These kinds of \({}^{12}\)CO/\({}^{13}\)CO ratios are not uncommon. van der Plas et al. (2015) found that the \({}^{12}\)CO/\({}^{13}\)CO ratio varied between and 7 for their sample of 12 HAeBe stars and that the rotational temperatures of \({}^{13}\)CO were on average lower than those for \({}^{12}\)CO. Banzatti et al. (2022), who have the largest sample of rovibrational CO lines in HAeBe stars, do not quote a \({}^{12}\)CO/\({}^{13}\)CO ratio, but looking at the line ratios for their sample of 17 HAeBe stars with double-peaked line profiles shows that the ratio ranges from 3 to 25, with a median of 14. This does not directly translate to an abundance ratio [\({}^{12}\)CO/\({}^{13}\)CO], because for most disks \({}^{12}\)CO 1 - 0 is more optically thick than \({}^{13}\)CO 1 - 0, nor does it account for differences in excitation between \({}^{12}\)CO and \({}^{13}\)CO.
We detect narrow absorption lines, FWHM \(\sim\) 4 - 5 km s\({}^{-1}\), from a cold unrelated foreground cloud in both \({}^{12}\)CO and \({}^{13}\)CO. Radio observations show that this foreground cloud is very extended and therefore more local, probably somewhere around 100 to 200 pc from us. At the lowest J transitions the absorption lines in \({}^{12}\)CO are completely saturated. At higher J levels the lines become more blue-shifted and agree in velocity with the \({}^{13}\)CO absorption lines. The lowest transitions of \({}^{13}\)CO are also optically thick and show a red-shifted line wing. Detailed analysis shows that the red-shifted component comes from a colder, but less dense cloud component. For the blue-shifted cloud component, which is at a V\({}_{lsr}\)\(\sim\) 6.4 km s\({}^{-1}\), we derive a temperature of 8.3 \(\pm\) 0.1 K and a CO column density, N(CO) = 6.7 \(\times\) 10\({}^{17}\) cm\({}^{-2}\). If we assume that all the hydrogen is molecular with a typical CO abundance of 10\({}^{-4}\), this corresponds to an extinction of 6.3 magnitudes assuming the standard relation between molecular hydrogen and extinction. Since the red-shifted cloud component also contributes to the extinction, it is clear that the foreground cloud plus diffuse gas along the line of sight can fully explain the observed high extinction, 8.1 mag, toward MWC 297. Banzatti et al. (2022) noticed that the absorption line FWHM decreased as a function of J level
from 10 km s\({}^{-1}\) in the J = 1 lines down to a minimum of 3.3 km s\({}^{-1}\) in the \(\upsilon\) = 1 - 0 J = 4 lines and that the line shape also changed, becoming less Gaussian and showing effects of saturation at the lowest J levels. They also saw a similar behavior in \({}^{13}\)CO. This agrees well with what we have found, and we now know that it is due to two cold clouds in the foreground of MWC 297 with different temperatures and densities, both of which are completely optically thick in \({}^{12}\)CO at the lowest J levels.
## 6 Summary and Conclusions
We have shown that overtone bandhead and rovibrational CO lines trace a hot gas disk in Keplerian rotation around the 10 M\({}_{\odot}\) star MWC 297. Our modeling shows that the emission cannot be explained by thermal excitation alone. It therefore appears that the high excitation rotational lines are largely excited by fluorescence, something which has been seen in other HAeBe disks as well. Analysis of the spectra shows that the \({}^{12}\)CO \(\upsilon\) = 1 - 0 emission is optically thick for the low excitation lines. Even \({}^{13}\)CO 1 - 0 and \({}^{12}\)CO 2 - 1 have somewhat optically thick lines at low J levels. We find that a narrow ring with a radius of 12 AU and an inclination of 55\({}^{\circ}\) provides an adequate fit to the data. For this model we derive a rotational temperature of 1500 K and a CO column density of 1.6 \(\times\) 10\({}^{18}\) cm\({}^{-2}\). This model underestimates the line strength of high J lines, indicating that these lines are excited by fluorescence. The CO overtone emission, which is only seen in a few HAeBe disks, has a similar temperature. Our best-fit model is still somewhat unsatisfactory. A thin ring of CO emission seems physically unrealistic. Furthermore, although the model provides a reasonable match to the overall line strengths, it does not provide a good match to several prominent lines in the spectrum. Many of the \({}^{12}\)CO 2 - 1 lines in the model are much stronger than in the data. We also find that a normal isotope ratio [\({}^{12}\)C/\({}^{13}\)C] of 77 greatly underestimates the \({}^{13}\)CO line intensities, while a \({}^{12}\)CO to \({}^{13}\)CO abundance ratio of \(\sim\) 13 provides a good fit to the data. Other HAeBe disks typically show similarly enhanced \({}^{13}\)CO line intensities, most likely because excitation conditions in the disks differ between \({}^{12}\)CO and \({}^{13}\)CO.
The \({}^{12}\)CO 1 - 0 spectrum shows a strong absorption feature in the center of the emission profile, which is completely saturated at J = 1 and 2, and which is also seen in \({}^{13}\)CO 1 - 0. This absorption is due to cold foreground gas, which is unrelated to MWC 297. Radio observations show that this foreground cloud is very extended and therefore more local, probably somewhere around 100 to 200 pc from us. Detailed analysis shows that this foreground absorption is caused by two cold foreground clouds, partly overlapping in velocity. The more opaque one is slightly blue-shifted relative to the center of the emission lines, has a temperature of 8.3 K and a column density of 6.7 \(\times\) 10\({}^{17}\) cm\({}^{-2}\), which corresponds to a visual extinction of 6.3 mag. With our velocity resolution we could not separate the second red-shifted component well enough to obtain good estimates of its temperature and density, but it is apparent that it is colder and has a lower column density. It is clear that these foreground clouds are responsible for the high extinction toward MWC 297.
We have shown that the young, heavily accreting B1.5 V star MWC 297 is still surrounded by a molecular accretion disk in Keplerian rotation. It is the only early B star that has been detected in rovibrational CO lines and one of the few HAeBe stars detected in overtone CO emission. The circumstellar disk has not been detected at mm-wavelengths, because a cold foreground cloud absorbs all the low J \({}^{12}\)CO lines. These lines are typically used for detecting gas in disks with facilities like SMA and ALMA. The MWC 297 disk does not have any cold dust. In that sense it resembles MWC 349 A, which has been detected in CO overtone emission but has not been detected in any molecular line at mm-wavelengths, suggesting that it has no cold molecular gas. For the same reason we find it very unlikely that MWC 297 would have any cold molecular gas. Based on the rovibrational lines it also looks like the disk is rather compact. It would be very interesting to get a better estimate of the size of the disk using spectroastrometric imaging of the rovibrational CO lines.
We appreciate Prof. Andrea Banzatti sharing some of his iSHELL data on MWC 297 with us. We also thank Dr. Adwin Boogert for helping us with the telluric correction of the September 2020 iSHELL spectrum. We are especially grateful to the anonymous referee, whose constructive criticism encouraged us to completely rethink our modeling approach to the CO lines and to get additional data.
|
2305.12766 | Explaining Emergent In-Context Learning as Kernel Regression | Large language models (LLMs) have initiated a paradigm shift in transfer
learning. In contrast to the classic pretraining-then-finetuning procedure, in
order to use LLMs for downstream prediction tasks, one only needs to provide a
few demonstrations, known as in-context examples, without adding more or
updating existing model parameters. This in-context learning (ICL) capability
of LLMs is intriguing, and it is not yet fully understood how pretrained LLMs
acquire such capabilities. In this paper, we investigate the reason why a
transformer-based language model can accomplish in-context learning after
pre-training on a general language corpus by proposing one hypothesis that LLMs
can simulate kernel regression with internal representations when faced with
in-context examples. More concretely, we first prove that Bayesian inference on
in-context prompts can be asymptotically understood as kernel regression $\hat
y = \sum_i y_i K(x, x_i)/\sum_i K(x, x_i)$ as the number of in-context
demonstrations grows. Then, we empirically investigate the in-context behaviors
of language models. We find that during ICL, the attention and hidden features
in LLMs match the behaviors of a kernel regression. Finally, our theory
provides insights into multiple phenomena observed in the ICL field: why
retrieving demonstrative samples similar to test samples can help, why ICL
performance is sensitive to the output formats, and why ICL accuracy benefits
from selecting in-distribution and representative samples. | Chi Han, Ziqi Wang, Han Zhao, Heng Ji | 2023-05-22T06:45:02Z | http://arxiv.org/abs/2305.12766v2 | # In-Context Learning of Large Language Models Explained as Kernel Regression
###### Abstract
Large language models (LLMs) have initiated a paradigm shift in transfer learning. In contrast to the classic pretraining-then-finetuning procedure, in order to use LLMs for downstream prediction tasks, one only needs to provide a few demonstrations, known as in-context examples, without adding more or updating existing model parameters. This in-context learning (ICL) capability of LLMs is intriguing, and it is not yet fully understood how pretrained LLMs acquire such capabilities. In this paper, we investigate the reason why a transformer-based language model can accomplish in-context learning after pre-training on a general language corpus by proposing one hypothesis that LLMs can simulate kernel regression algorithms when faced with in-context examples. More concretely, we first prove that Bayesian inference on in-context prompts can be asymptotically understood as kernel regression \(\hat{y}=\frac{\sum_{i}y_{i}K(x,x_{i})}{\sum_{i}K(x,x_{i})}\) as the number of in-context demonstrations grows. Then, we empirically investigate the in-context behaviors of language models. We find that during ICL, the attention and hidden features in LLMs match the behaviors of a kernel regression. Finally, our theory provides insights into multiple phenomena observed in the ICL field: why retrieving demonstrative samples similar to the test sample can help, why ICL performance is sensitive to the output formats, and why ICL accuracy benefits from selecting in-distribution and representative samples. We will make our code available to the research community following publication.
## 1 Introduction
Pre-trained large language models (LLMs) have emerged as powerful tools in the field of natural language processing, demonstrating remarkable performance across a broad range of applications [21; 8; 22; 2; 9]. They have been used to tackle diverse tasks such as text summarization, sentiment analysis, schema induction and translation, among others [2; 15; 9]. One of the most fascinating capabilities of LLMs is their ability to perform in-context learning (ICL), a process in which a language model can make predictions on a test sample based on a few demonstrative examples provided in the input context [11]. This feature makes LLMs particularly versatile and adaptive to different tasks. Studies have found ICL to emerge especially when the LLM is large enough and pre-trained over a massive corpus [23].
Although intuitive for human learners, ICL poses a mystery for optimization theories because of the significant format shift between ICL prompts and the pre-training corpus. There have been many efforts to provide a theoretical understanding of how LLMs implement ICL. Some work [25; 19] approaches this problem from a data perspective: they claim that ICL is possible if a model masters Bayesian inference on the pre-training distribution. However, they fail to explain how such inference is feasibly implemented in practical language models, nor does their theory provide insights into ICL behaviors. Another stream of work conjectures that, under a simple linear setting \([x,y]\) where \(y=w^{\top}x\) and the input sequence \(x\) only has length 1, one can construct a Transformer [17] to
implement the gradient descent (GD) algorithm over the ICL prompt [1; 18; 5]. However, this constrained setting diverges from the most interesting part of ICL: state-of-the-art LLMs work with linguistic tasks where the sequential textual inputs have complex semantic structures, and ICL emerges from pre-training on a general-purpose corpus instead of explicit ICL training.
In this work, we delve deeper into the question of _what mechanism enables Transformer-based LLMs to, after pre-training on a general language corpus, accomplish in-context learning on sequential data_. We specifically explore the hypothesis that LLMs employ a kernel regression algorithm when confronted with in-context prompts. Kernel regression adopts a non-parametric form
\[\hat{y}=\frac{\sum_{i}y_{i}K(x,x_{i})}{\sum_{i}K(x,x_{i})} \tag{1}\]
when making predictions, where \(K(x,x_{i})\) is a kernel that measures the similarity between inputs \(x\) and \(x_{i}\). In plain words, it estimates the output \(\hat{y}\) on \(x\) by drawing information from other, similar data points \(x_{i}\) and taking a weighted sum of their \(y_{i}\).
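For concreteness, the estimator in Equation 1 can be sketched in a few lines of code; the kernel choice and the toy vector inputs below are illustrative and are not claimed to match the kernel induced by the theory:

```python
import numpy as np

def kernel_regression(x_test, xs, ys, num_classes, kernel):
    """Non-parametric prediction: weight each demonstration label by its kernel
    similarity to the test input and return the normalized class distribution."""
    weights = np.array([kernel(x_test, x_i) for x_i in xs])
    onehots = np.eye(num_classes)[ys]
    probs = weights @ onehots / weights.sum()
    return probs, probs.argmax()

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))   # one possible similarity kernel

rng = np.random.default_rng(0)
xs = rng.normal(size=(8, 4))                        # toy demonstration inputs
ys = (xs[:, 0] > 0).astype(int)                     # toy binary labels
x_test = rng.normal(size=4)

probs, y_hat = kernel_regression(x_test, xs, ys, num_classes=2, kernel=rbf)
print("class distribution:", np.round(probs, 3), "prediction:", y_hat)
```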
We first provide a theoretical analysis demonstrating that Bayesian inference predictions on in-context prompts converge to a kernel regression in Section 4. Our results also shed light on various phenomena observed in previous empirical studies, such as the advantage of retrieving in-context examples that are similar to the test sample, the sensitivity of ICL performance to the output formats, and why using a group of in-distribution and representative samples improves ICL accuracy.
Following our theoretical investigation, in Section 5 we conduct empirical studies to verify our explanation of in-context learning of LLMs in more detail. Our results reveal that during LLM ICL, the attention map used by the last token to predict the next token is allocated in accordance with our explanation. By plugging attention values into our equation, we are also able to reconstruct the model's output with over 80% accuracy. Moreover, we are able to reveal how the information necessary for kernel regression is computed in intermediate LLM layers. In conclusion, we make the following contributions in this work:
* We provide a theoretical explanation of how practical LLMs can be capable of ICL using the concept of kernel regression.
* We conduct empirical analysis to verify that the LLM's attention maps and hidden features match our explanation.
* Our theoretical explanation provides insights and heuristics for multiple phenomena observed in ICL practice by previous studies.
## 2 Related Work
### In-Context Learning
As an intriguing property of large language models, in-context learning has attracted considerable attention in the research community. There have been numerous studies on empirical analysis of in-context learning, including the format and effects of in-context samples [14; 13; 27], selection of in-context
Figure 1: Our results suggest that LLMs might be conducting kernel regression on ICL prompts.
samples [10; 12], characteristics of in-context learning behaviors [27; 12; 7; 23], and relation between ICL and pre-training dataset [24; 3].
Going deeper, researchers have also been interested in building a theoretical understanding of why ICL works. One branch of studies investigates ICL from a data perspective: [26; 19] demonstrate that a good enough Bayesian inference on pre-training data might cause the emergence of the ICL ability. However, they fail to explain whether such Bayesian inference is computationally feasible in practical language models, as it involves computational graphs of unbounded depth as the number of samples increases. Our study builds on similar assumptions, but goes further to explain how ICL can be accomplished with the attention mechanism in Transformers [17].
Another angle of explanation is analyzing what algorithms might be implemented in LLMs for ICL, which is closely related to our study. Representative studies include [1; 5; 18; 4], with the majority of them proposing the gradient descent (GD) algorithm as a promising candidate answer. However, attempts at explicit construction of the GD algorithm in Transformers [1; 18] mostly assume an oversimplified setting of linear tasks with input length equal to 1, and evaluate Transformers after training on a synthetic dataset (including [1; 5]). This differs from ICL's main appeal as an ability that emerges from language pre-training, and from the fact that LLMs are able to work on textual data that involves more complex syntactic and semantic structures.
### Emergent Ability of LLMs
This is a larger topic that in-context learning is also highly related to. [21; 2; 8; 22] showed that abilities including reasoning, in-context learning, few-shot learning, instruction understanding and multilingualism emerge in large language models after pre-training on massive language data. These impressive and mysterious capacities have boosted significant progress in natural language processing as well as artificial intelligence, but still baffle theoretical analysis. In this work, we make a preliminary step towards understanding ICL as a special case of LM capacity emergence.
## 3 Formulation
### Preliminaries: Hidden Markov Models
Following the setting of [25], we assume that the pre-training corpus can be modelled by a mixture of HMMs. Each HMM corresponds to a certain task \(\theta\in\Theta\). Assuming a large finite number of tasks, one can include all task-specific HMMs into one single HMM. In this unified HMM, let \(\mathcal{S}\) be the set of states, and \(\mathcal{O}\) be the set of observations where \(|\mathcal{O}|=m\). At each time step, state \(s_{t}\) randomly emits one observation \(o_{t}\) and then transits to the next state \(s_{t+1}\). \(p_{\text{pre-train}}\), \(P(s_{t+1}=s^{\prime}|s_{t}=s)\) and \(P(o_{t}=o|s_{t}=s)\) denote the pre-training initial distribution, transition distribution and emission distribution respectively. Under an arbitrary ordering of \(\mathcal{S}\) and \(\mathcal{O}\), we can define the transition matrix \(T:T(s,s^{\prime})=P(s^{\prime}|s)\), and emission matrix \(B:B(s,o)=P(o|s)\), respectively. We also let \(\mathbf{o}=(o_{0},\cdots)\) be the full observation sequence, and \(\mathbf{o}_{[0:l]}\) denote its first \(l\) tokens.
### In-Context Learning
In this work we consider the following formulation of in-context learning (ICL). Let \(\Theta\) be the set of tasks. The distribution of sequences generated by each individual task in the HMM together composes the pre-training distribution. Specifically, each task \(\theta\in\Theta\) is associated with a distinct initial state \(s_{\theta}\in\mathcal{S}\), and the set of all such initial states \(\mathcal{S}_{\text{start}}=\{s_{\theta}|\theta\in\Theta\}\) forms the support of \(p_{\text{pre-train}}\).
Following the ICL prompt formulation in [25], for a test task \(\theta^{\star}\), the in-context learning prompt follows the format:
\[[S_{n},\mathbf{x}_{\text{test}}]=[\mathbf{x}_{1},y_{1},o^{\text{delim}}, \mathbf{x}_{2},y_{2},o^{\text{delim}},\cdots,\mathbf{x}_{n},y_{n},o^{\text{delim}},\mathbf{x}_{\text{test}}], \tag{2}\]
where the input-output pairs \([\mathbf{x}_{i},y_{i}]\) are i.i.d. demonstration samples drawn from \(\theta^{\star}\), and \(o^{\text{delim}}\) is a delimiter token used to separate adjacent samples.
We further make some connections between in-context learning and the HMM model. Note that the probability of generating a sequence from the initial distribution \(p_{0}\) can be expressed as follows[6]:
\[P(\mathbf{o}_{[0:l]}|p_{0})=\mathbf{v}_{p_{0}}^{\top}\left(\prod_{i=0}^{l-1} \text{diag}(\mathbf{p}_{o_{i}})T\right)\text{diag}(\mathbf{p}_{o_{l}})\mathbf{1}, \tag{3}\]
where \(\mathbf{p}_{o}\) is the vector of emission probabilities \(P(o|s)\) over \(s\in\mathcal{S}\). We denote the intermediate matrices as one operator \(T_{\mathbf{o}_{[0:l-1]}}=\prod_{i=0}^{l-1}\text{diag}(\mathbf{p}_{o_{i}})T\). We use a matrix \(\Sigma_{p,l}\) to denote the covariance of the \(d^{2}\) elements of \(\mathrm{vec}(T_{\mathbf{o}_{[0:l-1]}})\) when \(\mathbf{o}_{[0:l-1]}\) is generated from the initial distribution \(p\). For each individual task, we also define \(\epsilon_{\theta}=\inf_{l}\rho(\Sigma_{p_{\text{pre-train}},l}^{-1}-\Sigma_{s_{\theta},l}^{-1})\) to quantify the difference between sequences generated by \(s_{\theta}\) and those from the pre-training distribution, where \(\rho\) denotes the spectral radius of a matrix. Let \(\eta=\sup_{\mathbf{o}_{[0:l]}}\|T_{\mathbf{o}_{[0:l]}}\|_{F}\) be an upper bound on the Frobenius norm of \(T_{\mathbf{o}_{[0:l]}}\).
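Equation 3 is just the forward recursion of an HMM written as a product of emission-weighted transition matrices, and it can be verified numerically on a toy HMM against brute-force summation over hidden state paths (all matrices below are randomly generated placeholders):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_states, n_obs = 3, 4
T = rng.dirichlet(np.ones(n_states), size=n_states)   # T[s, s'] = P(s' | s)
B = rng.dirichlet(np.ones(n_obs), size=n_states)      # B[s, o] = P(o | s)
p0 = rng.dirichlet(np.ones(n_states))                 # initial state distribution

obs = [2, 0, 3, 1]                                     # an observation sequence

# Operator form: p0^T (prod_i diag(p_{o_i}) T) diag(p_{o_last}) 1
acc = p0.copy()
for o in obs[:-1]:
    acc = acc @ (np.diag(B[:, o]) @ T)
p_operator = acc @ np.diag(B[:, obs[-1]]) @ np.ones(n_states)

# Brute-force sum over all hidden state paths, as a sanity check.
p_brute = 0.0
for path in product(range(n_states), repeat=len(obs)):
    p = p0[path[0]] * B[path[0], obs[0]]
    for t in range(1, len(obs)):
        p *= T[path[t - 1], path[t]] * B[path[t], obs[t]]
    p_brute += p

print(p_operator, p_brute)   # the two values agree
```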
### Assumptions
We go on and present the assumptions we make on the formulation:
**Assumption 1**.: _(Recurrence) If a sequence \(\mathbf{o}\) is generated by task \(\theta\), then \(s_{\theta}\) can be revisited in future steps with non-zero probability:_
\[\forall\mathbf{o}_{[0:l-1]},P(s_{t}=s_{\theta}|\mathbf{o}_{[0:l-1]},\theta) \geq\epsilon_{r},\]
**Remark**: this means that the pre-training corpus is of a repetitive nature, and the task \(\theta\)'s "theme" is repeatedly mentioned or discussed throughout a text sequence.
**Assumption 2**.: _(Beginning Anchor Words) The beginning token of any sequence has non-zero emission probability only on starting states:_
\[\forall\mathbf{o}\sim p_{\text{pre-train}},s\not\in\mathcal{S}_{\text{start} },P_{O}(o_{0}|s)=0\]
**Remark**: this means that the start of a sentence is usually indicative enough, such as the start of a new line combined with capital letters at the beginning of a paragraph.
**Assumption 3**.: _(Delimiter Emission Probability) Each state \(s\in\mathcal{S}\) has bounded probability of generating \(o^{\text{delim}}\):_
\[P_{O}(o^{\text{delim}}|s)\geq\epsilon_{d}.\]
**Remark**: this means that the delimiter token has probability to be generated unexpectedly, and reflects the common linguistic phenomenon of "elliptical construction", where people stop at a partially completed sentence as long as the omitted part can be inferred from the context.
**Assumption 4**.: _(Distinguishability) The Kullback-Leibler divergence (KL divergence) between the first \(l\) tokens between two distinct tasks \(\theta\neq\theta^{\prime}\) is lower-bounded by:_
\[\inf_{\theta,\theta^{\prime}}D_{KL}(P(\mathbf{o}_{[0:l]}|\theta^{\prime})||P( \mathbf{o}_{[0:l]}|\theta))=\epsilon_{KL}>\ln\frac{1}{\epsilon_{r}\epsilon_{d}}.\]
**Remark**: this requires that tasks are distinguishable enough from each other. As KL-divergence is non-decreasing with length \(l\), this assumption also encourages the length \(l\) to be large enough to provide sufficient task-specific information.
**Assumption 5**.: _(Bounded Task Deviation) The in-context learning tasks should not deviate too much from the pre-training corpus:_
\[\epsilon_{\theta}<\frac{\Delta}{2\eta^{2}}\]
_where \(\Delta=\inf_{y^{\prime}\neq y_{\text{max}}}\left|P(y_{\text{max}}|\mathbf{x}_{\text{test}})-P(y^{\prime}|\mathbf{x}_{\text{test}})\right|\) is the minimum margin in test sample prediction._
**Remark**: this assumption requires that the sentences in each task should not significantly deviate from general pre-training distribution, a phenomenon also known as linguistic regularity.
Theoretical Analysis
### Explaining ICL as Kernel Regression
Within the framework presented in Section 3, we pose the following result. The basic idea is that, as the number of samples \(n\) increases, inference on the in-context learning prompt converges to a kernel-regression form.
**Theorem 1**.: _With any probability \(1-\delta\), if the in-context prompt is larger than a threshold_
\[n>n_{\delta}=O(\mathrm{poly}(\ln m,\ln\frac{1}{\delta},\ln\frac{1}{\epsilon_{d}},\ln\frac{1}{\epsilon_{r}},(\epsilon_{KL}-\ln\frac{1}{\epsilon_{r}\epsilon_{d}})^{-1},(\frac{\Delta}{2}-\epsilon_{\theta}\eta^{2})^{-1})),\]
_then the most probable prediction by posterior inference_
\[\arg\max_{y}P(y|S_{n},\mathbf{x}_{\text{test}})\]
_equals to the one with maximal value in the following kernel regression form_
\[\arg\max\frac{\sum_{i=1}^{n}\mathbf{e}(y_{i})\left\langle\mathrm{vec}(T_{ \mathbf{x}_{\text{test}}}),\Sigma_{p_{\text{pre-train}}}^{-1},\ \mathrm{vec}(T_{\mathbf{x}_{i}})\right\rangle}{\sum_{i=1}^{n}\left\langle \mathrm{vec}(T_{\mathbf{x}_{\text{test}}}),\Sigma_{p_{\text{pre-train}}}^{-1}, \ \mathrm{vec}(T_{\mathbf{x}_{i}})\right\rangle} \tag{4}\]
_where \(\left\langle\cdot,\cdot\right\rangle\) indicates the inner product, and \(\mathbf{e}(y)\) is the one-hot vector corresponding to index \(y\)._
Equation 4 can be interpreted as follows: it calculates the semantic similarity between the test input \(\mathbf{x}_{\text{test}}\) and each sample \(\mathbf{x}_{i}\), and aggregates their outputs to compute the most likely prediction for the test sample. This matches the motivation of ICL: we encourage the LLM to leverage the pattern provided in the demonstrative samples and mimic that pattern to predict on the test input. Equation 4 is also similar to the form of the attention mechanism used in Transformer decoder models:
\[h=\text{softmax}(q^{\top}K)V^{\top}=\frac{\sum_{i}v_{i}e^{\langle q,k_{i}\rangle}}{\sum_{i}e^{\langle q,k_{i}\rangle}} \tag{5}\]
where \(q\) is the query vector corresponding to the last token, \(k,K\) are the key vectors and matrix, and \(v,V\) are the value vectors and matrix used in the Transformer, respectively. The only difference is that \(e^{\langle q,k_{i}\rangle}\) is replaced with a dot product in Equation 4, which can be regarded as a kernel trick. We assume that the previous hidden layers are responsible for learning the semantic vectors of the sample inputs \(\mathrm{vec}(T_{\mathbf{x}})\). We can then make the following loose analogy between our kernel regression explanation (Equation 4) and the attention mechanism (Equation 5):
* Label information \(\mathbf{e}(y_{i})\) corresponds with the _value_ vector \(v_{i}\)
* The similarity kernel \(\left\langle\mathrm{vec}(T_{\mathbf{x}_{\text{test}}}),\Sigma_{p_{\text{pre- train}}}^{-1},\ \mathrm{vec}(T_{\mathbf{x}_{i}})\right\rangle\) loosely corresponds to the attention value \(e^{<q,k_{i}>}\), where:
* the semantic information \(\mathrm{vec}(T_{\mathbf{x}_{i}})\) corresponds to the _key_ vectors \(k_{i}\) and _query_ vectors \(q_{i}\) for samples \([\mathbf{x},y]\).
One might argue that it is also theoretically possible to directly infer the next token in matrix form \(p_{\text{pre-train}}^{\top}T_{[S_{n},\mathbf{x}_{\text{test}}]}\). However, this form involves \(2n\) consecutive matrix multiplications. When \(n\) increases, this is infeasible for a practical Transformer architecture, which is composed of a fixed number of layers. In comparison, Equation 4 only requires the semantic information for each sample \(\mathbf{x}\) to be provided beforehand, and then applies kernel regression (which can be done by one attention layer) to get the answer. Learning to represent \(T_{\mathbf{x}}\) is plausible for the preceding layers, as it is also used for ordinary inference \(P(y|\mathbf{x})=p_{\text{pre-train}}^{\top}T_{\mathbf{x}}\). In the experiments in Section 5 we demonstrate that this analogy can explain the ICL behaviors of LLMs to an extent.
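The analogy can be made concrete: a single attention head whose queries/keys are per-sample representations and whose values are one-hot labels already computes a kernel-regression style vote over the demonstrations. The representations below are toy vectors, not actual LLM features:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
reps = rng.normal(size=(6, 8))          # toy sample representations standing in for vec(T_x_i)
labels = np.array([0, 1, 0, 1, 1, 0])
values = np.eye(2)[labels]              # value vectors carry the label information e(y_i)
q = reps[0] + 0.1 * rng.normal(size=8)  # query: a test input similar to sample 0

attn = softmax(reps @ q)                # attention weights, ~ exp(<q, k_i>)
label_distribution = attn @ values      # weighted vote over the demonstration labels
print("attention:", np.round(attn, 3))
print("label distribution:", np.round(label_distribution, 3), "->", label_distribution.argmax())
```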
### Insights Provided by the Explanation
Theorem 1 is able to explain multiple phenomena in ICL field that are observed by previous studies. This is helpful for understanding and predicting the behaviors of ICL, and providing heuristics for future development.
**Retrieving Similar Samples** It is empirically observed [16, 10] that retrieving demonstrative samples \(\mathbf{x}_{i}\) that are similar to the test input \(\mathbf{x}_{\text{test}}\) can benefit ICL performance. This phenomenon is understandable from our explanation. Encouraging the selection of similar samples can be understood as limiting the cosine distance between the demonstrative samples \(\mathbf{x}_{i}\) and the test sample \(\mathbf{x}_{\text{test}}\) in sentence embedding space. This is similar to selecting a smaller "bandwidth" in kernel regression and sampling only from a local window, which reduces the bias in kernel regression. Therefore, devising better retrieval techniques for selecting samples, especially those whose representations under the LLM are similar to the test sample's, is a promising direction for further boosting ICL scores.
**Sensitivity to Label Format**[14] also observes that ICL performance relies on the label format. Replacing the label set with another random set will reduce the performance of ordinary auto-regressive LLMs. This can be explained by our theory: the model's output comes from a weighted vote over the demonstrative sample labels \(\{y_{i}\}\). If the label space is changed, the next token will also be misled into a different output space. So it is generally beneficial for ICL to ensure that the demonstrative samples and test samples share an aligned label space and output format.
**Sample Inputs' Distribution** Another phenomenon discovered by previous studies is the importance of the sample inputs' distribution [14], where out-of-distribution (OOD) demonstrative samples degrade ICL accuracy. This mirrors our Assumption 5 that the demonstrations \([\mathbf{x}_{i},y_{i}]\) should be sampled in a way close to \(p_{\text{pre-train}}\). Our theory also encourages the samples to form a representative set in the input space. Otherwise, if there is a distributional difference between the samples \(\mathbf{x}_{i}\) and the task \(\theta^{\star}\), a bias or instability is introduced into the estimation in Equation 4. In all, this insight supports the importance of sampling high-quality and representative demonstrations.
**Remaining Challenges** However, we need to point out that there are still phenomena not explainable by our framework, nor by most previous explanations. One of the most mysterious is the
Figure 2: Averaged attention map over GLUE-sst2 test set. A portion of attentions on demonstrative samples are generally focused on label positions \(y_{i}\). This conforms to the intuition in kernel regression explanation in Theorem 1 that the inference on in-context learning prompts is a weighted average over sample labels.
sensitivity to sample ordering [12], since a kernel regression should be order-ignorant; no existing explanation (including [25; 1]) takes this into account. Another unexplained phenomenon is the LLM's robustness to perturbed or random labels [14]. We attribute such phenomena to the fact that LLMs also rely on a large amount of implicit reasoning in text generation, and might benefit from linguistic cues in the text prompt. Our theory provides a partial explanation, which needs to be combined with this implicit ability of LLMs to form a more comprehensive understanding of ICL.
## 5 Empirical Analysis
In this section we conduct empirical analysis on LLMs in order to verify our hypothesis. Because Equation 4 is only one special solution among infinitely many equivalent forms, and it also relies on the unknown HMM structure, it is infeasible to evaluate it directly on data. However, we can verify whether it predicts observable behaviors of LLMs in experiments. In this section, unless otherwise stated, we run the GPT-J 6B model1 on one Tesla V100. It employs a decoder-only Transformer architecture. In this section, we use the validation set of the sst2 dataset as a case study, while results on more tasks can be found in Appendix B. GPT-J-6B ICL achieves 89.6% accuracy on the test set. We investigate the ICL behavior of LLMs from shallow to deep levels, and sequentially ask the following 4 questions: do the attention heads collect label information \(\mathbf{e}(y_{i})\) as predicted? Does the attention-kernel analogy explain the LLM's prediction? Can we actually explain the attention values as a kind of similarity? Can we find where the algorithmic features \(\mathbf{e}(y_{i}),T_{\mathbf{x}_{i}}\) are stored? The following sections answer these questions one by one.
Footnote 1: [https://huggingface.co/docs/transformers/model_doc/gptj](https://huggingface.co/docs/transformers/model_doc/gptj)
### Where Are Attentions Distributed During ICL?
First, we notice that Equation 4 implies that the LLM takes a weighted average over the sample labels \(y_{i}\) in ICL. Figure 2 shows how attention weights are distributed on the in-context learning inputs \([S_{n},\mathbf{x}_{\text{test}}]\). For each test point, we sample one ICL prompt and collect the attention map over previous tokens used for predicting the next token. After getting the attention maps, as the ICL samples \(\mathbf{x}_{i}\) may have varied lengths, we re-scale the attentions on each \(\mathbf{x}\) from \(|\mathbf{x}|\) to a fixed 30-token length with linear interpolation. After aligning the attention lengths, we average all attention maps. The horizontal axis shows the aligned positions on the prompt \([S_{n},\mathbf{x}_{\text{test}}]\). Each bar corresponds to one of the 28 Transformer layers. Within each bar, each thin line is 1 out of 16 attention heads. Darker (blue) areas mean smaller averaged attention, while brighter areas indicate high attention.
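A sketch of this aggregation step is given below. The attention rows themselves would come from the model (e.g., via the Hugging Face `output_attentions=True` option); here a random array and hand-picked spans stand in for them, and only the 30-bin linear interpolation and averaging are shown:

```python
import numpy as np

def rescale_span(attn_row, start, end, n_bins=30):
    """Linearly interpolate the attention on tokens [start, end) onto a fixed
    n_bins grid so spans of different lengths can be averaged."""
    span = attn_row[start:end]
    old = np.linspace(0.0, 1.0, span.size)
    new = np.linspace(0.0, 1.0, n_bins)
    return np.interp(new, old, span)

# attn_last: attention of the final token over the prompt, shape (seq_len,),
# e.g. outputs.attentions[layer][0, head, -1, :] when output_attentions=True.
rng = np.random.default_rng(0)
attn_last = rng.random(120)
attn_last /= attn_last.sum()
sample_spans = [(0, 35), (35, 72), (72, 110)]            # token ranges of the demonstrations

aligned = np.stack([rescale_span(attn_last, s, e) for s, e in sample_spans])
mean_profile = aligned.mean(axis=0)                       # averaged 30-bin attention profile
print(mean_profile.shape)
```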
In Figure 2, there are three major locations of attention mass. First, a majority of the attention is focused on the final few tokens in \(\mathbf{x}_{\text{test}}\), especially in the first 3 layers. This accords with previous observations that Transformer attentions tend to concentrate in a local window to construct local semantic features for \(\mathbf{x}_{\text{test}}\). Secondly, as also observed in previous studies, LLMs tend to allocate much attention to the first few tokens in a sequence to collect starter information. Finally, and most intriguingly, we observe concentrated attention on the sample label tokens \(\{y_{i}\}\). This phenomenon confirms an aggregation of label information in LLM ICL, in line with the prediction of Equation 4.
Figure 3: We use each head's attention weights on the demonstrative samples to manually average the sample labels \(y_{i}\). The \(x\)-axis shows layers and the \(y\)-axis shows heads in each layer. These figures show that this “reconstructed” output in some heads from layers 16\(\sim\)21 matches the LLM prediction with as high as 89.2% accuracy, and matches the ground truth with 86.4% accuracy.
### Can Attentions Be Interpreted as Kernel Functions?
Now that we observe the expected locations of attention weights on labels, we go on to verify whether the LLM really predicts by averaging over labels as suggested by Theorem 1. We iterate over the 16 heads and 28 layers, and insert their attention weights into Equation 4 to manually average the label distribution. This is similar in concept to a mind-reading experiment that predicts one's next word using brain waves only [20]. Specifically, for each attention head, we use the maximal attention value \(a_{i}\) within the range of \([\mathbf{x}_{i},y_{i}]\) as the kernel weight. Then, on the ICL samples, we reconstruct a prediction as follows:
\[\tilde{y}=\arg\max\frac{\sum_{i=1}^{n}\mathbf{e}(y_{i})a_{i}}{\sum_{i=1}^{n}a_ {i}}\]
The resulting "reconstructed output" \(\tilde{y}\) is compared for both LLM's actual prediction \(\hat{y}\) and ground truth label \(y_{\text{test}}\) to calculate its accuracy. Figure 2(a) and 2(b) plot the accuracy between \(\tilde{y}\) and \(\hat{y}\) and between \(\tilde{y}\) and \(y_{\text{test}}\) respectively. Interestingly, we spot the existence of several heads in layers 18\(\sim\)21 which demonstrate high accuracy in reconstruction. The highest of them (layer 17, head 10) achieves 89.2% accuracy on \(\hat{y}\) and 86.4% accuracy on \(y_{\text{test}}\). This result validates our hypothesis that some components in Transformer-based LLMs implement kernel regression. Note that this phenomenon happens within a few adjacent layers in the middle of the model. This is similar to our prediction in Section 4.1: not many attention layers are needed for kernel regression, as long as the required features have been computed by preceding layers. It is enough for the higher layers to only pass on the computed results.
### Which Samples Receive High Attention?
We go on to ask the question: if LLMs use attention to implement kernel regression, _what kind of similarity does this kernel function evaluate?_ From Equation 4, we see that the dot product measures similarity between the \(T_{\mathbf{x}}\), which encode information necessary for HMM inference: \(p(o|\mathbf{x})=p_{\text{pre-train}}^{\top}T_{\mathbf{x}}B\). Therefore, we conjecture that the attention value \(a_{i}\) between \(\mathbf{x}_{\text{test}}\) and \(\mathbf{x}_{i}\) correlates with their prediction similarity. Specifically, we define the _prediction similarity_ as follows:
\[\text{sim}(\mathbf{x}_{1},\mathbf{x}_{2})=P(o|\mathbf{x}_{1})^{\top}P(o| \mathbf{x}_{2}), \tag{6}\]
which is measured by applying LLMs on these texts _alone_, rather than in the ICL prompt. Finally, we compute the Pearson correlation coefficient between \(\text{sim}(\mathbf{x}_{\text{test}},\mathbf{x}_{i})\) and the attention values on the samples for each attention head. The results are shown in Figure 4. The absolute value of the correlation is not high, as \(P(o|\mathbf{x})\) is a dimensionality reduction of \(T_{\mathbf{x}}\) and can lose and mix information. However, we can still note a striking similarity between it and Figure 3. This means that the heads responsible for ICL mostly attend to _prediction-similar_ samples.
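Prediction similarity (Equation 6) only requires the next-token distributions obtained by running the model on each text alone; the correlation with a head's attention on the samples then follows directly. Toy distributions stand in for the model outputs below:

```python
import numpy as np

def prediction_similarity(p1, p2):
    """Dot product of next-token distributions P(o | x1) and P(o | x2)."""
    return float(p1 @ p2)

rng = np.random.default_rng(2)
vocab = 50
p_test = rng.dirichlet(np.ones(vocab))                   # P(o | x_test), model run on x_test alone
p_samples = rng.dirichlet(np.ones(vocab), size=10)       # P(o | x_i) for each demonstration
attn_on_samples = rng.random(10)                          # attention a head puts on each sample

sims = np.array([prediction_similarity(p_test, p) for p in p_samples])
r = np.corrcoef(sims, attn_on_samples)[0, 1]              # Pearson correlation coefficient
print("Pearson r:", round(float(r), 3))
```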
### Do Intermediate Features Store Information Useful for Kernel Regression?
Finally, we go to a more detailed level, and investigate the question: _where do Transformer-based LMs store the algorithmic information needed by kernel regression?_ To this end, we take out the intermediate _key_ and _value_ features in all layer heads, and see if the correct information is stored in the correct locations. Note that in Section 5.1, we observed that a major part of the attention weights is located at the label position, so we focus on positions within \([-1,3]\) relative to this position. Noticing
Figure 4: Pearson correlation between the samples' attentions and the _prediction similarity_\(\text{sim}(\mathbf{x}_{\text{test}},\mathbf{x}_{i})\) (Equation 6). Note the resemblance between this heatmap and Figure 3.
the analogy we made in Section 4.1 that \(k_{i}\sim\mathrm{vec}(T_{\mathbf{x}_{i}})\) and \(v_{i}\sim y_{i}\), we study two sub-questions: (1) whether _value_ vectors encode label information \(y_{i}\); and (2) whether _key_ vectors encode LLM prediction information \(P(o|\mathbf{x}_{i})\). For each head, we conduct Ridge regression with \(\lambda=0.01\) to fit the tasks in these two questions. Results are presented in Figure 5. We can observe that, generally, the high-attention position (y-axis = 0) indeed achieves the best accuracy. Figure 5(b) is intuitive, as tokens at a position later than the label token \(y_{i}\) can easily access the information of \(y_{i}\) by self-attention. The slight drop at position +3 means that a longer distance introduces more noise into this information flow. The results in Figure 5(a) tell us that, although sentence \(\mathbf{x}_{i}\)'s starting position in the ICL prompt is shifted and different from 0, \(k_{i}\) is still strongly correlated with \(P(o|\mathbf{x}_{i})\), which indicates a sense of translation invariance. Overall, the results mean that, with the attention map distributed as in Figure 2, the LLM is able to use the attention mechanism to extract information regarding \(T_{\mathbf{x}_{i}}\) and \(y_{i}\) from the _key_ and _value_ vectors effectively, just as we described.
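The probing step is a standard ridge-regression readout from the hidden vectors to the target, scored by argmax accuracy. The sketch below uses synthetic features in place of the extracted key or value vectors and assumes the paper's \(\lambda=0.01\) corresponds to the scikit-learn `alpha` parameter:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, dim, num_classes = 400, 64, 2
labels = rng.integers(0, num_classes, size=n)
# Synthetic "value vectors": label information plus noise, standing in for real features.
features = np.eye(num_classes)[labels] @ rng.normal(size=(num_classes, dim)) \
           + 0.5 * rng.normal(size=(n, dim))

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=0)
probe = Ridge(alpha=0.01).fit(X_tr, np.eye(num_classes)[y_tr])   # regress onto one-hot targets
acc = (probe.predict(X_te).argmax(axis=1) == y_te).mean()
print(f"probe accuracy: {acc:.2f}")
```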
## 6 Conclusions and Future Work
In conclusion, our work provides a novel theoretical view for understanding the intriguing in-context learning (ICL) capabilities of Transformer-based large language models (LLMs). We propose that LLMs can simulate kernel regression algorithms when dealing with in-context examples. Our empirical investigations into the in-context behaviors of LLMs reveal that the model's attention and hidden features during ICL are congruent with the behaviors of kernel regression. Furthermore, our theory also explains several observable phenomena in the field of ICL: why the retrieval of demonstrations similar to the test sample can enhance performance, the sensitivity of ICL to output formats, and the improvements in ICL accuracy when selecting in-distribution and representative samples. There are still remaining challenges in this topic, such as understanding the effect of sample orderings and the robustness to perturbed labels. These questions, along with understanding other aspects of LLMs, are exciting directions for future research.
## 7 Limitations
There is still room for improvement in the quest to understand the emergent ICL capacity of LLMs. As described in Section 4.2, several phenomena still baffle theoretical characterization, such as the sensitivity to sample orders and the robustness to input-label mismatch. We conjecture that answering these questions requires a deeper understanding of LLMs' ability in representation and implicit reasoning. Moreover, in this study we focus on the category of classification tasks where the output involves only one token. Although more complex tasks such as generation, translation and question answering can be viewed as a sequence of classification tasks, a more direct way to analyze text output is much desired. Finally, this study utilizes a well-established but also simple HMM framework, which is inconvenient for modelling the complicated nature of natural language, and we call for better theoretical frameworks for linguistic analysis.
Figure 5: Key and value vectors encode label and LLM prediction information at high-attention position. Here x-axis (0\(\sim\)27) is layer number, y-axis denotes relative position to the high-attention position within each demonstration, and z-axis is accuracy. Each sphere is an attention head. The curve shows average accuracy within each layer.
## Acknowledgements
This work was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and AIDA Program No. FA8750-18-2-0014. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
|
2303.17590 | Going Beyond Nouns With Vision & Language Models Using Synthetic Data | Large-scale pre-trained Vision & Language (VL) models have shown remarkable
performance in many applications, enabling replacing a fixed set of supported
classes with zero-shot open vocabulary reasoning over (almost arbitrary)
natural language prompts. However, recent works have uncovered a fundamental
weakness of these models. For example, their difficulty to understand Visual
Language Concepts (VLC) that go 'beyond nouns' such as the meaning of
non-object words (e.g., attributes, actions, relations, states, etc.), or
difficulty in performing compositional reasoning such as understanding the
significance of the order of the words in a sentence. In this work, we
investigate to which extent purely synthetic data could be leveraged to teach
these models to overcome such shortcomings without compromising their zero-shot
capabilities. We contribute Synthetic Visual Concepts (SyViC) - a million-scale
synthetic dataset and data generation codebase allowing to generate additional
suitable data to improve VLC understanding and compositional reasoning of VL
models. Additionally, we propose a general VL finetuning strategy for
effectively leveraging SyViC towards achieving these improvements. Our
extensive experiments and ablations on VL-Checklist, Winoground, and ARO
benchmarks demonstrate that it is possible to adapt strong pre-trained VL
models with synthetic data significantly enhancing their VLC understanding
(e.g. by 9.9% on ARO and 4.3% on VL-Checklist) with under 1% drop in their
zero-shot accuracy. | Paola Cascante-Bonilla, Khaled Shehada, James Seale Smith, Sivan Doveh, Donghyun Kim, Rameswar Panda, Gül Varol, Aude Oliva, Vicente Ordonez, Rogerio Feris, Leonid Karlinsky | 2023-03-30T17:57:43Z | http://arxiv.org/abs/2303.17590v2 | # Going Beyond Nouns With Vision & Language Models Using Synthetic Data
###### Abstract
Large-scale pre-trained Vision & Language (VL) models have shown remarkable performance in many applications, enabling replacing a fixed set of supported classes with zero-shot open vocabulary reasoning over (almost arbitrary) natural language prompts. However, recent works have uncovered a fundamental weakness of these models. For example, their difficulty to understand Visual Language Concepts (VLC) that go 'beyond nouns' such as the meaning of non-object words (e.g., attributes, actions, relations, states, etc.), or difficulty in performing compositional reasoning such as understanding the significance of the order of the words in a sentence. In this work, we investigate to what extent purely synthetic data could be leveraged to teach these models to overcome such shortcomings without compromising their zero-shot capabilities. We contribute **S**ynthetic **V**isual **C**oncepts (**SyViC**) - a million-scale synthetic dataset and data generation codebase allowing to generate additional suitable data to improve VLC understanding and compositional reasoning of VL models. Additionally, we propose a general VL finetuning strategy for effectively leveraging **SyViC** towards achieving these improvements. Our extensive experiments and ablations on VL-Checklist, Winoground, and ARO benchmarks demonstrate that it is possible to adapt strong pre-trained VL models with synthetic data significantly enhancing their VLC understanding (e.g. by 9.9% on ARO and 4.3% on VL-Checklist) with under \(1\%\) drop in their zero-shot accuracy.
## 1 Introduction
There have been impressive advances in the performance of zero-shot recognition through the use of large-scale pre-trained Vision & Language (VL) models [50, 23, 56, 31, 18, 64, 32, 16]. However, these VL models still face some important challenges in understanding Visual Language Concepts (VLC) beyond object nouns (e.g., recognizing attributes, relations, states) and in terms of compositional reasoning capabilities (i.e.., understanding subtle changes in meaning due to small changes in word order). Recently, several benchmark tests have been devised to demonstrate the extent to which these models lack these capabilities [57, 70, 66]1. As noted in several recent works [66, 70, 12], this behavior of VL models is likely due to the contrastive pre-training prevalent for all of them and likely inducing 'bag-of-objects' kind of representations (for both images and text alike). Indeed, for (even large) random batches of paired image-text samples, the collection of objects (nouns) in the image (or text) is likely to uniquely determine the image (or text) in the batch, making contrastive batch losses focus on the objects (nouns) while regarding
Figure 1: Overview of our proposed synthetic set: we place different objects in a scene and change their position, color, size and material. We further emphasize human-level interactions, sampling a wide set of body poses and behaviors to cover transitive and intransitive human actions.
other details (attributes, relations, states, word order, etc.) as unnecessary. Intuitively, this impairs VLC understanding and compositional reasoning of the resulting model.
Given the above, a natural question to ask is what is the most effective way to 'fix' the VL models to improve their VLC understanding and compositional reasoning performance? An approach proposed in concurrent works [12, 66], advocates for the use of text augmentation, using language tools to teach a model the importance of non-noun words by manipulating them (e.g., replacing with incorrect alternatives) and adding the resulting texts to the same batch. Although effective, such augmentation techniques are only easy on the text side and are much harder and prohibitively expensive on the image side. Indeed, finding, collecting, or generating real image samples sharing the objects but differing in their composition, attributes, relations, or states is very difficult. Although significant progress has been achieved with text-based editing [19, 24, 41, 6, 26, 3, 43], these methods are relatively slow (leveraging diffusion) and not sufficiently stable to allow effective use for augmentation in training pipelines. In this work, therefore, we propose an orthogonal route - VL data synthesis for fixing VL models by targeted demonstration. Specifically, we propose enhancing the VLC and compositionality aspects of the generated _visual and text_ data, in turn using this data for finetuning VL models teaching them to pay closer attention to these aspects. Moreover, besides being largely free and infinitely scalable, synthetic data has an additional advantage - it can also be free from privacy concerns always accompanying real data.
Besides the inherent challenges of realistic data simulation, building synthetic data that can be effectively used to improve VLC and compositionality aspects of VL models pre-trained on massive real data poses additional technical challenges. Unlike the majority of prior work focusing on synthetic visual data generation, we need to generate not only images but also the text that describes compositional items in a scene. We generate synthetic videos that leverage realistic physical 3D simulation [14] including diverse 3D environments and different 3D objects, human motion and action assets [1, 39, 46, 49], added interaction with objects, and different camera viewpoints. Every frame of these videos is accompanied by rich metadata, allowing the use of a language grammar to generate detailed descriptive captions of any instantaneous scene in each video. These captions, in turn, allow collecting diverse image-text pair samples; contrasting them with one another highlights to the model the importance of the compositional items in the text captions (e.g. different viewpoints or different frames in the same video share objects but may strongly differ in the VLC and other compositional items). While motion assets were used by previous works to generate synthetic data [61, 60], the visual data was not accompanied by textual captions and was not designed with the need to highlight compositionality in mind. We contribute **S**ynthetic **V**isual **C**oncepts (**SyViC**) - a large (million-scale) generated synthetic VL dataset with rich textual captions, easily extensible through our data synthesis code, together with all the already generated million-scale synthetic data used in this paper (Figure 1).
In addition to the data synthesis pipeline, we also offer a strategy for effectively leveraging the generated synthetic data, while avoiding forgetting real data alignment and losing the strong a-priori zero-shot capabilities of the model. We propose and extensively ablate a combination of domain adaptation by stylization [71], parameter efficient fine-tuning [20], long captions handling, and model averaging methods [63] to reduce forgetting, as well as examine the effect of different aspects of data synthesis and finetuning choices on the gains in VLC and compositionality understanding.
Our contributions can be summarized as follows: (i) we contribute **SyViC** - a million-scale synthetic dataset with rich textual captions, intended for improving VLC understanding and compositional reasoning in VL models, as well as the methodology and the generation codebase for its synthesis and potential extensibility; (ii) an effective general VL model finetuning strategy for leveraging **SyViC** data to enhance the aforementioned aspects of strong pre-trained VL models without sacrificing their zero-shot capabilities; (iii) experimental results and an extensive ablation study showing significant (over 10% in some cases) improvements in VLC understanding and compositional reasoning respectively, measured on all the recent VL-Checklist, ARO, and Winoground benchmarks and validated on the most popular CLIP [50] model and its derivatives (e.g. the most recent CyCLIP [18]).
## 2 Related Work
**Large-scale Vision&Language (VL) Models:** Large-scale pre-trained VL models such as CLIP [50] or ALIGN [23] show remarkable success in many zero-shot recognition tasks such as image classification or detection [67]. Despite the continued advancements made in this direction [18, 64, 32, 16], recent studies (_e.g._, [70, 57, 66]) show that existing VL models exhibit limited comprehension of structured vision language concepts (VLC). Yuksekgonul _et al_. [66] argue that contrastive learning for image-retrieval learns shortcuts and does not learn compositional information. To address this limitation, some approaches investigate how to augment the text captions or images in contrastive learning to enhance the ability of VLC [12, 66]. Smith _et al_. [54] learn VLC concepts with additional supervised datasets in a continual learning setup. In contrast, we use 3D graphic engines to generate realistic synthetic videos with different compositions and generate corresponding text captions, which allows a VL model to learn compositionality and non-object words such as attributes, actions, relations, etc.
**Learning from Synthetic Data.** There has been a lot of work on learning from synthetic data in image classifica
tion [14, 45, 40], semantic segmentation [52, 51], human pose estimation [61, 25], action recognition [60], etc. Synthetic data is easy to generate and particularly useful for providing dense annotation such as semantic segmentation and depth estimation, since these are prohibitively expensive to annotate manually. Some of the work relies on graphics engines to generate realistic data. Mishra [40] proposes a method to learn how to generate task-adaptive synthetic data with a 3D simulation engine. For human-related problems, parametric body models (e.g., SMPL [35]) can be leveraged, along with motion assets [59], to generate synthetic human videos for low-level body analysis tasks [61] or action recognition [9, 60]. Similar to our work, [9, 60] seek to associate semantic labels with synthetic images, but in contrast to their symbolic action categories, our focus is to assign rich textual descriptions to our generated images.
Since synthetic data suffers from a domain gap with respect to real images in textures, visual styles, and colors, domain adaptation and generalization methods have been proposed to address this issue. Adversarial learning [58, 15, 55] can be used to generate real-looking images or to align features between synthetic and real data. Additionally, stylization methods [72, 71] are proposed as a style augmentation to make a model robust to diverse styles. In contrast, we manually randomize the visual content, including different 3D objects, materials, and color attributes, in graphics engines. Then we generate realistic synthetic videos from different domains with corresponding text captions. The generated data can serve as hard negative augmentation and enhance VLC understanding.
## 3 Method
We first present our synthetic data generation pipeline (Sec. 3.1), then describe how we leverage it for significant gains in VLC understanding and compositional reasoning capabilities of strong pre-trained VL models (Sec. 3.2). Our entire approach is illustrated in detail in Fig. 2.
### Synthetic Data Generation
In this section, we outline the components and the pipeline of our approach used to generate the proposed Synthetic Visual Concepts (**SyViC**) synthetic VL dataset for improving VLC understanding and compositional reasoning of VL models. Our contributed dataset includes 767,738 image-text pairs, 598K sampled from 1,680 diverse synthetic videos, and the remaining 169K generated as individual static synthetic scenes. Example samples from **SyViC** are provided in Supplementary.
**3D physics-based simulation platform:** ThreeDWorld (TDW) [14], which is built on top of Unity3D, is a multi-modal simulation platform that enables realistic physical interactions. TDW contains \(2304\) objects, 585 unique materials subdivided in metal, cardboard, wood, ceramic, and glass, and over \(30\) indoor and \(9\) outdoor scenes (3D environments). For generating synthetic VL data, we start with placing random objects in a scene following the workflow proposed by [7]. We also use their camera positions and configurations to place objects visible inside good empty room perspectives. We group the available 3D object models by assigning dimension-related labels to each object, and use the ImageNet category labels available for each object model as its associated text for later caption synthesis.
**Camera Viewpoints**: To further augment the set of plausible object placements and relations, we simultaneously place \(4\) to \(12\) cameras around a specific point of an empty room, and randomly place \(n\geq 1\) objects in the scene, allowing us to render images from different views of the same scene further strengthening the compositional aspects of the data as discussed in the introduction (Sec. 1). For each scene (frame), TDW cameras are able to capture RGB images, the corresponding object instance and category semantic segmentation masks, and a depth map. We use these, as well as a range of sensor and physics data representing the state of the world returned by TDW's API, to enable dense annotations and supervision for each scene (frame) as part of our metadata generation process. We collect all of this information in our metadata, and use it to estimate the position of the objects in the scene instead of relying on the 3D coordinates of each object and the camera position.
**Digital humans**: As we focus on compositionality aspects of images and text pairs, having people in our images is important. However, people models (especially animatable ones) are usually not present in common collections of 3D assets. Existing large-scale synthetic datasets often focus on realistically placing objects in a scene, but typically humans and animals are not included. We first inspected what libraries were available for realistic human synthesis. PeopleSansPeople [13], a library with 28 human 3D models and 39 unique actions, allows only random human placement, not allowing for humanoid customization or integration of human-object interactions. We leverage TDW support for Skinned Multi-Person Linear Model [36] (SMPL) humanoids. SMPL is a parametric body model that enables a realistic representation of the shape and pose of arbitrary (non-clothed) 3D human bodies with diverse genders and shapes. SMPL models can be easily animated using motion capture data. The pose of the SMPL model is defined by a set of joint angles that determine the position of the corresponding body parts in the 3D space. The extended SMPL-X [44] additionally allows controlling hand articulation and face expressions. Given the available library asset in TDW that enables placing these SMPLs in a scene, we create a stand-alone module to automatically incorporate arbitrary custom animations and \(514\) unique human textures from the SURREAL [61] and Multi-Garment [5] datasets for clothing the synthetic human models for further enhancing the diversity and compositional features of our data.
**Human motion synthesis and handling interactions:** In Unity, SMPLs have skeletons and can be driven by motion
capture animations, but they do not have mass or colliders; this means that they are not physics assets, since they can walk through other objects without interacting with them. To solve this issue, we add colliders to each body part of the SMPL model and create three asset templates (i.e., male, female, neutral) that contain all mesh configurations. Figure 2 shows some examples of our rigged asset templates with colliders. All other existing 3D models in TDW have colliders and track collisions at runtime through TDW physics-based simulation. Therefore, our collider-enhanced SMPLs are pulled downward by gravity and simulate interaction by reacting naturally to collisions with other objects in the scene during motion simulation. For human actions, we first synthesize a diverse set of human actions from random language descriptions using TEACH [2], a Transformer-based model that generates a continuous sequence of SMPL motions from a sequence of action text labels. TEACH was trained on AMASS [39], a large-scale motion-capture (mocap) collection, and BABEL [49], a dataset that provides textual descriptions for AMASS, including unique per-frame action annotations. Second, we extend our set of SMPL motions by directly sampling unique human motions from BABEL and AMASS. We export the corresponding mocaps to FBX files and extract the animations in Unity, enabling them as asset bundles for use with TDW. FBX is a common format that facilitates data exchange between different 3D simulation platforms.
**Domain randomization:** One of the key qualities of the generated synthetic data is its ability to highlight the importance of VLCs and compositional properties of the scene (e.g., object attributes, relations, and more) in the contrastive learning objectives guiding the VL model finetuning. As opposed to methods based on text augmentation [12, 66] that can only enhance those on the text part of the VL image-text pairs, in **SyViC** construction we can easily manipulate the visual content as well. We randomly place \(1\) to \(8\) 3D object models in the scene, randomizing their material and color attributes (\(2304\times 585\times 551\) choices for each placed object). We randomly place \(0\) to \(4\) human avatars in the scene, randomizing their gender and clothing. We set camera poses as explained above, keeping the scene ID shared for all the cameras of the same scene, and randomly sample \(4\) viewpoints of each scene. We use human motion assets (as explained above) and randomly sample a motion sequence for each human avatar out of \(1898\) imported or generated mocaps. Finally, we sample an average of \(1500\) frames from each resulting synthetic video sequence, generating image-text pairs describing a scene with the same objects and attributes but in different arrangements and with different corresponding captions, thus enhancing the importance of the compositional aspects of the scene in the contrastive loss. We explore the importance of different simulation aspects in our ablation studies in Sec. 4.4.
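The randomization described above amounts to sampling a per-scene configuration before rendering. The sketch below is illustrative only: the asset counts mirror the numbers quoted in this section, but the identifiers and dictionary layout are hypothetical and do not correspond to actual TDW API structures.

```python
import random

# Asset counts quoted in this section; IDs are placeholders, not TDW identifiers.
N_OBJECT_MODELS, N_MATERIALS, N_COLORS, N_TEXTURES, N_MOCAPS = 2304, 585, 551, 514, 1898

def sample_scene_config(rng: random.Random) -> dict:
    """Sample one randomized scene: objects with attributes, humans, and cameras."""
    objects = [{"model": rng.randrange(N_OBJECT_MODELS),
                "material": rng.randrange(N_MATERIALS),
                "color": rng.randrange(N_COLORS)}
               for _ in range(rng.randint(1, 8))]
    humans = [{"gender": rng.choice(["male", "female", "neutral"]),
               "clothing_texture": rng.randrange(N_TEXTURES),
               "mocap": rng.randrange(N_MOCAPS)}
              for _ in range(rng.randint(0, 4))]
    n_cameras = rng.randint(4, 12)                    # cameras placed around the scene
    viewpoints = rng.sample(range(n_cameras), k=4)    # keep 4 rendered views per scene
    return {"objects": objects, "humans": humans, "viewpoints": viewpoints}

print(sample_scene_config(random.Random(0)))
```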
**Metadata-driven caption text synthesis:** In addition to RGB frames, we obtain a large collection of rich metadata from the simulation platform, containing information on objects, humanoids, and the scene setting. For each frame, the metadata includes: (i) The world coordinates of each object and humanoid in the scene, including the camera position and viewing direction. (ii) The physical attributes of each object and humanoid in the scene (object physical attributes include color, size, and material; human attributes include the per-frame action label that changes over time, and clothing description). (iii) Rendered depth images, instance segmentation masks, and category segmentation masks. Using the metadata, we compute the positional relations between each pair of objects and/or humans by comparing the pixels covered by their segmentation masks as well as their camera coordinates. Then, we use a simple grammar that deterministically maps positional relationships, object attributes, human attributes and action descriptions, and scene descriptions to a well-formed caption. More details on the grammar are provided in Supplementary.
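A toy illustration of such a deterministic grammar is sketched below; the relation rule, templates, and example objects are simplified placeholders and do not reproduce the exact grammar used for **SyViC** (detailed in the Supplementary).

```python
from itertools import combinations

def describe(obj):
    return f"a {obj['color']} {obj['material']} {obj['category']}"

def relation(a, b):
    """Coarse positional relation from image-plane centers (x grows right, y grows down)."""
    (xa, ya), (xb, yb) = a["center"], b["center"]
    if abs(xa - xb) >= abs(ya - yb):
        return "to the left of" if xa < xb else "to the right of"
    return "behind" if ya < yb else "in front of"

def frame_caption(objects, scene="room"):
    parts = [f"there is {describe(o)} in the {scene}" for o in objects]
    parts += [f"{describe(a)} is {relation(a, b)} {describe(b)}"
              for a, b in combinations(objects, 2)]
    return ". ".join(parts).capitalize() + "."

objs = [{"category": "chair", "color": "red", "material": "wooden", "center": (0.2, 0.5)},
        {"category": "vase", "color": "blue", "material": "ceramic", "center": (0.7, 0.3)}]
print(frame_caption(objs, "living room"))
```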
### Finetuning large-scale pre-trained VL models using synthetic data
In this section, we propose a methodology for effectively leveraging the SyViC synthetic VL data produced as explained in Sec. 3.1. We will use the following notation. Let \((T,I)\) be the text & image pair admitted by a VL model. The model (e.g., CLIP [50], CyCLIP [18]) components are denoted as: (i) image encoder \(e_{I}=\mathcal{E}_{I}(I)\); (ii) text encoder \(e_{T}=\mathcal{E}_{T}(T)\). In this notation, the text-to-image similarity score is computed as:
\[\mathcal{S}(T,I)=cos(\mathcal{E}_{T}(T),\mathcal{E}_{I}(I))=cos(e_{T},e_{I}), \tag{1}\]
where \(cos\) is the cosine similarity (inner product of normalized vectors). We next describe in detail the components of our finetuning strategy. Their merit and tradeoffs are thoroughly investigated in Sec. 4.4, arriving at the conclusion that parameter efficient finetuning + domain adaptive stylization + proposed caption splitting technique are the most effective combination. We also confirm in Sec. 4.4, that
Figure 2: Summarizing the entire flow of the proposed approach including components and choices of SyViC data synthesis pipeline (**left**) and proposed effective finetuning technique (**right**) attaining significant gains detailed below.
model averaging can provide expected trade-offs between VLC understanding and compositional reasoning gains and maintaining zero-shot performance.
**Avoiding forgetting through parameter efficient finetuning:** Inspired by [12, 54], we use LoRA [20] for VL fine-tuning with reduced forgetting of base model performance. We apply LoRA [20] to adapt the encoders (\(\mathcal{E}_{T}\), \(\mathcal{E}_{I}\)) of a pre-trained VL model by parameterizing the adapted weights \(\mathcal{W}_{k}^{*}\) corresponding to the original model weights \(\mathcal{W}_{k}\) for each layer \(k\) as:
\[\mathcal{W}_{k}^{*}=\mathcal{W}_{k}+\mathcal{A}_{k}\cdot\mathcal{B}_{k} \tag{2}\]
where for \(\mathcal{W}_{k}\) of size \(m\times l\), \(\mathcal{A}_{k}\) and \(\mathcal{B}_{k}\) are rank-\(r\) matrices of sizes \(m\times r\) and \(r\times l\), respectively. These low-rank residual adapters can be applied efficiently during training and collapsed at inference time, resulting in zero cost in terms of inference speed or parameter count [20]. During finetuning, all the base model parameters \(\forall k\), \(\{\mathcal{W}_{k}\}\) are frozen and only the LoRA adapters \(\forall k,\{(\mathcal{A}_{k},\mathcal{B}_{k})\}\) are learned. Keeping the rank \(r\) low keeps the number of extra parameters added by the LoRA adapters small, which significantly reduces forgetting, i.e., largely maintains the zero-shot performance of the original VL model.
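A minimal PyTorch sketch of such a rank-\(r\) residual adapter wrapped around a frozen linear layer, including the collapse step used at inference time; the rank of 16 follows the implementation details in Sec. 4.1, while the class itself is an illustrative re-implementation rather than the exact code used in our experiments.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a rank-r residual adapter: W* = W + A.B."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # original weights stay frozen
        # A is zero-initialized so training starts from the unmodified model.
        self.A = nn.Parameter(torch.zeros(base.out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.B.T @ self.A.T)

    def collapse(self) -> nn.Linear:
        """Fold the adapter into the base weight for zero-cost inference."""
        merged = nn.Linear(self.base.in_features, self.base.out_features,
                           bias=self.base.bias is not None)
        merged.weight.data = self.base.weight.data + self.scale * (self.A @ self.B)
        if self.base.bias is not None:
            merged.bias.data = self.base.bias.data.clone()
        return merged

adapted = LoRALinear(nn.Linear(512, 512), rank=16)
print(adapted(torch.randn(2, 512)).shape)   # torch.Size([2, 512])
```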
**Further reducing forgetting via model averaging:** [63] introduced an elegant technique to mitigate forgetting in finetuned models. All parameters of the source model (before finetuning) and the final model (after finetuning) are averaged (typically with weight \(\alpha=0.5\)). We evaluate the effect of this on SyViC-finetuned models in our ablations in Sec. 4.4.
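A sketch of this interpolation over model state dictionaries, assuming both checkpoints share the same architecture:

```python
import torch

def average_checkpoints(source_state, finetuned_state, alpha=0.5):
    """Interpolate every floating-point parameter between the pre-trained (source)
    and finetuned checkpoints; non-float buffers are copied from the source."""
    return {k: alpha * v + (1.0 - alpha) * finetuned_state[k]
            if torch.is_floating_point(v) else v
            for k, v in source_state.items()}

# usage sketch:
# model.load_state_dict(average_checkpoints(pretrained.state_dict(), finetuned.state_dict()))
```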
**Domain adaptation using style transfer:** In addition, to mitigate the domain gap introduced by the use of synthetic data, we experiment with two style transfer techniques that align the content and feature statistics of the input frames with randomly selected real-life images. An encoder-decoder model with pre-trained Adaptive Instance Normalization (AdaIN) [21] is used to align the channel-wise statistics of each synthetic frame with a randomly sampled image from the Human Motion Database (HMDB51) [29] dataset, thus generating a stylized synthetic image. We use AdaIN with an interpolation factor \(\alpha=0.5\). In addition, in order to preserve the color information in the synthetic frames, we first match the color distribution of the sampled style image to that of the synthetic frame [17]. We additionally experimented with MixStyle [73] (using ImageNet as a source of real style images) as an extension of the DA stylization pipeline, without observing significant gains over AdaIN.
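The core AdaIN operation with an interpolation factor can be sketched as follows; this shows only the feature-statistics alignment step and omits the pre-trained encoder-decoder and the color-matching preprocessing described above.

```python
import torch

def adain(content_feat, style_feat, alpha=0.5, eps=1e-5):
    """Adaptive Instance Normalization on (B, C, H, W) feature maps: re-normalize the
    content features to the style's channel-wise statistics, then interpolate with
    the original content (alpha controls the stylization strength)."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    stylized = (content_feat - c_mean) / c_std * s_std + s_mean
    return alpha * stylized + (1 - alpha) * content_feat

print(adain(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)).shape)
```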
**Handling arbitrary caption length with caption splitting:** The captions generated for SyViC are comprehensive: they contain descriptions of every object and/or humanoid visible in the frame as well as the pairwise positional relationship between objects. Intuitively, including these more elaborate (dense) descriptions in our captions gives a clear advantage in terms of promoting VLC understanding and compositionality following the finetuning of a VL model on SyViC. Hence, our captions are necessarily long texts that cannot be fully processed by common VL models' (e.g., CLIP) text encoders (\(\mathcal{E}_{T}\)) during training, as those are capped by a relatively short maximum context length (e.g., 77 tokens for CLIP). Therefore, inspired by CLIP's multi-caption strategy for inference [50], during training we handle arbitrary caption lengths by splitting a given caption into sub-captions that can each be encoded separately and averaging the text features obtained from each sub-caption. In particular, the feature representation of an arbitrary-length caption \(T\) is:
\[\mathcal{E}_{T}(T)=\frac{1}{n}\sum_{i}^{n}\mathcal{E}_{T}(T_{i}) \tag{3}\]
where \(T_{i}\) is a sub-caption comprised of one or more sentences that fit into the text encoder max context size.
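A sketch of this splitting-and-averaging step using the OpenAI CLIP tokenizer and text encoder; grouping a fixed number of sentences per chunk is a simplified stand-in for "each chunk fits within the 77-token context".

```python
import torch
import clip

def encode_long_caption(model, caption, device="cpu", sentences_per_chunk=3):
    """Split a long caption into groups of sentences, encode each group separately
    with the CLIP text encoder, and average the resulting features (Eq. 3)."""
    sentences = [s.strip() for s in caption.split(".") if s.strip()]
    chunks = [". ".join(sentences[i:i + sentences_per_chunk])
              for i in range(0, len(sentences), sentences_per_chunk)]
    tokens = clip.tokenize(chunks, truncate=True).to(device)
    with torch.no_grad():
        feats = model.encode_text(tokens).float()
        feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(dim=0)

# usage sketch:
# model, _ = clip.load("ViT-B/32", device="cpu")
# text_feature = encode_long_caption(model, very_long_syvic_caption)
```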
**Losses:** We employ the original models (e.g. CLIP [50] and CyCLIP [18]) contrastive and other losses when training on SyViC with the aforementioned architectural and training protocol changes as explained above.
## 4 Experiments
### Implementation details
For CLIP, we use the original OpenAI CLIP implementation and checkpoints. We modify their codebase to include LoRA adapters (Sec. 3.2), and use rank 16 in all our experiments. For CyCLIP, we adapt the implementation used in [12]2. For both CLIP and CyCLIP, we use a 5e-7 initial learning rate for finetuning and follow a cosine annealing learning rate schedule [38] using an Adam [27] optimizer and fine-tune checkpoints for six epochs. For all experiments, we use ViT-B/32 as the model architecture and fine-tune it for 6 epochs on one A100 GPU with a total batch size of \(400\) image-caption pairs. In addition to the original CLIP data augmentation transforms, we apply heavy random augmentation policies including manipulations in image inversion, contrast, sharpness, equalization, posterization, colorization, brightness, and solarization.
Footnote 2: Code and checkpoints kindly shared by the authors.
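A condensed sketch of the optimization setup described above (Adam, 5e-7 initial learning rate, cosine annealing over 6 epochs); the step count is illustrative, and in the actual setup only the LoRA adapter parameters would be passed to the optimizer.

```python
import torch
import clip

EPOCHS, STEPS_PER_EPOCH = 6, 1500            # step count is illustrative

model, _ = clip.load("ViT-B/32", device="cpu")
# Only the LoRA adapter parameters are optimized in practice; all parameters are
# passed here for brevity.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-7)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=EPOCHS * STEPS_PER_EPOCH)
```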
### Datasets
To test the effectiveness of our proposed SyViC synthetic dataset and the accompanying finetuning approach for improving VL models' VLC understanding and compositional reasoning capabilities, we evaluate on 3 benchmarks (Winoground [57], VL-Checklist [70], and ARO [66]) consisting of 7 datasets in total.
**VL-Checklist [70]** - is a large-scale dataset comprised of: Visual Genome [28], SWiG [48], VAW [47], and HAKE [33]. Each image of these datasets is associated with two captions, a positive and a negative. The positive caption corresponds to the image and is taken from the source dataset. The negative caption is made from the positive caption by changing one word, so the resulting sentence no
longer corresponds to the image. Depending on the word that was changed, VL-Checklist evaluates 7 types of VLC that can be divided into two main groups: (1) Attributes - color, material, size, state, and action, and (2) Relations - spatial or action relation between two objects and/or humans. In the following, we report average results for each of the main (Rel. and Attr.) groups on the combined VL-Checklist dataset. We also detail the individual improvements on all 7 VLC types in Fig. 3 (left).
**Winoground [57]** - is a small dataset that evaluates the ability of VL models to perform compositional reasoning, specifically understanding the meaning of a sentence after changing the order of its words. The dataset has 400 samples, each comprised of two images and two texts. The texts have the same words in a different order, each text corresponding to one image in the sample. The Winoground metrics include: (a) text score - percent of samples where the model picks the correct text for each image; (b) image score - percent of samples where the model picks the correct image for each text; (c) group score - percent of samples where both text and image score conditions are satisfied jointly. Recently, [10] analyzed Winoground for the source of its difficulty and found that only \(171\) of its 400 samples form a valid subset. Other samples are not compositional, are ambiguous, relate to invisible details, have highly uncommon images or text, or require complex reasoning beyond compositionality. We report results on both the full Winoground and the 'clean' 171-sample subset from [10].
**ARO [66]** - or the Attribution, Relation, and Order benchmark, is a large dataset designed to evaluate the ability of VL models to understand four different types of skills. It consists of Visual Genome Attribution and Visual Genome Relation, which leverage the Visual Genome [28] dataset along with the GQA [22] annotations to test the understanding of object properties and relations in complex natural scenes. VG-Relation includes \(48\) distinct relations with \(23937\) test cases, and VG-Attribution includes \(117\) unique attribute pairs with \(28748\) test cases. ARO also leverages the COCO [34] and Flickr30k [65] datasets to evaluate the model's sensitivity in selecting the right caption after four different shuffling perturbations (e.g., exchanging nouns and adjectives, or shuffling trigrams). These tests are performed on the \(5000\) and the \(1000\) images from the respective COCO and Flickr30k test splits.
### Results
The main results of finetuning CLIP [50] and CyCLIP [18] (one of CLIP's most recent improvements) are summarized in Tables 1 and 2. All models were finetuned using our proposed approach and **SyViC** synthetic data to obtain their syn-\(<\)_model\(>\)_ variants. Each model is compared to its respective source model pre-trained on large-scale real data before finetuning on **SyViC**. As we can observe, our **SyViC** synthetic data and the proposed finetuning recipe on this data demonstrate significant improvements over their source baselines. E.g., for CLIP we obtain \(1.75\%\), \(4.34\%\), and \(9.9\%\) average absolute improvements in the Winoground group score (the most difficult average metric), VL-Checklist, and ARO, respectively. In addition, we illustrate the individual VLC metrics improvements obtained for CLIP on the VL-Checklist and ARO benchmarks in Fig. 3, showing up to \(9.1\%\) and \(12.6\%\) respective absolute improvements. This underlines the effectiveness and promise of our method and **SyViC** synthetic data towards improving VLC understanding and compositional reasoning in VL models. Importantly, as we can see from Table 1, these strong gains come at a very small (under 1%) cost in the zero-shot performance of the respective VL models, measured on the standard Elevater [30] benchmark over 21 diverse zero-shot tasks.
\begin{table}
\begin{tabular}{l|l l l|l l l l l|l} \hline \hline & \multicolumn{3}{c|}{VL Checklist} & \multicolumn{5}{c|}{ARO} & ZS \\ & Relation & Attribute & **Average** & VG-Rel. & VG-Att. & Flickr30k & COCO & **Average** & (21 tasks) \\ \hline CLIP & 63.57 & 67.51 & 65.54 & 58.84 & 63.19 & 47.20 & 59.46 & 57.17 & 56.07 \\ CyCLIP & 61.15 & 66.96 & 64.06 & 59.12 & 65.41 & 20.82 & 29.54 & 43.72 & 55.99 \\ \hline syn-CLIP & 69.39 (\(\pm\)5.82) & 70.37 (\(\pm\)2.86) & 69.88 (\(\pm\)4.34) & 71.40 (\(\pm\)12.56) & 66.94 (\(\pm\)3.75) & 59.06 (\(\pm\)11.80) & 70.96 (\(\pm\)11.5) & 67.09 (\(\pm\)9.93) & 55.27 (\(\pm\)4.8) \\ syn-CyCLIP & 65.73 (\(\pm\)4.58) & 68.06 (\(\pm\)1.1) & 66.89 (\(\pm\)2.83) & 69.02 (\(\pm\)9.59) & 63.65 (\(\pm\)4.76) & 49.17 (\(\pm\)28.35) & 59.36 (\(\pm\)29.82) & 60.30 (\(\pm\)16.85) & 55.40 (\(\pm\)4.68) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of syn-\(<\)_model\(>\)_s – finetuned on **SyViC** using our proposed recipe, measured on VL-Checklist [70] and ARO [66]. Gains and losses are highlighted in green and red respectively.
\begin{table}
\end{table}
Table 2: Winoground [57] performance of syn-CLIP – finetuned on **SyViC**. The syn-CyCLIP results on Winoground are provided in the Supplementary. \({}^{\dagger}\) ‘clean’ (no-tag) subset of valid Winoground samples from [10]

\begin{table}
\begin{tabular}{c c|c c|c} \hline \hline objects with attr. & \multirow{2}{*}{humans} & \multicolumn{3}{c}{VL Checklist} \\ randomization & & Relation & Attribute & Average \\ \hline & CLIP & 63.57 & 67.51 & 65.54 \\ \hline ✗ & ✓ & 64.03 & 67.09 & 65.56 \\ ✓ & ✗ & 65.00 & 68.15 & 65.95 \\ ✓ & ✓ & **69.39** & **70.37** & **69.88** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Importance of human avatars, objects, and object attribute variations, evaluated on VL-Checklist and CLIP
### Ablations
We extensively ablate our **SyViC** synthetic data and the proposed VL model finetuning approach on this data according to the following points. We perform our ablations using the most popular CLIP model finetuned on our **SyViC** synthetic dataset and evaluated on the largest of the benchmarks - VL-Checklist.
**SyViC - objects, humans, object attribute randomization** - we evaluate the major components that comprise our **SyViC** synthetic data, namely the importance of the synthetic data to contain humans performing various motions and actions, the importance of having objects with randomized attributes (Sec. 3.1), and the final result of having all types of data combined. The results of this ablation are summarized in Tab. 3. As expected, humans alone cannot teach the model the needed skills only improving relations VLC by a small margin. Additionally, having only objects with randomized attributes improves attribute VLC, yet only improves relations by \(1.4\%\) which is also expected, as many of the relations involve human actions. The best result is observed on the combined dataset with all the components.
**SyViC - human clothing** - next we evaluate the diversity of human clothing comparing 3 levels of diversity: (i) none - only using a uniform color for the human models; (ii) basic - using less diverse texture maps from SURREAL [61]; and (iii) most diverse - using texture maps from Multi-Garment [5], enriched with clothing colors, human age, and hair color annotations (manually done by us for the textures) which increase the expressivity of our captions. Results are presented in Table 4. As expected the most diverse human textures deliver the best result underlining the importance of this factor. Surprisingly, better human textures lead to improved VL-Checklist Relations metric performance, likely due to the significantly better realism of the Multi-Garment textures.
**SyViC - types of object attributes to randomize** - in Table 5 we examine how randomizing different object attributes affects performance. Specifically, we evaluate the randomization of size, material, and color. Interestingly, we find that the best performance is achieved without color randomization. We suspect it is due to unnatural color-object combinations that arise under such randomization. These in turn teach the model wrong beliefs on real objects' color distributions as well as go against true object-color associations existing in the VL model following massive-scale pre-training on the original VL data.
**SyViC - types of captioning** - we have investigated several ways to obtain textual captions from **SyViC** metadata (Sec. 3.1). Results are summarized in the Supplementary. We compared our proposed metadata grammar-based approach to two cascade methods that paraphrase the captions resulting from the grammar using zero-shot LLM inference (in-context learning). The paraphrased caption is then appended to the original grammar-based caption and consumed through our caption-splitting module (as standalone, open LLM-based paraphrasing is not very high quality). As can be seen, currently paraphrasing has minimal effect, but we posit it will become an important tool as stronger LLMs become openly available.
**SyViC - importance of physics and number of humans in the scene** - we also looked into the extent to which reliable physical simulation (made available in **SyViC** through TDW and Unity capabilities) and human-human positional and other relations are important for the observed VL model improvements. In Table 6 we evaluate the effects of removing the physics (resulting in humans or objects floating in space) or removing the multi-human scenes (thus preventing all human-human relations from appearing in the data). As expected, both reliable physics simulation and human-human relations (interactions of sorts) matter mostly for the gains in the Relations metric.
**SyViC - number of models and number of samples** - for lack of space, this is explored in the Supplementary.
**SyViC - finetuning recipe components** - in Table 7 we extensively evaluate the different components of the proposed finetuning approach on **SyViC** that leads to significant improvements on VL-Checklist, ARO, and Winoground VLC and compositional reasoning metrics. We start with vanilla
\begin{table}
\begin{tabular}{c c|c c|c} \hline \hline \multirow{2}{*}{physics} & \multirow{2}{*}{multi-human} & \multicolumn{3}{c}{VL Checklist} \\ & & Relation & Attribute & Average \\ \hline & CLIP & 63.57 & 67.51 & 65.54 \\ \hline ✓ & ✗ & 67.66 & 69.24 & 68.45 \\ ✗ & ✓ & 65.91 & 69.01 & 67.46 \\ ✓ & ✓ & 69.39 & 70.37 & 69.88 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Importance of physics simulation and multi-human relations in **SyViC**, evaluated on VL-Checklist and CLIP
\begin{table}
\begin{tabular}{c c|c c|c} \hline \hline \multirow{2}{*}{SURREAL} & \multirow{2}{*}{Multi-Garment} & \multicolumn{3}{c}{VL Checklist} \\ & & Relation & Attribute & Average \\ \hline & CLIP & 63.57 & 67.51 & 65.54 \\ \hline ✗ & ✗ & 67.56 & 68.73 & 68.15 \\ ✓ & ✗ & 67.56 & 67.26 & 67.41 \\ ✗ & ✓ & **69.39** & **70.37** & **69.88** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Importance of human avatar clothing choices between SURREAL, Multi-Garment, and simple color textures (corresponding to none), evaluated on VL-Checklist and CLIP
\begin{table}
\begin{tabular}{c c c|c c|c} \hline \hline \multirow{2}{*}{color} & \multirow{2}{*}{size} & \multirow{2}{*}{material} & \multicolumn{3}{c}{VL Checklist} \\ & & & Relation & Attribute & Average \\ \hline & CLIP & & 63.57 & 67.51 & 65.54 \\ \hline ✓ & ✗ & ✗ & 67.71 & 64.61 & 66.16 \\ ✗ & ✓ & ✗ & 68.58 & 68.23 & 68.40 \\ ✗ & ✗ & ✓ & 65.23 & 67.01 & 66.12 \\ ✓ & ✓ & ✓ & 66.67 & 65.97 & 66.32 \\ ✗ & ✓ & ✓ & **69.39** & **70.37** & **69.88** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Importance of different kinds of object attributes randomization, evaluated on VL-Checklist and CLIP
CLIP finetuning on **SyViC** (row #1), already showing some improvement in the VL-Checklist relations metrics and on the ARO benchmark, yet losing to base CLIP on the VL-Checklist attributes metrics. Adding our caption splitting module (Sec. 3.2) allows handling the long (arbitrary size) texts outputted by our metadata-driven grammar and consequently utilizes all the caption information, re-gaining the attributes performance (row #2). Adding parameter-efficient finetuning (LoRA, Sec. 3.2) regularizes finetuning by forcing smaller (low-rank, low-parameter) updates of the large-scale pre-trained CLIP model, consequently somewhat handling the expected domain gap between the synthetic data of **SyViC** and the downstream evaluation (real data) tasks. Notably, LoRA does not add any additional parameters to the model: all LoRA adapters are collapsed into the model weights after finetuning. Consequently, we observed significant improvements from adding LoRA in all metrics (row #3), with only minor degradation (0.7%) in ZS evaluation. Adding domain stylization (Sec. 3.2), we observe the best results in all VLC and compositional reasoning metrics, improving ARO by 2.8% and keeping (even slightly improving) the advantages on VL-Checklist. Next, we investigate variations of our best approach (LoRA + domain stylization + caption splitting). First, we investigate a strategy inspired by the LiT [69] approach (row #5). Freezing the visual encoder \(\mathcal{E}_{I}\), as expected, provides a (small) boost in ZS performance, but the reduced plasticity of the model comes at the price of smaller (only 3.1%) improvements on ARO and almost no improvements on VL-Checklist. This leads us to conclude that freezing the visual encoder is not a good strategy for **SyViC** finetuning. Next, we check the model averaging strategy (Sec. 3.2) inspired by [63] (row #6). This does a better job of mitigating ZS forgetting, while at the same time keeping more of the gains on VL-Checklist and ARO. We conclude that model averaging is a good strategy to complement **SyViC** finetuning, allowing a soft trade-off between mitigating ZS forgetting and the VLC and compositionality metrics gains. Finally, we explore again the importance of caption splitting for the best finetuning configuration we found for **SyViC** (row #7). We re-confirm it is indeed important, as without it we observe a drop in performance.
## 5 Summary & Conclusions
Large vision and language models have dictated the status quo in computer vision and multimodal perception, achieving state-of-the-art results in a number of challenging benchmarks. However, existing models struggle with compositional reasoning and understanding concepts beyond object nouns, such as attributes and relationships. Our work has investigated, for the first time, whether synthetic data can be leveraged to mitigate these shortcomings. We have proposed a data generation pipeline, which was used to create a million-scale dataset of synthetic images and accompanying captions, as well as an effective fine-tuning strategy and comprehensive analysis to enhance the compositional and concept understanding capabilities of multimodal models, without compromising their zero-shot classification performance.
**Limitations.** While we have achieved quite promising results in three different benchmarks, our work has limitations. As an example, our graphics simulator has a simplified model of lighting, sensor noise, and reflectance functions compared to the real world, which may impact robustness to color constancy. We believe more advanced domain adaptation and rendering techniques are likely needed to further improve our results. We also think a more detailed study of the scaling laws for synthetic data is a great research direction to fully unlock the potential of our work.
**Acknowledgements.** This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-19-C-1001. Any opinions, findings
\begin{table}
\begin{tabular}{c|c c c c c c|c c|c|c} \hline \hline \# & LoRA & freezing & model & DA & Caption & \multicolumn{3}{c|}{VL Checklist} & \multicolumn{1}{c|}{ARO} & ZS \\ & & \(\mathcal{E}_{I}\) & averaging & styl. & split emb. & Rel. & Attr. & Avg. & Avg. & (21 tasks) \\ \hline & & & CLIP & & & 63.57 & 67.51 & 65.54 & 57.17 & 56.07 \\ \hline
1 & ✗ & ✗ & ✗ & ✗ & ✗ & 67.10 & 65.45 & 66.28 & 62.83 & 53.84 \\
2 & ✗ & ✗ & ✗ & ✗ & ✓ & 67.76 & 67.51 & 67.64 & 59.27 & 53.87 \\
3 & ✓ & ✗ & ✗ & ✗ & ✓ & 69.32 & 69.46 & 69.39 & 64.34 & 53.16 \\ \hline
4 & ✓ & ✗ & ✗ & ✓ & ✓ & **69.39** & **70.37** & **69.88** & **67.09** & 54.54 \\ \hline
5 & ✓ & ✓ & ✗ & ✓ & ✓ & 63.54 & 68.20 & 65.87 & 60.29 & 55.27 \\
6 & ✓ & ✗ & ✓ & ✓ & ✓ & 66.70 & 69.62 & 68.16 & 63.76 & **55.72** \\
7 & ✓ & ✓ & ✗ & ✓ & ✗ & 65.16 & 66.88 & 66.02 & 57.55 & 52.69 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Importance of the finetuning recipe components, evaluated on VL-Checklist and CLIP
Figure 3: **(left)** detailed evaluation of syn-CLIP on the 7 separate VL-Checklist [70] metrics; **(right)** detailed evaluation of syn-CLIP on all the Compositional Tasks proposed in ARO [66].
and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). This project was also partially supported by NSF Award IIS-2221943, ANR CorVis ANR-21-CE23-0003-01, and the MIT-IBM Watson AI Lab.
|
2305.13478 | Automatic Readability Assessment for Closely Related Languages | In recent years, the main focus of research on automatic readability
assessment (ARA) has shifted towards using expensive deep learning-based
methods with the primary goal of increasing models' accuracy. This, however, is
rarely applicable for low-resource languages where traditional handcrafted
features are still widely used due to the lack of existing NLP tools to extract
deeper linguistic representations. In this work, we take a step back from the
technical component and focus on how linguistic aspects such as mutual
intelligibility or degree of language relatedness can improve ARA in a
low-resource setting. We collect short stories written in three languages in
the Philippines-Tagalog, Bikol, and Cebuano-to train readability assessment
models and explore the interaction of data and features in various
cross-lingual setups. Our results show that the inclusion of CrossNGO, a novel
specialized feature exploiting n-gram overlap applied to languages with high
mutual intelligibility, significantly improves the performance of ARA models
compared to the use of off-the-shelf large multilingual language models alone.
Consequently, when both linguistic representations are combined, we achieve
state-of-the-art results for Tagalog and Cebuano, and baseline scores for ARA
in Bikol. | Joseph Marvin Imperial, Ekaterina Kochmar | 2023-05-22T20:42:53Z | http://arxiv.org/abs/2305.13478v2 | # Automatic Readability Assessment for Closely Related Languages
###### Abstract
In recent years, the main focus of research on automatic readability assessment (ARA) has shifted towards using expensive deep learning-based methods with the primary goal of increasing models' accuracy. This, however, is rarely applicable for low-resource languages where traditional handcrafted features are still widely used due to the lack of existing NLP tools to extract deeper linguistic representations. In this work, we take a step back from the technical component and focus on how linguistic aspects such as _mutual intelligibility_ or _degree of language relatedness_ can improve ARA in a low-resource setting. We collect short stories written in three languages in the Philippines - Tagalog, Bikol, and Cebuano - to train readability assessment models and explore the interaction of data and features in various cross-lingual setups. Our results show that the inclusion of CrossNGO, a novel specialized feature exploiting n-gram overlap applied to languages with high mutual intelligibility, significantly improves the performance of ARA models compared to the use of off-the-shelf large multilingual language models alone. Consequently, when both linguistic representations are combined, we achieve state-of-the-art results for Tagalog and Cebuano, and baseline scores for ARA in Bikol.
We release our data and code at github.com/imperialite/ara-close-lang
## 1 Introduction
Automatic readability assessment (ARA) is the task that aims to approximate the difficulty level of a piece of literary material using computer-aided tools. The need for such application arises from challenges related to the misalignment of difficulty labels when humans with various domain expertise provide annotations, as well as to the difficulty of manual extraction of complex text-based features (Deutsch et al., 2020). At the same time, readability assessment tools often use different definitions of complexity levels based on (a) age level (Vajjala and Meurers, 2012; Xia et al., 2016), (b) grade level (Imperial and Ong, 2020, 2021), or on established frameworks such as (c) the Common European Framework of Reference for Languages (CEFR)1(Francois and Fairon, 2012; Pilan et al., 2016; Xia et al., 2016; Reynolds, 2016; Vajjala and Rama, 2018).
Footnote 1: [https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions](https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions)
In recent years, deep learning methods and large language models (LLMs) have gained popularity in the research community. Often, studies using these methodologies focus primarily on improving performance across various metrics. This is particularly manifest in ARA research in languages with a high number of accessible and publicly available readability corpora such as English (Heilman et al., 2008; Flor et al., 2013; Vajjala and Lucic, 2018) and German (Hancke et al., 2012; Weiss et al., 2021; Weiss and Meurers, 2022), to name a few. At the same time, existing studies focusing on low-resource languages such as Cebuano (Imperial et al., 2022) and Bengali (Islam et al., 2012; Islam and Rahman, 2014) are still at the stage of primarily using traditional features such as word and sentence lengths to train predictive models.
We identify two problems that are related to the use of complex neural-based approaches: the success of such models depends on (a) whether there is enough available data to train a model using a customized deep neural network, and (b) in the case of LLMs, whether there exists an available off-the-shelf pre-trained model for a low-resource language of interest. Imperial et al. (2022) have recently shown that merely integrating extracted embeddings from a multilingual BERT model as features for Cebuano, a low-resource Philippine language, _does not outperform_ models trained with
orthographic features such as syllable patterns customized for the language. These challenges provide motivation for researchers to further explore methods that do not rely on the availability of large amounts of data or complex pre-trained models and investigate simpler, more interpretable models instead of black box architectures.
In this paper, we take a step back and focus on the data available for low-resource Philippine languages and the features extracted from them rather than on the algorithmic aspects. Specifically, we explore a scenario where small readability corpora are available for languages that are _closely related_ or belong to one major language family tree. To the best of our knowledge, incorporating the degree of language closeness or relatedness has not been explored before in any cross-lingual ARA setup. In this study, we make the following contributions:
1. We conduct an extensive pioneer study on readability assessment in a cross-lingual setting using three closely related Philippine languages: Tagalog, Bikolano, and Cebuano.
2. We extract various feature sets ranging from linguistically motivated to neural embeddings, and empirically evaluate how they affect the performance of readability models in a singular, pairwise, and full cross-lingual setup.
3. We introduce cross-lingual Character N-gram Overlap (CrossNGO), a novel feature applicable to readability assessment in closely related languages.
4. We also introduce and release a new readability corpus for Bikolano, one of the major languages in the Philippines.
5. Finally, we set a baseline for ARA in Bikol and report state-of-the-art results for Tagalog and Cebuano.
## 2 Background
### The Philippine Linguistic Profile
The Philippines is a linguistically diverse country in Southeast Asia (SEA) with over \(180\) languages spoken by over \(100\) million people. Languages in the Philippines can be best described as morphologically rich due to their free word order structures and high number of possible inflections, full and partial reduplications, and compound words Go and Nocon (2017). In addition, following lexicostatistical studies, languages are divided into two subgroups, _northern_ and _central_, wherein the major languages Ilokano, Pangasinan, and Kapampangan belong to the northern subgroup, and Tagalog, Bikol, Hiligaynon, and Cebuano are allocated to the central subgroup Walton (1979); Constantino (1998). Figure 1 illustrates the central subgroup of the Philippine language family tree.
In this study, our readability experiments focus on three major Philippine languages, _Tagalog_, _Cebuano_, and _Bikol_, which we refer to further in the paper with their corresponding ISO-639-2 language codes as tgl, ceb, and bcl, respectively.
### Mutual Intelligibility
Preliminary linguistic profiling studies of the main Philippine languages, such as by McFarland (2004), show that Tagalog, Bikol, and Cebuano are more closely related to one another than any languages in the northern family tree. A language's closeness or its _degree of relatedness_ to another language from the same family (sub)tree is commonly referred to as _mutual intelligibility_ Bloomfield (1926). Such similarities can be seen across multiple aspects, including, for example, (a) syllable patterns, where all three languages share three similar case-marking particles - _ang_ (En: _the_), _ng_ (En: _of_), and _sa_ (En: _at_) for Bikol and Tagalog, with _ug_ instead of _sa_ for Cebuano; and (b) shared words, e.g., _mata_ (En: _eye_) and _tubig_ (En: _water_).
For languages belonging to one greater subgroup, as is the case with the Central Philippine languages Tagalog, Bikol, and Cebuano, stronger quantitative evidence of mutual intelligibility may provide additional proof that these languages are indeed, at some level, closely related to each other. Thus,
Figure 1: The central subgroup of the Philippine language family tree highlighting the origins of Tagalog, Bikol, and Cebuano.
to contribute towards a further understanding of mutual intelligibility in the Philippine language space, we apply two linguistic similarity-based measures, character n-gram overlap and genetic distance, which we discuss in the sections below.
**Character N-Gram Overlap.** For our first measure, we use the overlap in character bigrams and trigrams for every pair from the selected set of languages. To do this, we simply extract and rank the top occurring character bigrams and trigrams for a given language and calculate the Rank-Biased Overlap (RBO)2Webber et al. (2010). RBO provides a measure of similarity between two lists while preserving the ranking. We also add English (eng) as an unrelated control language not belonging to the Philippine family tree for comparison. We use the CommonCore readability dataset Flor et al. (2013) for English as it also has three readability levels, and the level distribution is the most similar to the dataset of the three Philippine languages. Further information on the datasets in Tagalog, Bikol, and Cebuano can be found in Section 3. For all languages, we extract the top \(25\%\) of the most frequently occurring bigrams and trigrams for analysis. The top \(40\) most frequent bigrams and trigrams can be found in the Appendix.
Footnote 2: [https://github.com/changyaochen/rbo](https://github.com/changyaochen/rbo)
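The following is a minimal sketch (not the authors' released code) of this procedure: character bigrams are counted per language, the top \(25\%\) are kept as ranked lists, and the `rbo` package from the footnoted repository (whose `RankingSimilarity` interface is assumed here, as in its README) scores the pairwise overlap. The corpus variables `tgl_texts` and `bcl_texts` are placeholders for the per-language document collections described in Section 3.

```python
from collections import Counter

import rbo  # https://github.com/changyaochen/rbo


def top_char_ngrams(texts, n=2, top_frac=0.25):
    """Return the top fraction of most frequent character n-grams, most frequent first."""
    counts = Counter()
    for text in texts:
        text = text.lower()
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    ranked = [gram for gram, _ in counts.most_common()]
    return ranked[: max(1, int(len(ranked) * top_frac))]


# tgl_texts and bcl_texts are placeholder lists of raw story strings per language.
tgl_bigrams = top_char_ngrams(tgl_texts, n=2)
bcl_bigrams = top_char_ngrams(bcl_texts, n=2)

# RBO compares two ranked lists while respecting rank order; values near 1 mean high overlap.
overlap = rbo.RankingSimilarity(tgl_bigrams, bcl_bigrams).rbo()
print(f"TGL-BCL bigram overlap: {overlap:.3f}")
```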
Table 1 presents character overlap for bigrams and trigrams in a pairwise manner. These results show that all three Philippine languages have character overlap greater than \(75\%\) for bigrams among themselves while overlap with English is below \(27\%\). This pattern is observed again in trigrams with the overlap levels of \(53.3\%\) to \(62.8\%\) between Tagalog, Bikol, and Cebuano and those below \(15\%\) for English. These ranges of mutual intelligibility values for bigram and trigram overlap serve as an estimate of the degree of relatedness between the three Philippine languages, with the values for English serving as a baseline for an unrelated language.
**Genetic Distance.** As a secondary measure of mutual intelligibility, we calculate the genetic distance score Beau and Crabbe (2022) for each pair of languages studied in this work. Similar to the character n-gram overlap analysis, we add English for comparison purposes. Genetic distance Beaufils and Tomin (2020) is an automatic measure for quantifying the distance between two languages without the need for human judgments. This metric requires a list of words and their equivalent translations for any two languages of interest and calculates the number of exact consonant matches using the following formula:
\[\mathrm{GeneticDistance}=100-(\frac{match(l_{1},l_{2})}{n}) \tag{1}\]
where \(l_{1}\) and \(l_{2}\) are a pair of languages, \(n\) is the total number of words for analysis (usually \(100\)), and \(match(\cdot)\) is a function for extracting the consonant patterns for each word from the list as described in Beaufils and Tomin (2020). The metric is measured as a distance; thus, the values closer to \(100\) denote higher dissimilarity or non-relatedness.
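A simplified sketch of Equation (1) is given below; the exact consonant-matching rules follow Beaufils and Tomin (2020), and here a match is only approximated as two translation-equivalent words sharing the same consonant skeleton, with the result expressed on the 0-100 scale used in Table 2.

```python
VOWELS = set("aeiou")


def consonant_skeleton(word):
    """Keep only the consonants of a word, lowercased."""
    return "".join(ch for ch in word.lower() if ch.isalpha() and ch not in VOWELS)


def genetic_distance(word_pairs):
    """word_pairs: list of (word_in_l1, word_in_l2) translation equivalents, usually 100 pairs.

    Returns a value on a 0-100 scale: 0 means every pair shares its consonant pattern,
    100 means none do. The published metric uses finer-grained matching rules.
    """
    n = len(word_pairs)
    matches = sum(consonant_skeleton(w1) == consonant_skeleton(w2) for w1, w2 in word_pairs)
    return 100.0 * (1.0 - matches / n)


print(genetic_distance([("mata", "mata"), ("tubig", "tubig"), ("bahay", "balay")]))
```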
Table 2 shows the calculated genetic distance scores for each pair of languages including English. The mapping provided in the table is the prescribed guide from Beaufils and Tomin (2020).
Table 1: Mutual Intelligibility using Bigram and Trigram Character N-Gram Overlap
Table 2: Mutual Intelligibility using Genetic Distance with Mapping from Beaufils and Tomin (2020).
Judging by these results, the Philippine languages have genetic distance scores within the **related** and **highly related languages** range with the Tagalog-Cebuano pair showing the closest language distance of \(24.846\). Meanwhile, genetic distance scores between all considered Philippine languages and English fall within the very remotely related to **no recognizable relationship** categories, with the Tagalog-English pair showing the highest distance from each other. Similar to the character n-gram overlap, these results strengthen our initial observation and provide empirical evidence for mutual intelligibility between Tagalog, Bikol, and Cebuano languages which, beyond this study, may also be used in future linguistic research.
## 3 Readability Corpora in Philippine Languages
We have compiled open source readability datasets for Tagalog, Cebuano, and Bikol from online library websites and repositories. Each data instance in this study is a fictional short story. Table 3 shows the statistical breakdown and additional information on the levels in each readability dataset across different languages.
**Tagalog and Cebuano.** Datasets in these languages have already been used in previous research, including Imperial et al. (2019); Imperial and Ong (2020); Imperial (2021); Imperial and Ong (2021); Imperial et al. (2022). We use the same datasets as in previous research and incorporate them into this study for comparison. For Tagalog, we have assembled \(265\) instances of children's fictional stories from Adarna House3 and the Department of Education (DepED)4. For Cebuano, we use the dataset collected by Imperial et al. (2022) from Let's Read Asia5 and Bloom Library6, which were funded by the Summer Institute of Linguistics (SIL International) and BookLabs to make literary materials in multiple languages available to the public.
Footnote 3: [https://adarna.com.ph/](https://adarna.com.ph/)
Footnote 4: [https://lrmds.deped.gov.ph/](https://lrmds.deped.gov.ph/)
Footnote 5: [https://www.letsreadasia.org/](https://www.letsreadasia.org/)
Footnote 6: [https://bloomlibrary.org/](https://bloomlibrary.org/)
**Bikol.** There are no pre-compiled datasets available for readability assessment in Bikol yet. For this, we collected all available Bikol short stories from Let's Read Asia and Bloom Library totaling \(150\) instances split into \(68\), \(27\), and \(55\) for levels \(1\) to \(3\) respectively.
All collected data for this study follows the standard leveling scheme for early-grade learners or the first three grades from the K-12 Basic Curriculum in the Philippines.7 Each instance has been annotated by experts with a level from \(1\) to \(3\) as seen in Table 3. We use these annotations as target labels in our experiments. Finally, all datasets used in this study can be manually downloaded from their respective websites (see footnotes for links) under the Creative Commons BY 4.0 license.
Footnote 7: [https://www.deped.gov.ph/k-to-12/about/k-to-12-basic-education-curriculum/](https://www.deped.gov.ph/k-to-12/about/k-to-12-basic-education-curriculum/)
## 4 Experimental Setup
### ML Setup
In this study, our primary focus is on the depth of analysis of the traditional and neural features used in a cross-lingual setting applied to closely related languages. Thus, we use a vanilla Random Forest model which has been previously shown to be the best-performing monolingual-trained model for ARA in Tagalog and Cebuano (Imperial and Ong, 2021; Imperial et al., 2022). We leave the technical breadth of exploring other supervised algorithms to future work.
We use a stratified \(k\)-fold approach with \(k{=}5\) to have well-represented samples per class for a small-dataset scenario used in this study. We report accuracy as the main evaluation metric across all experiments for the ease of performance comparison with previous work (see Section 5). We use WEKA \(3.8\)(Witten et al., 1999)8 for all our modeling and evaluation and set hyperparameters of the Random Forest algorithm to their default values as listed in the Appendix.
Footnote 8: [https://www.cs.waikato.ac.nz/ml/weka/](https://www.cs.waikato.ac.nz/ml/weka/)
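The modeling itself was done in WEKA; the sketch below is an equivalent, hypothetical setup in scikit-learn for readers working in Python, where `X` is the per-story feature matrix and `y` holds the grade-level labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder data; substitute the extracted feature matrix and grade labels (1-3).
X = np.random.rand(60, 18)
y = np.random.randint(1, 4, size=60)

clf = RandomForestClassifier(random_state=0)           # default hyperparameters, mirroring WEKA
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Mean accuracy over 5 stratified folds: {np.mean(scores):.3f}")
```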
### Linguistic Features
We extract and consider a wide variety of features inspired by: (a) handcrafted predictors from previous work, (b) representations from a multilingual Transformer-based model (mBERT), and (c) CrossNGO, a novel feature applicable to readability assessment in closely related languages. We discuss each feature group below.
Traditional Handcrafted Features (TRAD).We integrate available traditional surface-based and
syllable pattern-based features in this study as predictors of text complexity. These features have been widely used in previous research on ARA in Tagalog and Cebuano (Imperial and Ong, 2020; Imperial et al., 2022). For Bikol, this is the first-ever study to develop a readability assessment model. In the case of low resource languages similar to those used in this study, these predictors are still the go-to features in ARA and have been empirically proven effective for Tagalog and Cebuano (Imperial and Ong, 2021). We have extracted a total of \(18\) traditional features for each language, including:
1. The total number of words, phrases, and sentences (3).
2. Average word length, sentence length, and the number of syllables per word (3).
3. The total number of polysyllable words of more than \(5\) syllables (1).
4. Density of consonant clusters or frequency of consonants without intervening vowels in a word (e.g. Tagalog: _sas_tre, En: _dressmaker_) (1).
5. Densities of syllable patterns using the following templates {v, cv, vc, cvc, vcc, ccv, cvcc, ccvc, ccvcc, ccvccc}, where v and c are vowels and consonants respectively (10); a minimal extraction sketch for a few of these predictors follows this list.
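Below is a minimal, illustrative extraction sketch for a handful of these predictors; the vowel-cluster syllable heuristic and the regex tokenization are simplifications and not the exact procedures used for the three languages.

```python
import re

VOWELS = "aeiou"


def count_syllables(word):
    """Approximate syllable count as the number of vowel clusters."""
    return max(1, len(re.findall(f"[{VOWELS}]+", word.lower())))


def trad_features(text):
    """Return a few surface and syllable-pattern predictors for one (non-empty) story."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z]+", text.lower())
    syllables = [count_syllables(w) for w in words]
    cv_pattern = lambda w: "".join("v" if ch in VOWELS else "c" for ch in w)
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_word_length": sum(map(len, words)) / len(words),
        "avg_sentence_length": len(words) / len(sentences),
        "avg_syllables_per_word": sum(syllables) / len(words),
        "polysyllable_count": sum(s > 5 for s in syllables),
        "cvc_density": sum(cv_pattern(w).count("cvc") for w in words) / len(words),
    }


print(trad_features("Ang batang babae ay nagbabasa ng libro. Masaya siya."))
```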
**Multilingual Neural Embeddings (mBERT).** In addition to the surface-based features, we explore contextual representations from a multilingual Transformer-based large language model via mBERT (Devlin et al., 2019). Previous research on probing BERT has shown convincing evidence that various types of linguistic information (e.g. semantic and syntactic knowledge) are distributed within its twelve layers (Tenney et al., 2019; Rogers et al., 2020). Applying this to ARA, Imperial (2021) showed that BERT embeddings could act as a _substitute_ feature set for lower-resource languages such as Filipino, for which NLP tools like POS taggers are lacking.
For this study, we specifically chose mBERT as this particular model has been trained using Wikipedia data in \(104\) different languages including Tagalog and Cebuano. Bikol is not included in any available off-the-shelf Transformer-based language models due to extremely limited online resources not large enough for training. Nonetheless, we still used the representations provided by mBERT noting its high intelligibility with Tagalog and Cebuano. Feature-wise, we use the mean-pooled representations of the entire twelve layers of mBERT via the sentence-transformers library (Reimers and Gurevych, 2019). Each instance in our readability data has an mBERT embedding representation of \(768\) dimensions.
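A hedged sketch of the embedding extraction is shown below, using the sentence-transformers library to mean-pool a multilingual BERT checkpoint; the checkpoint name and pooling configuration here are illustrative assumptions rather than the authors' released configuration.

```python
from sentence_transformers import SentenceTransformer, models

# Wrap the standard multilingual BERT checkpoint with a mean-pooling head.
word_embedding = models.Transformer("bert-base-multilingual-cased", max_seq_length=256)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
mbert = SentenceTransformer(modules=[word_embedding, pooling])

# One 768-dimensional vector per input text.
doc_embeddings = mbert.encode([
    "Si Pedro ay pumunta sa palengke.",
    "Nagbasa siya ng libro.",
])
print(doc_embeddings.shape)  # (2, 768)
```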
**Cross-lingual Character N-Gram Overlap (CrossNGO).** N-gram overlap has been used previously in various NLP tasks applied to Philippine language data such as language identification (Oco et al., 2013; Cruz et al., 2016), spell checking and correction (Cheng et al., 2007; Octaviano and Borra, 2017; Go et al., 2017), and clustering (Oco et al., 2013). Drawing inspiration from this fact and from the quantitative evidence of mutual intelligibility between Philippine languages presented in Section 2, we posit that a new feature designed specifically for closely related language data might improve the performance of the readability assess
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Source** & **Language** & **Level** & **Doc Count** & **Sent Count** & **Vocab** \\ \hline \multirow{3}{*}{Adarna and DepED} & \multirow{3}{*}{TGL (265)} & L1 & 72 & 2774 & 4027 \\ & & L2 & 96 & 4520 & 7285 \\ & & L3 & 97 & 10957 & 12130 \\ \hline \multirow{3}{*}{Let’s Read Asia and Bloom Library} & \multirow{3}{*}{BCL (150)} & L1 & 68 & 1578 & 2674 \\ & & L2 & 27 & 1144 & 2009 \\ & & L3 & 55 & 3347 & 5509 \\ \hline \multirow{3}{*}{Let’s Read Asia and Bloom Library} & \multirow{3}{*}{CBL (349)} & L1 & 167 & 1173 & 2184 \\ & & L2 & 100 & 2803 & 4003 \\ \cline{1-1} & & L3 & 82 & 3794 & 6115 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics on the readability corpora in Tagalog, Cebuano, and Bikol used in this study. The numbers in the brackets provided in the second column are the total number of documents per language broken down in the third and fourth columns per grade level. **Doc Count** and **Sent Count** denote the number of short story instances and the number of sentences per story. **Vocab** is the size of the vocabulary or of the accumulated unique word lists per level.
ment models. Thus, we introduce CrossNGO, which quantifies linguistic similarity using character overlap from a curated list of high-frequency n-grams within languages of high mutual intelligibility. We propose the following formula for calculating this metric:
\[\mathrm{CrossNGO}_{L,n}=\frac{\mathrm{count}(m(L)\cap m(d))}{\mathrm{count}(m(d))} \tag{2}\]
where \(n\in\{2,3\}\) denotes bigrams and trigrams, and \(m(\cdot)\) is a function that extracts unique n-grams from a document instance \(d\) and compares them to a list of top n-grams from a specific language \(L\). For each instance in a dataset, a vector containing three new features will be added representing the overlap between the text and the top n-grams from each of the three languages. We apply this calculation to both bigrams and trigrams using the n-gram lists for Tagalog, Bikol, and Cebuano obtained from the preliminary experiments, which results in a total of \(6\) new features.
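A minimal sketch of Equation (2) follows: for each document, we compute the fraction of its unique character n-grams that also appear in the precomputed top-n-gram list of each language, yielding six features per document. The `top_ngrams` mapping is assumed to hold the ranked lists obtained in Section 2.

```python
def char_ngrams(text, n):
    """Unique character n-grams of a lowercased document."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}


def crossngo_features(document, top_ngrams, orders=(2, 3), languages=("tgl", "bcl", "ceb")):
    """top_ngrams[lang][n] is the list of top-frequency n-grams for that language."""
    features = []
    for n in orders:
        doc_grams = char_ngrams(document, n)
        for lang in languages:
            reference = set(top_ngrams[lang][n])
            features.append(len(doc_grams & reference) / max(1, len(doc_grams)))
    return features  # 6 values: {bigram, trigram} x {tgl, bcl, ceb}
```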
While we presented two quantitative methods of mutual intelligibility in Section 2, only CrossNGO is applied as a metric _and_ a feature for this study. Staying faithful to the work of Beaufils and Tomin (2020), we did not use Genetic Distance to generate another set of features as it was originally developed as a language-to-language metric. Thus, we use it only as additional secondary evidence of language similarity. At the same time, we note that the proposed CrossNGO bears certain conceptual similarities to Genetic Distance as it measures the frequency of n-gram overlap with other languages. We perform an ablation study and demonstrate the contribution of individual feature sets in Section 5.
## 5 Results and Discussion
Table 4 shows the accuracy values obtained when training Random Forest models with various combinations of feature groups for each language of interest. The experiments were divided into three setups: (a) _singular cross-lingual_ (\(l_{1}{\rightarrow}l_{2}\)), (b) _pair-wise cross-lingual_ (\([l_{1}+l_{2}]{\rightarrow}l_{3}\)), and (c) _full cross-lingual_ (\([l_{1}+l_{2}+l_{3}]{\rightarrow}l_{1}\)), each corresponding to a separate subsection of Table 4. We use the term cross-lingual in this context when a model is trained with a readability corpus from a chosen language \(l_{n}\) or a combination of languages and evaluated with a test set from another language \(l_{m}\) as is demonstrated in Table 4. Similar to our preliminary experiments (Section 2), we include English using the CommonCore dataset as counter-evidence for comparison with closely related languages.
### Low-Resource Languages Benefit from Specialized Cross-lingual Features
For the singular cross-lingual experiments, the effectiveness of exploiting the bigram and trigram overlap via CrossNGO is demonstrated by high scores for Bikol and Cebuano (\(75.862\) and \(78.270\)) and comparable performance for Tagalog (\(50.100\)). Moreover, only for this setup, there is an observed trend where traditional features combined with CrossNGO outperform mBERT embeddings or the combination of all features for the respective language pair \(l_{1}{\rightarrow}l_{2}\). For Tagalog, this results in \(50.100\) vs. \(26.921\) and \(23.077\); for Bikol - \(75.862\) vs. \(68.965\) and \(69.000\); for Cebuano - \(78.270\) vs. \(71.015\) and \(73.913\). In terms of cross-linguality, in the case of Tagalog, using a model trained with Bikol data proves to be more effective than training with the original Tagalog data with an approximately \(5.8\)-point difference in accuracy. However, we still recommend the Tagalog model using all features with \(50.000\) accuracy since the \(0.1\) difference is not a significant improvement. Conversely, this trend is not observed in the Bikol and Cebuano experiments where the best-performing models of readability assessment are trained on the data from the same language \(l_{1}{\rightarrow}l_{1}\).
To further confirm if the addition of the CrossNGO feature statistically improves models' performance as compared to the representations from mBERT for low-resource languages, we aggregate the scores from the TRAD+CrossNGO group and compare them with the scores obtained when we use mBERT embeddings only, conducting a \(t\)-test. We did not include the scores using the combination of all types of features as it would confound the significance test. We achieve statistical significance at \(\alpha=0.01\) level (\(p=0.006\)) which shows that using traditional handcrafted features extended with CrossNGO significantly improves ARA models for low-resource languages, _provided_ the availability of data in a closely related language in the case of non-availability of multilingual LLMs (e.g., lack of mBERT model in Bikol).
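The significance test can be reproduced along the lines of the sketch below; whether a paired or an independent-samples t-test was used is an assumption here, and the score lists are placeholders to be filled with the aggregated accuracies from Table 4.

```python
from scipy import stats

# Placeholder values only: substitute the aligned accuracies of the TRAD+CrossNGO models
# and the mBERT-only models for the same train->test language settings (Table 4).
trad_crossngo_scores = [0.44, 0.50, 0.38, 0.76, 0.66, 0.58]
mbert_only_scores = [0.46, 0.27, 0.35, 0.69, 0.48, 0.48]

t_stat, p_value = stats.ttest_rel(trad_crossngo_scores, mbert_only_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```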
### Inclusion of a Closely Related Language in Data Produces More Confident Predictions
For pairwise cross-lingual experiments, we investigate the effect of adding a closely related language
on a model's performance using confusion matrices. As the middle section of Table 4 demonstrates, there are three possible pairwise combinations of Tagalog, Bikol, and Cebuano tested on each individual language. As there can be numerous ways to analyze the table, we highlight the results of the cross-lingual models with the top-performing pair and their utilized feature groups and compare them to their equivalent models in the singular cross-lingual experiment. Figure 2 illustrates this method of comparison for each language.
In the case of the Tagalog-Tagalog pair, most misclassifications occur between grades \(1\) and \(2\) in both training and test data using all features. This, in turn, is alleviated by incorporating the Bikol dataset in the training data, which reduces the level of confusion by approximately \(7\%\). The inclusion of Bikol also improves classification between grades \(2\) and \(3\) by three instances. In the case of the Bikol test data, the same finding is observed for the combined Bikol and Cebuano model using all features, where confusion in classifying grades \(1\) and \(3\) is reduced by two instances. Lastly, for Cebuano, the top-performing model in the pairwise cross-lingual setup includes Bikol data and uses all features. For this model, misclassifications in predicting grade \(1\) against the other two levels are reduced, and performance for predicting grade \(3\) is improved.
We further corroborate our observations that pairwise cross-lingual models outperform singular cross-lingual models by aggregating the scores from the two setups and running a \(t\)-test. Further to the results reported in the previous section, we observe statistically significant difference at the \(\alpha=0.01\) level (\(p=0.003\)) when pairwise cross-lingual models are compared to singular cross-lingual models. Overall, our findings provide solid empirical evidence that including a closely related language in the training data for a low-resource language significantly improves performance.
### Combining Specialized Cross-Lingual Features with Multilingual Neural Embeddings Achieves SOTA Results
While the previous sections highlight the significant increase in performance when using traditional features with CrossNGO as compared to mBERT embeddings only, we now discuss results and contributions when both linguistic representations are combined. As is demonstrated in Table 4, the scores obtained using the combined features applied to Tagalog and Cebuano achieve state-of-the-art results for ARA in these languages. For Tagalog, our model's accuracy of \(57.692\) outperforms the SVM with \(57.10\) accuracy and the Random Forest model with \(46.70\) presented in Imperial (2021). For Cebuano, our model achieves \(79.710\) beating the Random Forest model presented in Imperial et al. (2022) with a score of \(57.485\) with both models utilizing the same Cebuano dataset. Lastly, as there are no automated readability assessment models yet for Bikol, we report a baseline accuracy of \(79.328\), which is achieved using a model with a combination of traditional features (extended with CrossNGO) and mBERT embeddings extracted from data in all three Philippine languages.
### Conventional Fine-Tuning of mBERT Underperforms for Low Resource Cross-Lingual ARA
While the main focus of our work is on using traditional machine learning models with Random Forest, we explore if the standard approach for fine
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**TGL**} & \multicolumn{4}{c|}{**BCL**} & \multicolumn{4}{c}{**CEB**} \\ \cline{2-13} & **TRAD** & **TRAD + CrossNGO** & **mBERT Embdng** & **ALL** & **TRAD** & **TRAD + CrossNGO** & **mBERT Embdng** & **ALL** & **TRAD** & **TRAD + CrossNGO** & **mBERT Embdng** & **ALL** \\ \hline
**TGL** & 43.153 & 44.231 & 46.100 & 50.000 & 55.172 & 41.379 & 20.689 & 24.137 & 53.623 & 57.971 & 47.826 & 50.725 \\
**BCL** & 50.000 & 50.100 & 26.921 & 23.077 & 74.620 & 75.862 & 68.965 & 69.000 & 63.768 & 62.320 & 60.869 & 66.667 \\
**CEB** & 32.692 & 38.462 & 34.615 & 42.308 & 51.720 & 65.517 & 48.276 & 44.823 & 74.058 & 78.270 & 71.015 & 39.913 \\
**ENG*** & 26.923 & 44.230 & 28.846 & 26.923 & 48.275 & 37.681 & 48.250 & 48.275 & 46.375 & 62.018 & 43.478 & 43.376 \\ \hline
**TGL+BCL** & 51.101 & 51.923 & 40.384 & **57.692** & 72.441 & 69.965 & 69.000 & 68.966 & 56.521 & 60.869 & 62.318 & 69.565 \\
**BCL+CEB** & 48.077 & 50.000 & 42.307 & 48.076 & 68.956 & 72.414 & 75.6511 & 75.862 & 74.400 & 75.362 & 75.362 & **72.716** \\
**CEB+TGL** & 44.230 & 36.538 & 48.076 & 48.100 & 52.720 & 55.172 & 41.379 & 34.483 & 77.711 & 76.811 & 73.913 & 74.464 \\ \hline
**ALL** & 50.000 & 52.910 & 46.153 & 32.692 & 72.413 & 79.113 & 65.517 & **79.328** & 77.710 & 78.000 & 78.261 & 75.630 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The accuracy of cross-lingual modeling per language with various iterations using different combinations of traditional and neural-based features. The underlined values correspond to the best model for each of the three setups while the **boldfaced** values correspond to the overall highest-performing model for each language across all setups. We included English as counter-evidence only for the singular cross-lingual setup.
tuning LLMs such as mBERT can produce comparable performance. We use the same uncased mBERT model as presented in Section 4.
Table 5 shows the performance of singular, pairwise, and full cross-lingual setups formatted similarly to Table 4. These results confirm the findings of Ibanez et al. (2022), who have applied a similar setup to monolingual Tagalog ARA using a Tagalog BERT model. Judging by their results, the conventional fine-tuning approach proved to be inferior to the traditional way of extracting linguistic features from text and training a machine learning model like SVM or Random Forest. For this study, the highest-performing setups for Tagalog and Cebuano use Cebuano data only, and that for Bikol uses the combined Cebuano + Bikol datasets. None of the fine-tuned models outperform those presented in Table 4 using combinations of traditional features and CrossNGO. While previous work in cross-lingual ARA by Lee and Vajjala (2022) and Madrazo Azpiazu and Pera (2020) achieved relatively high performance with non-closely related languages using LLMs, we obtain less promising results, which we can attribute to: (a) the use of datasets of substantially smaller sizes (a total of \(13,786\) documents used in Azpiazu and Pera (2019) and \(17,518\) in Lee and Vajjala (2022) vs. only \(764\) in our study), and (b) lack of diverse data sources since only Wikipedia dumps were used for Tagalog and Cebuano for training the mBERT model.
## 6 Conclusion
In this work, we took a step back from the trend of exploring various technical components of the complex, deep learning models and, instead, focused on studying the potential effectiveness of linguistic characteristics such as mutual intelligibility for ARA in closely related Philippine languages - Tagalog, Bikol, and Cebuano. We implemented three cross-lingual setups to closely study the effects of interaction between the three languages and proposed a new feature utilizing n-gram overlap, CrossNGO, which is specially developed for cross-lingual ARA using closely related languages. Our results show that: (a) using CrossNGO combined with handcrafted features achieves significantly higher performance than using mBERT embeddings, (b) the inclusion of another closely related Philippine language reduces model confusion, and (c) using the conventional fine-tuning for LLMs like mBERT in this setup still does not
Figure 2: Confusion matrices for pairwise cross-lingual setup for all three languages. All models using an additional language dataset for ARA achieved an improved performance with an average of \(10.131\) across the board (\(7.692\) for TGL, \(6.862\) for BCL, and \(15.841\) for CEB).
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **TGL** & **BCL** & **CEB** \\ \hline
**TGL** & 0.420 & 0.500 & 0.333 \\
**BCL** & 0.420 & 0.633 & 0.575 \\
**CEB** & **0.520** & 0.500 & **0.697** \\ \hline
**TGL+BCL** & 0.440 & 0.566 & 0.469 \\
**BCL+CEB** & 0.400 & **0.637** & 0.666 \\
**CEB+TGL** & 0.480 & 0.500 & 0.590 \\ \hline
***ALL** & 0.460 & 0.633 & 0.636 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The accuracy scores of the conventional fine-tuning strategy applied to LLMs for various methods of cross-lingual ARA using the same uncased mBERT model for the extraction of embeddings.
outperform models with traditional features. Consequently, we come to the conclusion that using languages with high intelligibility is better suited for cross-lingual ARA. This is demonstrated in experiments with English added as an example of a non-related language, in which we do not achieve a substantial increase in performance for Tagalog, Cebuano, and Bikol.
Our results agree with the findings of previous studies in cross-lingual ARA such as those of Madrazo Azpiazu and Pera (2020) using English, Spanish, Basque, Italian, French, Catalan, and Weiss et al. (2021) using English and German, that also showed that the inclusion of additional language data can improve ARA results on other languages. However, our work is primarily motivated by the degree of language relatedness: we show that better results can be achieved for ARA in low-resource languages if we use closely related languages rather than any language, including non-related ones like English. Our study also provides an encouragement for researchers to consider approaches grounded in linguistic theories which can potentially be used to improve the performance in NLP tasks rather than always resorting to models that are expensive to train and hard to interpret.
## 7 Limitations
We discuss some limitations of our current work which can be further explored in the future.
**On Data Format**. We specifically use fictional short stories as our primary data for the study since we require gold standard labels for this document classification task. Moreover, fictional short stories are easier to find as they often come with a specified grade level compared to other types of literary texts such as magazines or web articles written in any of the three Philippine languages. We do not claim that our models are able to generalize on these other types of literary materials or on other types of closely related language pairs unless a full study is conducted which is outside the scope of this work.
**On Handcrafted Features**. We were only able to use traditional handcrafted features covering count-based predictors such as sentence or word count and syllable pattern-based features for training the Random Forest models. We did not extract other feature sets one may find in the previous work on English such as lexical density or discourse-based features since such features require NLP tools that are able to extract POS, named entities, relations, and discourse patterns that do not yet exist for all three Philippine languages used in this study. The work of Imperial and Ong (2021) covered a small set of lexical features such as _type-token ratio_ and _compound word density_ for readability assessment in Tagalog. Still, we cannot use this approach since all languages would need to have the same number of features as is a standard practice in model training.
**On Model Training**. Our choice of the Random Forest algorithm for training the ARA models is based on the substantial amount of previous work supporting the application of this method to low-resource ARA, e.g., to Tagalog and Cebuano in a monolingual setup Imperial and Ong (2020, 2021); Imperial (2021); Imperial et al. (2022), where it achieved better results than other algorithms such as SVM or Logistic Regression. One can consider these algorithms for comparison but the analysis of each ARA model trained with various algorithms to the same level of depth and focus that we have given to the Random Forest classifier in the present study would require a considerable amount of time as well as a higher page limit.
**On Current Measures of Mutual Intelligibility**. The majority of existing literature in linguistics, specifically on the topic of mutual intelligibility in Philippine languages, discusses examples in the context of speech communication. As such, one might claim that Cebuano and Tagalog are _not_ mutually intelligible by giving an example where a Tagalog speaker may not fully comprehend (or only recognize a few common words) another speaker if they are talking in Cebuano. While this is certainly true, in this study, we specifically focus on the mutual intelligibility of languages at a word and character level via written texts such as children's fiction books. From this, we see a substantial degree of _closeness_ between Tagalog, Cebuano, and Bikol compared to English. Thus, based on our results, we posit that mutual intelligibility may be used as an additional feature (see CrossNGO in Section 4) for text-based tasks such as readability assessment. We leave the exploration of our proposed novel feature in the speech communication
area to future work.
## 8 Ethical Considerations
We foresee no ethical issues related to the study.
## Acknowledgements
We thank the anonymous reviewers and area chairs for their constructive and helpful feedback. We also thank the communities and organizations behind the creation of open-source datasets in Philippine languages used in this research: DepED, Adarna House, Bloom Library, Let's Read Asia, SIL, and BookLabs. JMI is supported by the UKRI CDT in Accountable, Responsible, and Transparent AI of the University of Bath and by the Study Grant Program of the National University Philippines.
|
2307.05463 | EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone | Video-language pre-training (VLP) has become increasingly important due to its ability to generalize to various vision and language tasks. However, existing egocentric VLP frameworks utilize separate video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of a unified system. In this work, we introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement from the previous generation, by incorporating cross-modal fusion directly into the video and language backbones. EgoVLPv2 learns strong video-text representation during pre-training and reuses the cross-modal attention modules to support different downstream tasks in a flexible and efficient manner, reducing fine-tuning costs. Moreover, our proposed fusion in the backbone strategy is more lightweight and compute-efficient than stacking additional fusion-specific layers. Extensive experiments on a wide range of VL tasks demonstrate the effectiveness of EgoVLPv2 by achieving consistent state-of-the-art performance over strong baselines across all downstream. Our project page can be found at https://shramanpramanick.github.io/EgoVLPv2/. | Shraman Pramanick, Yale Song, Sayan Nag, Kevin Qinghong Lin, Hardik Shah, Mike Zheng Shou, Rama Chellappa, Pengchuan Zhang | 2023-07-11T17:50:15Z | http://arxiv.org/abs/2307.05463v2 |
# EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone
###### Abstract
Video-language pre-training (VLP) has become increasingly important due to its ability to generalize to various vision and language tasks. However, existing egocentric VLP frameworks utilize separate video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of a unified system. In this work, we introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement from the previous generation, by incorporating cross-modal fusion directly into the video and language backbones. EgoVLPv2 learns strong video-text representation during pre-training and reuses the cross-modal attention modules to support different downstream tasks in a flexible and efficient manner, reducing fine-tuning costs. Moreover, our proposed fusion in the backbone strategy is more lightweight and compute-efficient than stacking additional fusion-specific layers. Extensive experiments on a wide range of VL tasks demonstrate the effectiveness of EgoVLPv2 by achieving consistent state-of-the-art performance over strong baselines across all downstream. Our project page can be found at [https://shramanpramanick.github.io/EgoVLPv2/](https://shramanpramanick.github.io/EgoVLPv2/).
## 1 Introduction
Video-Language Pre-training (VLP) has proven to be the _de-facto_ solution for a variety of video-text tasks, e.g., video-text retrieval [107, 73, 4], VQA [104, 116, 127], zero-shot recognition, [8, 56, 36] and video-text grounding [68, 58]. This is fueled by recent advances in vision [17, 60, 6, 4, 2, 22, 61] and language [92, 16, 59, 112, 81, 14, 80], coupled with large-scale data [107, 126, 66, 4, 27, 15]. Existing video-language datasets generally fall under two categories: third-person view and first-person view (egocentric). The noticeable domain gap between them restricts VLP frameworks pre-trained on third-person videos from performing well on egocentric benchmarks [57]. However, the recent introduction of a massive-scale egocentric dataset Ego4D [27] helps unlock the full potential of egocentric VLP.
Existing egocentric VLP approaches [57, 125, 67, 3] pre-train separate (_dual_) video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of unified egocentric VL frameworks. Moreover, they lack strong zero-shot inference ability on multi-modal downstream tasks. This issue is commonly addressed by stacking dedicated fusion layers on top of the dual video and text encoders [64, 44, 105, 90, 109, 110, 117], or with a shared video-language architecture [48, 1, 41, 91, 94]. However, these approaches introduce a large number of fusion-specific pa
Figure 1: **EgoVLPv2 achieves the state-of-the-art performance across a broad range of egocentric video understanding tasks (see Table 1 for details) among similar-sized baselines by incorporating cross-modal attention in the transformer backbones to learn video-language representation.**
rameters, and the resulting encoder cannot be directly applied to uni-modal (video-only) tasks.
In this work, we present the second generation of egocentric VLP (EgoVLPv2), a significant improvement over the previous generation [57] by incorporating cross-modal fusion directly into the video and language backbones. Our approach improves over existing VLP frameworks by: \((i)\) fewer fusion parameters compared to stacked fusion-specific transformer layers or shared encoders, requiring less GPU memory, compute resources, and training time; \((ii)\) the flexibility to switch between dual and fusion encoders, by turning on and off cross-attention fusion using a gating mechanism; \((iii)\) being applicable to both uni- and multi-modal tasks.
Inserting cross-modal fusion directly into the backbone helps unify a wide range of dual- and fusion-encoder-based downstream tasks. Specifically, the "switching" ability of EgoVLPv2 enables us to utilize the same pre-trained encoders for fast retrieval and grounding tasks, which require dual and fusion encoders, respectively. Moreover, in contrast to existing egocentric VLP frameworks that learn task-specific fusion parameters during fine-tuning, EgoVLPv2 reuses the pre-trained cross-attention modules across different tasks, significantly reducing the fine-tuning cost. This enables us to introduce query-focused video summarization as a downstream task, which has recently gained attention in the community [69, 100, 101, 34, 102, 70]. The scarcity of annotated data has been a bottleneck to training decent-sized models end-to-end on this task, with the only available egocentric dataset, QFVS [84], providing merely \(135\) video-query training samples. EgoVLPv2 achieves new state-of-the-art results on QFVS with a decent margin over the baselines.
In summary, our contributions are: \((i)\) We advance a step forward in egocentric VLP by proposing EgoVLPv2, the second generation of EgoVLP [57] with cross-modal fusion in the backbone. Our proposed framework can switch between dual and fusion encoders and requires 45% lesser compute (GMACs) than learning additional fusion-specific transformer layers. \((ii)\) The switching capability of EgoVLPv2 allows us to unify a wide range of dual- and fusion-encoder-based downstream tasks under the same VLP framework and reduce the task-specific fine-tuning cost by employing the same pre-trained cross-attention modules across different video-language tasks. \((iii)\) We demonstrate the effectiveness of EgoVLPv2 on eight egocentric benchmarks and achieve state-of-the-art performance among comparable-sized backbones. We summarize these results in Figure 1.
## 2 Related Works
### VLP Frameworks
Video-language pre-training (VLP) has attracted increasing attention in recent years, following the success of image-language pre-training [78, 46, 33, 18, 5, 11, 63, 52, 19, 118, 111, 76, 53, 95, 98, 30, 96, 72, 45] and their applications [10, 24, 29, 50, 77]. There are three broad categories of VLP frameworks (see Figure 2):
**Dual Encoders:** Many existing egocentric VLP frameworks [57, 125, 3, 67] fall into this category. They use separate video and language backbones and learn task-specific cross-modal fusion during fine-tuning [4, 65, 106, 93]. They are commonly trained using InfoNCE [71] or MIL-NCE [65] objectives, and have been successful in video-text retrieval.
**Shared Encoder:** Approaches that learn a combined encoder for video and text fall under this category [48, 1, 41, 94, 91]. They are modality independent and can be applied to an image, video, text, audio, time-series, and single-view 3D data. Common learning objectives include masked language modeling [16, 127], masked frame modeling [89, 127], masked token modeling [105], masked modal modeling [64, 105], sentence ordering modeling [43], frame ordering modeling [43, 47], and video-text matching [43].
**Encoders with Stacked Fusion Layers:** This line of work uses dedicated cross-modal fusion layers on top of dual encoders [64, 44, 105, 90, 109, 110, 117], trained using similar objectives as shared encoders.
The latter two categories introduce a large number of parameters for cross-modal fusion. In this work, we propose a fourth category (Figure 2 (d)) by inserting cross-modal fusion in uni-modal backbones using a gating mechanism.
Figure 2: **Four categories of VLP frameworks. (a) use separate (_dual_) video and text backbones, with InfoNCE [71] as the common pretraining objective [57, 125, 3, 67] (b) use cross-modal fusion layers on top of dual encoders, with MLM, VTM, etc. as common pretraining tasks [64, 44, 105, 90] (c) use a single encoder for different modalities, with similar learning objectives as (b) [48, 1, 41] (d) Fusion in the Backbone (Ours).**
Our framework is flexible to act as either dual or shared encoders by switching cross-attention modules off and on.
### Video-Language Datasets
The success of VLP can be partially attributed to the availability of large-scale open-world video-text datasets such as ActivityNet [39], WebVid-2M [4], and HowTo\(100\)M [66]. These datasets comprise videos sourced from the Web, and are paired with the corresponding ASR captions, making them popular for VLP pre-training. Despite their impressive size, these existing video-text pretraining datasets typically feature 3rd-person views. On the other hand, egocentric videos have received increasing interest from the community. Previous egocentric datasets [15, 86, 55, 82, 74] were small-scale and domain-specific. The recently released Ego4D [27] is the first massive-scale egocentric dataset consisting of \(3670\) hours of videos collected by 931 people from 74 locations across 9 different countries world-wide. Recently, EgoClip [57] offered a filtered version of Ego4D with variable-length clip intervals instead of single timestamps. We train our proposed framework, EgoVLPv2, on the EgoClip version of Ego4D.
## 3 EgoVLPv2
### Fusion in the Backbone
We use TimeSformer [6, 4] and RoBERTa [59] as our video and language backbones. However, such a separate (_dual_) uni-modal encoder design does not capture cross-modality interaction and, thus, fails to produce fine-grained multi-modal representation. Existing VLP frameworks achieve cross-modal fusion by: \((i)\) learning a shared architecture [48, 1, 41, 91, 94] or stacking fusion layers on top of dual encoders [64, 44, 105, 90, 109, 110, 117], or \((ii)\) learning cross-modal fusion during fine-tuning [57, 125, 3, 67, 4, 65, 106, 93]. While the former offers superior cross-modal representation and zero-shot inference ability on multi-modal downstream tasks, it introduces a larger number of fusion parameters than the latter. In this work, we insert cross-modal fusion into the top few layers of uni-modal backbones to strike a balance between the two ideas.
Figure 3 shows the architecture of EgoVLPv2. Each TimeSformer encoder layer has a divided space-time attention module containing temporal and spatial self-attentions with residual connections. The output of space-time attention at \(k^{th}\) encoder layer, \(z^{(k)}\), can be expressed as:
\[\begin{split}\hat{x}^{(k)}_{vid}&=x^{(k-1)}_{vid}+\text{Tem-SA}(x^{(k-1)}_{vid})\\ z^{(k)}&=x^{(k-1)}_{vid}+\text{Spa-SA}(\hat{x}^{(k)}_{vid})\\ &=\text{Space-Time}(x^{(k-1)}_{vid})\end{split} \tag{1}\]
where \(x^{(k-1)}_{vid}\) is the output of the \((k-1)^{th}\) encoder layer, Tem-SA and Spa-SA represent temporal and spatial self-attention blocks, respectively. We insert multi-modal fusion inside the backbone by introducing gated cross-attention after the space-time attention module. Hence, the output
Figure 3: **Computation of three objectives, \(\mathcal{L}_{\text{EgoNCE}}\), \(\mathcal{L}_{\text{MLM}}\), and \(\mathcal{L}_{\text{VTM}}\).** We insert cross-modal fusion into uni-modal backbones with a gating mechanism. During pre-training, every forward iteration contains three steps: \((i)\) cross-attention modules are switched off, EgoVLPv2 acts as dual encoder, \(\mathcal{L}_{\text{EgoNCE}}\) is computed. \((ii)\) cross-attention is switched on, EgoVLPv2 acts as fusion encoder, and video-masked narration pair is fed into EgoVLPv2 to compute \(\mathcal{L}_{\text{MLM}}\)\((iii)\) cross-attention is kept on, hard-negative video-narration pairs are fed into EgoVLPv2 to compute \(\mathcal{L}_{\text{VTM}}\). This _fusion in the backbone_ strategy results in a lightweight and flexible model compared to using fusion-specific transformer layers.
of \(k^{th}\) fused TimeSformer layer, \(x^{(k)}_{vid}\), can be expressed as:
\[\begin{split} z^{(k)}&=\text{Space-Time}(x^{(k-1)}_{vid} )\\ x^{(k)}_{vid}&=x^{(k-1)}_{vid}+z^{(k)}+\alpha*\text{ CA}(z^{(k)},x^{(k-1)}_{text})\\ x^{(k)}_{vid}&=x^{(k)}_{vid}+\text{FFN}(x^{(k)}_{vid} )\end{split} \tag{2}\]
where \(x^{(k-1)}_{text}\) is the output from the \((k-1)^{th}\) RoBERTa layer, CA, FFN denote cross-attention block and feed-forward network, respectively, and \(\alpha\) is a learnable gating parameter initialized from \(0\). Each RoBERTa layer contains multi-head self-attention [92] followed by feed-forward layers. Similar to the fused TimeSformer module, we insert cross-attention into the RoBERTa backbone:
\[\begin{split}\hat{x}^{(k)}_{text}&=\text{SA}(x^{(k- 1)}_{text})\\ x^{(k)}_{text}&=x^{(k-1)}_{text}+\hat{x}^{(k)}_{ text}+\alpha*\text{CA}(\hat{x}^{(k)}_{text},x^{(k)}_{vid})\\ x^{(k)}_{text}&=x^{(k)}_{text}+\text{FFN}(x^{(k)} _{text})\end{split} \tag{3}\]
where SA is the traditional self-attention module. For simplicity, we insert cross-attention into the same number of layers in both backbones. Notably, such a _fusion in the backbone_ strategy is not limited to TimeSformer and RoBERTa, but can also be applied to other transformer-based video [61, 22, 2] and text [16, 81, 112] encoders.
Fusion in the backbone with gated cross-attention has the following advantages: \((i)\) Cross-attention parameters can easily be switched off by setting the gating scalar \(\alpha\) to 0; thus, the model behaves as a dual encoder, which is helpful for scenarios that require "unfused" video and textual features; \((ii)\) Our fusion approach is more lightweight and compute-efficient than adding fusion-specific transformer layers, which is demonstrated in detail in Section 4.5.
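The sketch below illustrates the gated insertion of Eq. (3) for a single text-side layer in PyTorch; it is an illustrative re-implementation, not the released code, and omits layer normalization, dropout, and attention masks for brevity.

```python
import torch
import torch.nn as nn


class GatedCrossAttentionLayer(nn.Module):
    """RoBERTa-style layer whose output attends to video tokens, scaled by a zero-initialized gate."""

    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))
        self.alpha = nn.Parameter(torch.zeros(1))  # gate; alpha = 0 recovers the dual encoder

    def forward(self, text_tokens, video_tokens, fuse=True):
        sa_out, _ = self.self_attn(text_tokens, text_tokens, text_tokens)
        x = text_tokens + sa_out
        if fuse:  # switching fusion off makes the layer purely uni-modal
            ca_out, _ = self.cross_attn(sa_out, video_tokens, video_tokens)
            x = x + self.alpha * ca_out
        return x + self.ffn(x)
```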
### Pre-training Objectives
We use three pre-training objectives: \((1)\) Egocentric noise contrastive estimation (EgoNCE), \((2)\) masked language modeling (MLM), and \((3)\) video-text matching (VTM).
**EgoNCE:** Lin et al. [57] proposed EgoNCE for dual-encoder-based egocentric VLP. It makes two modifications over InfoNCE [71]: \((i)\) Besides the matched video-text samples, all pairs that share at least one noun or one verb are treated as positives. \((ii)\) Every batch of \(N\) video-text samples is augmented with another \(N\) visually similar videos, which are treated as additional negatives. Overall, video-to-text EgoNCE objective, \(\mathcal{L}^{\text{ego}}_{\text{v2t}}\), can be expressed as:
\[\begin{split}\mathcal{L}^{\text{ego}}_{\text{v2t}}=\frac{1}{|\widetilde{\mathcal{B}}|}\sum\limits_{i\in\widetilde{\mathcal{B}}}\log\frac{\sum\limits_{k\in\mathcal{P}_{i}}\exp\left(\frac{\mathbf{v}_{i}^{T}\mathbf{t}_{k}}{\tau}\right)}{\sum\limits_{j\in\mathcal{B}}\left(\exp\left(\frac{\mathbf{v}_{i}^{T}\mathbf{t}_{j}}{\tau}\right)+\exp\left(\frac{\mathbf{v}_{i}^{T}\mathbf{t}_{j^{\prime}}}{\tau}\right)\right)}\end{split} \tag{4}\]
where the \(i^{th}\) video embedding \(v_{i}\) and \(j^{th}\) text embedding \(t_{j}\) are L\({}_{2}\) normalized features, and \(\tau\) is a temperature factor. \(\widetilde{\mathcal{B}}\) is the augmented batch with \(2N\) samples. The numerator sums over the modified positive samples \(\mathcal{P}_{i}\), and the second term in the denominator corresponds to the modified (scene-augmented) negative samples. The text-to-video EgoNCE objective, \(\mathcal{L}^{\text{ego}}_{\text{t2v}}\), can be defined symmetrically. The total EgoNCE loss is: \(\mathcal{L}_{\text{EgoNCE}}=\mathcal{L}^{\text{ego}}_{\text{v2t}}+\mathcal{L}^{\text{ego}}_{\text{t2v}}\).
We compute EgoNCE in a dual-encoder setting. Specifically, we set \(\alpha=0\), and thus, the cross-attention modules are switched off to calculate the EgoNCE loss.
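A simplified sketch of the video-to-text term in Eq. (4) is given below (not the released implementation): the inputs are L2-normalized embeddings of the \(2N\)-sample augmented batch, and `pos_mask[i, k] = 1` whenever narration k shares at least one noun or verb with clip i (the diagonal included).

```python
import torch


def egonce_v2t(video_emb, text_emb, pos_mask, tau=0.05):
    """video_emb, text_emb: (2N, d) L2-normalized; pos_mask: (2N, 2N) float in {0, 1}."""
    sim = video_emb @ text_emb.t() / tau          # pairwise similarities over the augmented batch
    exp_sim = sim.exp()
    positives = (exp_sim * pos_mask).sum(dim=1)   # modified positive samples
    denominator = exp_sim.sum(dim=1)              # originals plus scene-augmented negatives
    return -(positives / denominator).log().mean()
```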
**MLM:** Masked language modeling and video-text matching are proven helpful in fusion-encoder-based VLP literature [16, 127]. For MLM, we randomly mask \(15\%\) text tokens,1 and the loss, \(\mathcal{L}_{\mathrm{MLM}}\), aims to reconstruct the masked tokens based on surrounding words and video patches by minimizing the negative log-likelihood.
Footnote 1: Following BERT, we decompose this \(15\%\) into \(10\%\) random words, \(10\%\) unchanged, and \(80\%\) with a special token [MASK].
**VTM:** For the VTM objective, the model is given a video-text sample, and the output is a binary label \(y\in\{0,1\}\) indicating if the input pair is matched. \(\mathcal{L}_{\mathrm{VTM}}\) is constructed as a binary cross-entropy loss over the predicted scores. Following [5, 18], we sample the global hard-negative video-text pairs using the similarities computed by EgoNCE.
We compute \(\mathcal{L}_{\mathrm{MLM}}\) and \(\mathcal{L}_{\mathrm{VTM}}\) in a fusion-encoder setting. In this case, \(\alpha\neq 0\) and the cross-attention modules are switched on. Overall, our EgoVLPv2 pre-training pipeline can be summarized in the following three steps:
* **EgoNCE** requires unfused video and text features, so we switch off cross-attention (\(\alpha=0\)). Thus, \(\mathcal{L}_{\mathrm{EgoNCE}}\) is computed with EgoVLPv2 acting as a dual encoder.
* **MLM & VTM** requires multi-modal representation. We switch on cross-attention modules and compute \(\mathcal{L}_{\mathrm{MLM}}\)
Figure 4: **EgoVLPv2 can be adapted to various dual- and fusion-encoder-based video-language tasks, ranging from retrieval, video question-answering, and video grounding to query-focused video summarization.**
and \(\mathcal{L}_{\mathrm{VTM}}\) with EgoVLPv2 acting as a fusion encoder.
* For **back-propagation**, the three losses are added, resulting in \(\mathcal{L}_{\mathrm{total}}=(1-\gamma-\delta)\mathcal{L}_{\mathrm{EgoNCE}}+ \gamma\mathcal{L}_{\mathrm{MLM}}+\delta\mathcal{L}_{\mathrm{VTM}}\), and back-propagated into the model end-to-end. \(\gamma\) and \(\delta\) are hyper-parameters that control the contribution of different terms on \(\mathcal{L}_{\mathrm{total}}\). An ablation on different pre-training objectives of EgoVLPv2 is provided in Section 4.5. The pseudo-code for pre-training EgoVLPv2 can be found in the supplementary.
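The official pseudo-code is in the supplementary; the sketch below is only a hypothetical outline of one iteration, where `model.encode`, `model.egonce`, `model.mlm`, and `model.vtm` are assumed helper interfaces standing in for the dual-mode forward passes and loss heads described above.

```python
def training_step(model, batch, gamma, delta, optimizer):
    # Step 1: gate off -> dual encoder; contrastive objective on unfused features.
    video_emb, text_emb = model.encode(batch.video, batch.text, fuse=False)
    loss_egonce = model.egonce(video_emb, text_emb, batch.pos_mask)

    # Step 2: gate on -> fusion encoder; reconstruct the masked narration tokens.
    loss_mlm = model.mlm(batch.video, batch.masked_text, batch.mlm_labels, fuse=True)

    # Step 3: gate stays on; classify matched vs. hard-negative video-narration pairs.
    loss_vtm = model.vtm(batch.video, batch.text, batch.vtm_labels, fuse=True)

    loss = (1 - gamma - delta) * loss_egonce + gamma * loss_mlm + delta * loss_vtm
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```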
### Adaptation to Downstream Tasks
We now describe how we adapt EgoVLPv2 to different downstream tasks as shown in Figure 4.
**Video-Text Retrieval:** We perform retrieval in two settings: \((i)\)_dual encoders:_ we switch off cross-attention and use EgoVLPv2 as a dual encoder, and compute the cosine similarity between video clips and text narrations. \((ii)\)_fusion encoders:_ we switch on cross-attention. The top \(M\) layers of the video and language backbones interact and produce multi-modal representations, which are fed into the pre-trained VTM head to compute matching scores. We also compute an ensemble of the two to further boost the performance, discussed in Section 4.5.
**Video Grounding and Question Answering:** We perform both uni- (video-only) and multi-modal (text-guided) video grounding. We switch off cross-attention for uni-modal grounding and use only the video encoder. We use EgoVLPv2 as a fusion encoder for text-guided grounding and video question answering.
**Query-focused Video Summarization:** The input videos are very long (\(3\)-\(5\) hours) for this task. We first use the unfused \(N-M\) layers2 of our video and text encoders to extract uni-modal features from 5 second clips and the text query. Next, we apply the KTS shot boundary detector [75] to segment the long video. After this, the query and segment-wise clip features are fed into the top \(M\) fused layers of EgoVLPv2 to compute the multi-modal representation. Finally, we learn an additional single-layer transformer to design the interrelation across all \(5\) second long clips in every segment. We present additional details for the query-focused video summarization framework in the supplementary.
Footnote 2: For simplicity, we keep the number of unfused and fused layers the same in the video and text encoder.
## 4 Experiments
### Pre-training & Downstream Datasets
We pre-train EgoVLPv2 on the EgoClip [57] version of Ego4D [27], the largest publicly available egocentric video dataset. EgoClip sources untrimmed egocentric videos from Ego4D and offers filtered video-narration samples with variable-length clip intervals instead of single timestamps of Ego4D. Moreover, EgoClip excludes the videos appearing in the validation and test sets of the Ego4D benchmark [27], resulting in around \(3.8\)M pre-training samples covering over \(2927\) hours of video from \(129\) different scenarios.
We evaluate EgoVLPv2 across multiple benchmarks on five egocentric datasets, summarized in Table 1:
* On Ego4D [27] benchmarks: Multiple-Choice Questions (**EgoMCQ**) is a text-to-video (T \(\rightarrow\) V) retrieval task with five video clips for every query text. Natural Language Query (**EgoNLQ**) is a natural language grounding [28, 25, 87] task that aims to localize a single time interval within a video given a text query. Moment Query (**EgoMQ**) is a video-only temporal action localization [9] task.
* Query-focused video summarization (**QFVS**) [84] aims to generate a concise version of a long (3-5 hours) egocentric video based on a natural language query.
* Video question-answering on **EgoTaskQA**[32] provides four question types (descriptive, predictive, explanatory, and counterfactual) with direct and indirect references, and evaluates the prediction over spatial, temporal, and causal domains of goal-oriented task understanding. Notably, to the best of our knowledge, we are the first to unify QFVS and EgoTaskQA as two downstream tasks of a VLP framework.
* Action Recognition on **CharadesEgo**[86]: a multi-class classification of daily indoor activities, with class names being short natural language phrases like '_Putting something on a shelf:_' Hence, leveraging text representations with class names, we treat this task as a retrieval problem.
\begin{table}
\begin{tabular}{l c|c c|c|c} \hline \hline
**Dataset** & **Task** &
\begin{tabular}{c} **Multi-** \\ **modal** \\ \end{tabular} & **Fusion** & **Metrics (\%)** & **Eval.** \\ \hline \hline \multirow{4}{*}{Ego4D [27]} & MCQ w/ dual & ✓ & ✗ & Inter- \& Intra Acc. & ZS \\ & MCQ w/ fusion & ✓ & ✓ & Inter- \& Intra Acc. & ZS \\ & NLQ & ✓ & ✓ & Recall @ N & HT \\ & MQ & ✗ & – & mAP, Recall @ N & HT \\ \hline QFVS [84] & Video Summ. & ✓ & ✓ & F-1 & HT \\ EgoTaskQA [32] & Video QA & ✓ & ✓ & Mean Acc. & HT, FT \\ CharadesEgo [86] & CLS\({}^{\dagger}\) & ✓ & ✗ & Video-level mAP & ZS, FT \\ EK-100 [15] & MIR w/ dual & ✓ & ✗ & mAP, nDCG & ZS, FT \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Egocentric downstream datasets, metrics, and evaluation protocols.** We evaluate EgoVLPv2 on a wide variety of benchmarks: video-text retrieval (EgoMCQ, CharadesEgo, EK-100), uni-modal and text-guided video grounding (EgoMQ, EgoNLQ), video question answering (EgoTaskQA) and query-focused video summarization (QFVS). The evaluation protocols include zero-shot (ZS), task-specific head-tuning (HT), and end-to-end fine-tuning (FT). \({}^{\dagger}\)CharadesEgo is a multi-class classification problem, but we convert this to a retrieval task. Please find more details in Section 4.1 and in the supplementary.
* Multi-instance retrieval on Epic-Kitchens-100 [15] (**EK-100 MIR**): this is a text-to-video (T \(\rightarrow\) V) and video-to-text (V \(\rightarrow\) T) retrieval task, with a significant semantic overlap between different narrations. Detailed statistics of pre-training and downstream datasets and evaluation metrics are given in the supplementary.
### Evaluation Protocol
We evaluate EgoVLPv2 using three evaluation protocols:
* **Zero-Shot (ZS)**. The pre-trained backbones are directly applied for V \(\leftrightarrow\) T retrieval without fine-tuning on downstream datasets. We perform zero-shot retrieval via: \((i)\)_dual encoders,_ computing the cosine similarity between video clips and textual narrations, and (ii) _fusion encoder_, incorporating the pre-trained VTM head to compute the video-text matching score.
* **Task-specific Head-tune (HT)**. We extract features using the frozen encoder and train task-specific heads3 using the training split of downstream datasets. Footnote 3: VSLNet [119] for EgoNLQ, VSGN [124] for EgoMQ, single-layer transformer encoder [92] for summarization, and linear layers for video QA.
* **Fine-tune (FT)**. We fine-tune the entire pre-trained video-text model end-to-end using the training split of downstream datasets.
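The sketch below illustrates the two zero-shot scoring modes only; it is not the released EgoVLPv2 code. The embedding width, the random features, and the toy `vtm_head` are placeholders standing in for the pre-trained encoders and the actual VTM head.

```python
import torch
import torch.nn.functional as F

d = 256                                                 # assumed embedding width
video_feats = F.normalize(torch.randn(8, d), dim=-1)    # 8 candidate clips (stand-ins)
text_feats = F.normalize(torch.randn(1, d), dim=-1)     # 1 query narration (stand-in)

# (i) dual-encoder scoring: cosine similarity between unimodal embeddings.
dual_scores = text_feats @ video_feats.T                # shape (1, 8)

# (ii) fusion-encoder scoring: a placeholder VTM head maps each joint
# video-text representation to a matching logit.
vtm_head = torch.nn.Linear(2 * d, 1)
pairs = torch.cat([video_feats, text_feats.expand(8, -1)], dim=-1)
fusion_scores = vtm_head(pairs).T                       # shape (1, 8)

best_dual = dual_scores.argmax(dim=-1)                  # retrieved clip, dual mode
best_fusion = fusion_scores.argmax(dim=-1)              # retrieved clip, fusion mode
```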
### Implementation Details
We use TimeSformer-B [6, 4] and RoBERTa-B [59] as our video and language backbones. The video encoder has 12 layers and 12 heads, and is configured with the patch size of \(16\times 16\) and the hidden dimension of \(768\). The spatial attention modules are initialized from a ViT [17]. We resize videos to \(224\times 224\) and sample \(4\) frames per video for pre-training and \(16\) for fine-tuning on downstream tasks. We use RoBERTa-B pre-trained on English Wikipedia and Toronto Book Corpus. For our best model,4 we fuse the top \(6\) layers of the two encoders. We pre-train our model for \(20\) epochs with a batch size of \(256\), using AdamW [62] with a peak learning rate of \(3\)e-\(5\) for the backbones and \(12\)e-\(5\) for the cross-modal parameters. We use linear warmup over the first \(2\) epochs and use linear decay. Pre-training takes five days on \(32\) A\(100\) GPUs. Other necessary pre-training and downstream details are given in the supplementary.
Footnote 4: An ablation on the number of fusion layers is provided in Section 4.5.
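As a rough illustration of the optimization recipe above (and not the actual training script), the following sketch sets up AdamW with separate peak learning rates for backbone and cross-modal parameters and a linear warmup/decay schedule; the two `Linear` modules are placeholders for the real towers.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

# Placeholder parameters standing in for the TimeSformer/RoBERTa backbones and
# the cross-modal (fused attention) blocks.
backbone = torch.nn.Linear(768, 768)
cross_modal = torch.nn.Linear(768, 768)

optimizer = AdamW([
    {"params": backbone.parameters(), "lr": 3e-5},      # backbone peak LR
    {"params": cross_modal.parameters(), "lr": 12e-5},  # cross-modal peak LR
])

epochs, warmup = 20, 2
def schedule(epoch):
    # Linear warmup over the first 2 epochs, then linear decay towards 0 at epoch 20.
    if epoch < warmup:
        return (epoch + 1) / warmup
    return max(0.0, (epochs - epoch) / (epochs - warmup))

scheduler = LambdaLR(optimizer, lr_lambda=schedule)

for epoch in range(epochs):
    # ... one pre-training epoch (batch size 256) would run here ...
    scheduler.step()
```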
### Main Results
We use **boldface** and underline for the best and second-best performing methods in every table and indicate the performance improvements over the state-of-the-art with \(\Delta\).
**Ego4D:** Table 2 and 3 present the performance of EgoVLPv2 on three different Ego4D benchmarks: EgoMCQ, EgoNLQ and EgoMQ. On EgoMCQ, our model achieves \(91.0\%\) inter-video and \(60.9\%\) intra-video accuracy, significantly improving over the baselines. Note that EgoVLPv2 achieves \(1\%\) absolute gain on the challenging intra-video MCQ task over LAViLA, which is trained using \(15\times\) more narrations generated by a pre-trained large language model, GPT-\(2\)[79]. On EgoNLQ, EgoVLPv2 yields an impressive gain of \(2.11\%\) R@\(1\) for IoU = \(0.3\) over EgoVLP. Moreover, using a smaller task-specific head and fewer epochs of head-tuning,
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Method** & **Video-1** & **Video-2** & **Video-3** & **Video-4** & **Average** \\ \hline SeqDPP [26] & 36.59 & 43.67 & 25.26 & 18.15 & 30.92 \\ SH-DPP [83] & 35.67 & 42.72 & 36.51 & 18.62 & 33.38 \\ QC-DPP [84] & 48.68 & 41.66 & 36.51 & 29.96 & 44.19 \\ TPAN [122] & 48.74 & 45.30 & 56.51 & 33.64 & 46.05 \\ CHAN [102] & 49.14 & 46.53 & 58.65 & 33.42 & 46.94 \\ HVN [34] & 51.45 & 47.49 & 61.08 & 35.47 & 48.87 \\ QSAN [101] & 48.52 & 46.64 & 56.93 & 34.25 & 46.59 \\ WHIM [69] & 50.96 & 48.28 & 58.41 & **39.18** & 49.20 \\ IntentVizer [100] & 51.27 & 53.48 & 61.58 & 37.25 & 50.90 \\ \hline EgoVLP\({}^{\dagger}\)[57] & 49.64 & 53.60 & 59.87 & 35.76 & 49.72 \\ EgoVLPv2 & **53.30** & **54.13** & **62.64** & 38.25 & **52.08** \\ \hline \hline \(\Delta_{\text{Omax}}\) - EgoVLP & 3.66 \(\uparrow\) & 0.53 \(\uparrow\) & 2.77 \(\uparrow\) & 2.49 \(\uparrow\) & 2.36 \(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Performance on query-focused video summarization (QFVS).** Existing baselines are trained end-to-end, whereas EgoVLPv2 only learns a tiny head on top of pre-trained encoders. \({}^{\dagger}\)EgoVLP denotes the performance achieved by the officially released checkpoint.
\begin{table}
\begin{tabular}{l|c c c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**id-to-3**} & \multicolumn{2}{c|}{**id-to-5**} & \multicolumn{2}{c|}{**id-to-7**} & \multicolumn{2}{c}{**mAP.(\%) @ IdU**} \\ & R@1 & R@5 & R@1 & R@5 & R@1 & R@5 & 0.1 & 0.3 & 0.5 & Avg. \\ \hline SlowFast [23] & 33.45 & 58.43 & 25.16 & 46.18 & 15.36 & 25.81 & 9.10 & 5.76 & 3.41 & 6.03 \\ Frozen [4] & 40.60 & 63.71 & 25.99 & 43.82 & 17.41 & 26.33 & 15.90 & 10.54 & 6.19 & 10.69 \\ SeqVLPv2 & 40.43 & 65.67 & 30.14 & 51.98 & 29.06 & 29.79 & 16.63 & 11.45 & 6.57 & 11.29 \\ \hline EgoVLPv2 & **41.97** & **68.24** & **31.08** & **54.15** & **20.96** & **31.10** & **17.58** & **11.92** & **6.90** & **12.23** \\ \hline \(\Delta_{\text{Omax}}\) - EgoVLPv2 & 1.54 \(\uparrow\) & 2.57 \(\uparrow\) & 0.94 \(\uparrow\) & 2.17 \(\uparrow\) & 1.90 \(\uparrow\) & 1.33 & 0.95 \(\uparrow\) & 0.47 \(\uparrow\) & 0.33 \(\uparrow\) & 0.84 \(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Performance on EgoMQ’s validation set.** EgoVLPv2 sets a new state-of-the-art across all baselines using VSGN [124] as grounding head.
EgoVLPv2 outperforms existing baselines, which indicates the importance of learning cross-modal information during pre-training.5 On the uni-modal grounding task, EgoMQ, our framework also sets a new state-of-the-art, outperforming EgoVLP by \(1.54\%\) R@\(1\) for IoU = \(0.3\), implying the flexibility of _fusion in the backbone_ over dual and shared encoder-based pre-training.
Footnote 5: Additional details are provided in supplementary.
**QFVS:** We evaluate EgoVLPv2 on query-focused video summarization task. The QFVS dataset contains only \(135\) video-query training samples with long (\(3\)-\(5\) hours) videos, and all existing baselines are trained end-to-end. In contrast, we learn a tiny head (single-layer transformer) on top of the pre-trained encoders. As shown in Table 4, our model persistently attains the state-of-the-art F-1 score across all four videos in this dataset. The pre-trained video-language representation helps EgoVLPv2 to achieve strong performance, whereas the baselines struggle to learn good cross-modal features due to the small training set.
**EgoTaskQA:** Table 5 shows the results on the egocentric video question-answering tasks on the EgoTaskQA dataset. Our model achieves significant gains across various baselines in the fine-tuning regime. Notably, EgoVLPv2 performs consistently well in the challenging _indirect_ split, which demonstrates its ability to solve complicated reference tasks. In the head-tuning regime, we only learn a linear layer on top of frozen encoders, where EgoVLPv2 beats EgoVLP by a strong margin, which proves the efficacy of cross-modal pre-trained representation.
**CharadesEgo:** This is a multi-class action recognition task, with class names as short text phrases. We convert this to a video-to-text (V \(\rightarrow\) T) retrieval problem as in CLIP [78], and perform dual-encoder-based retrieval. As shown in Table 6, EgoVLPv2 obtains a new state-of-the-art in both fine-tuning and zero-shot regimes. Since CharadesEgo videos are significantly different from Ego4D, being captured by crowd-sourced workers using mobile cameras, these results demonstrate the generalizability of EgoVLPv2.
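A minimal sketch of this classification-as-retrieval protocol follows; the class count, embedding width, and random embeddings are illustrative stand-ins for the pre-trained encoders' outputs, and only the scoring logic reflects the procedure described above.

```python
import torch
import torch.nn.functional as F

# Hypothetical pre-computed embeddings: one per class phrase
# (e.g. "Putting something on a shelf") and one per video clip.
num_classes, d = 157, 256
class_text_emb = F.normalize(torch.randn(num_classes, d), dim=-1)
video_emb = F.normalize(torch.randn(4, d), dim=-1)      # 4 example clips

# Zero-shot "classification" as video-to-text retrieval: rank class phrases by
# cosine similarity; the per-class scores feed the video-level mAP metric.
scores = video_emb @ class_text_emb.T                   # (4, num_classes)
top1 = scores.argmax(dim=-1)                            # retrieved class per clip
```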
**EK-100:** Table 6 shows our results on EK-100 MIR. In the fine-tuning regime, EgoVLPv2 achieves noticeable improvements over the supervised approaches (S3D, MME, JPoSE) and VLP methods (EgoVLP, HierVL). In the zero-shot setup, EgoVLPv2 beats EgoVLP and HierVL by \(7.8\%\) mAP and \(4.4\%\) nDCG scores. The consistent performance gains again show the quality of pre-trained encoders.
### Ablation Study
**Fusion in the Backbone:** We compare our fusion module to the conventional practice of using fusion-specific transformer layers, which we implement following ALBEF [46].6 Table 7 shows that the proposed fusion strategy performs
\begin{table}
\begin{tabular}{l c|c|c|c|c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Eval.**} & \multicolumn{2}{c|}{**CharadesEgo**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Eval.**} & \multicolumn{2}{c}{**EK-100 MIR**} \\ & & & mAP & & mAP & nDCG \\ \hline Actor [85] & FT & 20.0 & S3D [103] & FT & 29.2 & 44.7 \\ SSDA [12] & FT & 23.1 & MME [99] & FT & 38.5 & 48.5 \\ Ego-Ego [54] & FT & 30.1 & JPoSE [99] & FT & 44.0 & 53.5 \\ EgoVLPv [57] & FT & 32.1 & EgoVLP [57] & FT & 45.0 & 59.4 \\ HierVL-Avg [3] & FT & 32.6 & HierVL-Avg [3] & FT & 44.9 & 59.8 \\ HierVL-SA [3] & FT & 33.8 & HierVL-SA [3] & FT & 46.7 & 61.1 \\ EgoVLPv2 & FT & **34.1** & _EgoVLPv2_ & FT & **47.3** & **61.9** \\ \hline \hline \multirow{2}{*}{\begin{tabular}{c} \hline \(\Delta_{\text{norm-gapNLP}}\) \\ \(\Delta_{\text{norm-IntVL-SA}}\) \\ \end{tabular} } & FT & 2.0 \(\uparrow\) & \(\Delta_{\text{norm-gapNLP}}\) & FT & 2.3 \(\uparrow\) & 2.5 \(\uparrow\) \\ \(\Delta_{\text{norm-IntVL-SA}}\) & FT & 3.7 \(\downarrow\) & \(\Delta_{\text{norm-IntVL-SA}}\) & FT & 0.6 \(\uparrow\) & 0.8 \(\uparrow\) \\ \hline EgoVLP [57] & ZS & 25.0 & EgoVLP [57] & ZS & 16.6 & 23.1 \\ HierVL-Avg [3] & ZS & 25.2 & HierVL-Avg [3] & ZS & 16.7 & 23.5 \\ EgoVLPv2 & ZS & **26.0** & EgoVLPv2 & ZS & 18.9 & 24.7 \\ \hline \hline \multirow{2}{*}{
\begin{tabular}{c} \(\Delta_{\text{norm-gapNLP}}\) \\ \(\Delta_{\text{norm-gapNLP}}\) \\ \(\Delta_{\text{norm-IntVL-SA}}\) \\ \end{tabular} } & ZS & **26.2** & EgoVLPv2 & ZS & **26.7** & **29.1** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Performance on CharadesEgo and EK-100 MIR. EgoVLPv2 achieves significant gains in fine-tuning and zero-shot settings for both tasks. Results are achieved by dual-encoder-based inference.**
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline \multirow{2}{*}{**Fusion**} & \multirow{2}{*}{**\# Fusion**} & \multirow{2}{*}{**\#F Trainable**} & \multirow{2}{*}{**GMACS per**} & \multirow{2}{*}{**EgoMCQ**} \\ & & & & **instance** & Inter & Intra \\ \hline \multirow{4}{*}{Fusion in the} & 3 & 374.5M & 288.62 & 90.5 & 60.0 \\ & 6 & 381.6M & 300.16 & **91.0** & **60.9** \\ \multirow{2}{*}{Backbone} & 9 & 388.7M & 311.71 & **91.0** & **60.9** \\ & 12 & 395.8M & 323.26 & **91.0** & **60.9** \\ \hline \multirow{4}{*}{Additional} & 3 & 396.9M & 402.88 & 90.5 & 60.3 \\ & 6 & 414.6M & 437.90 & 90.5 & 60.8 \\ \multirow{2}{*}{Layers} & 9 & 432.4M & 472.91 & 90.6 & 60.8 \\ \cline{1-1} & 12 & 450.1M & 507.92 & 90.6 & **60.9** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Ablation study on fusion strategies. Our proposed _fusion in the backbone_ strategy performs slightly better than using fusion-specific transformer layers, but with less parameters and less compute.
\begin{table}
\begin{tabular}{l c|c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Eval.**} & \multicolumn{2}{c|}{**Direct**} & \multicolumn{2}{c}{**Indirect**} \\ & & Open & Binary & All & Open & Binary & All \\ \hline VisualBERT [49] & FT & 24.62 & 68.08 & 37.93 & 21.05 & 57.61 & 37.01 \\ PASC [51] & FT & 26.97 & 65.98 & 38.90 & 15.31 & 57.75 & 32.72 \\ HME [21] & FT & 27.66 & 68.60 & 40.16 & 18.27 & 52.55 & 33.06 \\ HGA [35] & FT & 22.75 & 68.53 & 36.77 & 8.66 & 53.72 & 28.36 \\ HCRN [40] & FT & 30.23 & 69.42 & 42.40 & 27.82 & 59.29 & 41.56 \\ ClipBERT [44] & FT & 27.70 & 67.52 & 39.87 & 11.17 & 40.71 & 24.08 \\ \hline EgoVLPv1 [57] & FT & 31.69 & 71.26 & 42.51 & 27.04 & 55.28 & 38.69 \\ EgoVLPv2 & FT & **35.56** & **75.60** & **46.26** & **29.14** & **59.68** & **42.28** \\ \hline \hline \multirow{2}{*}{\begin{tabular}{c} \(\Delta_{\text{norm-gapNLP}}\) \\ \end{tabular} } & FT & 37.87 & 43.44 & 37.57 & 21.04 & 44.0 & 3.59 \(\uparrow\) \\ \hline EgoVLPv1 [57] & HT & 20.52 & 64.63 & 32.76 & 16.87 & 48.40 & 29.19 \\ EgoVLPv2 & HT & **26.59** & **69.10** & **37.87** & **22.11** & **57.19** & **35.20** \\ \hline \hline \multirow{2}{*}{
\begin{tabular}{c} \(\Delta_{\text{norm-gapNLP}}\) \\ \(\Delta_{\text{norm-gapNLP}}\) \\ \end{tabular} } & HT & 6.07 \(\uparrow\) & 4.47 \(\uparrow\) & 5.11 \(\uparrow\) & 5.24 \(\uparrow\) & 8.79 \(\uparrow\) & 6.01 \(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Performance on EgoTaskQA _direct_ and _indirect_ splits. EgoVLPv2 outperforms
slightly better than stacked fusion layers. For both methods, increasing the number of fusion layers to \(6\) results in a non-trivial performance gain. However, our proposed architecture is significantly more parameter- and compute-efficient. For instance, with \(6\) fusion layers, the proposed architecture contains \(33\)M fewer parameters and requires \(45\%\) less compute, which shows the efficacy of our method.
**Pre-training Objectives:** We ablate different pre-training objectives and evaluate the pre-trained models on EgoMCQ using EgoVLPv2 as a _dual_ encoder, as a _fusion_ encoder, and an ensemble of the two by summing their similarity scores for each video-text pair. As shown in Table 8, removing any pre-training objective leads to a performance drop. Specifically, VTM with hard-negative mining is largely beneficial across all three evaluation strategies. Fusion encoder-based evaluation brings significant improvements over dual-encoders; moreover, as EgoMCQ contains only \(5\) sentences for every video, both evaluation methods offer similar latency. Ensembling the two yields a further \(1-2\%\) performance gain for both inter- and intra-video accuracy metrics.
### Attention Visualization & Error Analysis
In Figure 5, we show that different heads in the cross-modal attention can attend to different semantic regions of the video frames, guided by the narration. We observe that the pre-trained model learns well to recognize a wide variety of objects appearing in egocentric actions, such as indoor furniture, cooking appliances, phones, tablets, car steering, bicycle handles, etc. Such strong cross-modal information learned during pre-training helps EgoVLPv2 in multi-modal downstream tasks. The visualizations in Figure 5 are obtained with \(960\)p video frames, resulting in sequences of \(3601\) tokens for \(16\times 16\) patches. However, vastly hindered objects in cluttered environments, especially in low-light conditions, are occasionally not focused. We show such error cases in the supplementary.
## 5 Conclusion
This work introduces EgoVLPv2, the second generation of egocentric video-language pre-training and a significant improvement over the previous generation [57] by incorporating cross-modal fusion directly into the video and language backbones. Our proposed _fusion in the backbone_ strategy is lightweight, compute-efficient, and allows us to unify various VL tasks in a flexible and efficient manner. We conduct extensive experiments to demonstrate the effectiveness of EgoVLPv2 on a wide range of downstream tasks, consistently achieving state-of-the-art performance. Moreover, we visually demonstrate the effectiveness of the learned cross-attention representation.
Figure 5: **Text-to-video cross-attention from multiple heads in the last layer of EgoVLPv2 with \(16\times 16\) patches. We look at the attention maps of the [CLS] token from the text encoder on input video frames. Different heads, depicted in different colors, focus on different objects or parts. These maps show the strong cross-modal representation learned by EgoVLPv2 during pre-training, which helps to enhance performance on video-language downstream tasks.**
\begin{table}
\begin{tabular}{c c c c c|c c c|c c c} \hline \hline \multicolumn{1}{c}{\multirow{3}{*}{**Pre-training Objectives**}} & \multicolumn{5}{c}{**EgoMCQ (\%)**} \\ \cline{3-11} \multicolumn{1}{c}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{**Dual Enc.**} & \multicolumn{3}{c|}{**Fusion Enc.**} & \multicolumn{3}{c}{**Ensemble**} \\ EgoNCE & MLM & VTM & VTM-Hard & Inter & Intra & Inter & Intra & Inter & Intra \\ \hline ✓ & – & – & – & 89.5 & 52.6 & – & – & – & – \\ ✓ & ✓ & – & – & 89.6 & 52.4 & – & – & – & – \\ ✓ & – & – & ✓ & 89.6 & 53.4 & **90.6** & 59.1 & **91.0** & 60.0 \\ ✓ & ✓ & ✓ & – & 89.5 & 53.6 & 89.1 & 51.5 & 90.2 & 56.8 \\ ✓ & ✓ & – & ✓ & **89.8** & **56.7** & **90.6** & **59.6** & **91.0** & **60.9** \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Ablation study on different pre-training objectives of EgoVLPv2. We evaluate on EgoMCQ using our model either as a dual encoder, as a fusion encoder, or an ensemble of both. Removing any objective leads to a performance drop. The flexibility of the proposed fusion in the backbone module helps us boost retrieval performance using an ensembling strategy.**
## Acknowledgement
The codebase for this work is built on the EgoVLP [57], LaViLA[125], FIBER [18], and VSLNet [119] repository. We would like to thank the respective authors for their contribution, and the Meta AI team for discussions and feedback. Shraman Pramanick and Rama Chellappa were partially supported by a MURI program from the Army Research Office under the grant W911NF17-1-0304.
|
2306.12236 | Critical Multi-Cubic Lattices: A Novel Implication Algebra for Infinite
Systems of Qudit Gates | We introduce a new structure, the critical multi-cubic lattice. Notably the
critical multi-cubic lattice is the first true generalization of the cubic
lattice to higher dimensional spaces. We then introduce the notion of a
homomorphism in the category of critical multi-cubic lattices, compute its
automorphism group, and construct a Hilbert space over which we represent the
group. With this unitary representation, we re-derive the generalized Pauli
matrices common in quantum computation while also defining an algebraic
framework for an infinite system of qudits. We also briefly explore the
critical multi-cubic lattice as a novel implication algebra serving as a
logical framework for qudit gates. | Morrison Turnansky | 2023-06-21T12:53:38Z | http://arxiv.org/abs/2306.12236v2 | # Critical Multi-Cubic Lattices: A Novel Implication Algebra for Infinite Systems of Qudit Gates
###### Abstract
We introduce a new structure, the critical multi-cubic lattice. Notably the critical multi-cubic lattice is the first true generalization of the cubic lattice to higher dimensional spaces. We then introduce the notion of a homomorphism in the category of critical multi-cubic lattices, compute its automorphism group, and construct a Hilbert space over which we represent the group. With this unitary representation, we re-derive the generalized Pauli matrices common in quantum computation while also defining an algebraic framework for an infinite system of qudits. We also briefly explore the critical multi-cubic lattice as a novel implication algebra serving as a logical framework for qudit gates.
Keywords and Phrases: Hilbert Lattice, Infinite Tensor Product, Symmetry Group Representation, Propositional Logic
MSC Subject Classifications: 06B15, 47A80, 46L40, 46L60, 03G12
## Acknowledgements
I would like to thank Professor B. Hayes for the many discussions about this paper and the thesis on which it is based. Also, I am grateful to Dr. J. S. Oliveira for introducing me to the topic of multi-cubic lattices.
## 1 Introduction
The multi-cubic lattice [1] has long been seen as a successor to the cubic lattice [2] because it maintains many of the qualities of the cubic lattice. However, the multi-cubic lattice is not really a generalization as there is no set of parameters such that a multi-cubic lattice is a cubic lattice. In section 2), we introduce a proper generalization of the cubic lattice, namely the critical multi-cubic lattice. Our intuitive inspiration for such a construction is as follows. Just as the cubic lattice can be viewed as the lattice of the faces of an \(n\) dimensional cube with an operator, \(\Delta\), representing antipodal symmetry, the critical multi-cubic lattice can be viewed as the lattice of the faces of an \(n\) dimensional square grid with operators preserving translational symmetry. As a result of this formulation, we are able to generalize the results of [3]. The cubic lattice and its automorphism group were seen to be similar to the spin \(\frac{1}{2}\) system of an arbitrary cardinal of qubits, so it is reasonable to consider higher order spin systems as we generalize the results of the cubic lattice.
Section 3) begins by defining the morphisms in the category of critical multi-cubic lattices. Just as the cubic lattice carries the antipodal symmetry of a cube, an action of \(Aut(\mathbb{Z}_{3})\cong\mathbb{Z}_{2}\), the critical multi-cubic lattice carries the analogous symmetry enforced by \(Aut(\mathbb{Z}_{2k+1})\).
**Main Result 1** (Theorem 3.9).: _Let \(M\) be an \(|I|\)-critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), and \(\wr\) denote an unrestricted wreath product. We have that \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\cong C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})) \wr S_{I}\). Let \(Aut(\mathbb{Z}_{2k+1})\) be generated by \(\{\sigma_{i}\}_{i=1}^{k}\), then \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))=\cap_{i=1}^{k}C_{S_{2k}}(\sigma_{i})\), and \(C_{S_{2k}}(\sigma_{i})\cong\Pi_{j=1}^{l_{i}}(\mathbb{Z}_{j_{i}}\wr S_{N_{j_{i }}})\)._
We highlight that this result reduces to the case of the cubic lattice when \(2k+1=3\), and the group reduces to an infinite version of the Coxeter group \(B_{n}\cong\mathbb{Z}_{2}\wr S_{n}\). As \(C_{S_{2k}}(Aut_{2k+1})\) is cyclic if and only if \(2k+1\) is prime, we are able to simplify our results significantly for the prime case.
**Main Result 2** (Corollary 3.14).: _If \(2k+1\) is prime, \(\mathbb{Z}_{2k}\wr S_{I}\cong Aut(M)\cong Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\)._
In [3], we embedded a cubic lattice into a specifically constructed Hilbert space, namely a potentially infinite tensor product of \(\mathbb{C}^{2}\). Section 4) answers the question: are there other similar lattice embeddings into a Hilbert space?
**Main Result 3** (Theorem 4.1).: _Let \(H\) be Hilbert space constructed as an infinite tensor product for vector spaces of dimension \(2k\), \(k\in\mathbb{N}\) over an index set \(I\). For the Hilbert lattice \(HL\), there exists a critical multi-cubic lattice \(M\) such that \(M\subseteq HL\), and the atoms of \(M\) are projections onto subspaces forming an orthonormal basis of \(H\)._
It is worth noting that, to our knowledge, no analytic structure has previously been attempted on the multi-cubic lattice, much less something as strong as an embedding into a Hilbert space. As is standard in quantum logic, we now have projections to serve as propositions. The remainder of Section 4) investigates this.
**Main Result 4** (Remark 4.14).: _A critical multi-cubic lattice \(M(\wedge,\vee,\Delta,\rightarrow)\) is an implication algebra, with a weak form of complementation and propositions faithfully embedded as projections onto a Hilbert space._
In section 5), we consider unitary actions on the critical multi-cubic lattice. With the proper representation, the automorphisms of the critical multi-cubic lattice can be seen as a set of well-known quantum gates acting on an infinite set of qudits, which we demonstrate with the following example.
**Main Result 5** (Example 5.11).: _As noted, if \(M\) is a \(1\) critical multi-cubic lattice over \(\mathbb{Z}_{3}\), then \(U_{2k}\) is the Hadamard matrix up to a normalization constant. If \(M\) is a \(1\) critical multi-cubic lattice over \(\mathbb{Z}_{5}\), then_
\[U_{2k}=\frac{1}{2}\begin{bmatrix}1&1&1&1\\ 1&i&-1&-i\\ 1&-1&1&-1\\ 1&-i&-1&i\end{bmatrix},X_{2k}=\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\end{bmatrix}\text{, and }D_{2k}=\begin{bmatrix}1&0&0&0\\ 0&i&0&0\\ 0&0&-1&0\\ 0&0&0&-i\end{bmatrix}.\]
_In general up to a normalization constant, \(\left[U_{2k}\right]_{ij}=\omega_{j}^{i-1}\) where \(\omega_{j}\) is \(jth\) element of the \(2k\) roots of unity with counterclockwise ordering starting at 1. We recognize \(U_{2k}\) as the quantum Fourier transform, \(X_{2k}\) as the shift matrix, and \(D_{2k}\) as the clock matrix._
Originally introduced by [4], it is well known that these matrices form the framework of quantum mechanical dynamics in finite dimensions. Furthermore, \(X_{2k}\) and \(D_{2k}\) are known as Generalized Pauli matrices, which are a representation of the respective Heisenberg-Weyl group [5].
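A short numerical check of Example 5.11 (not part of the original derivation) is sketched below: it builds \(U_{2k}\), \(X_{2k}\), and \(D_{2k}\) for a few even dimensions with the orderings used above, and verifies the Weyl commutation relation and that the Fourier matrix diagonalizes the shift.

```python
import numpy as np

def clock_shift_fourier(d):
    """U (Fourier), X (shift), D (clock) of dimension d, with the orderings of
    Example 5.11: X e_j = e_{j-1} and D = diag(1, w, ..., w^{d-1})."""
    w = np.exp(2j * np.pi / d)
    U = np.array([[w ** (i * j) for j in range(d)] for i in range(d)]) / np.sqrt(d)
    X = np.roll(np.eye(d), -1, axis=0)
    D = np.diag([w ** j for j in range(d)])
    return U, X, D, w

for d in (2, 4, 6):            # 2k for Z_3, Z_5, Z_7; d = 2 recovers the Hadamard matrix
    U, X, D, w = clock_shift_fourier(d)
    assert np.allclose(U @ U.conj().T, np.eye(d))      # U is unitary
    assert np.allclose(X @ D, w * (D @ X))             # Weyl commutation relation
    assert np.allclose(U @ X @ U.conj().T, D.conj())   # the Fourier matrix diagonalizes the shift
```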
As further evidence of the utility of this generalization, we are able to generalize the fact that the Pauli matrices form an orthonormal basis of \(M_{2}(\mathbb{C})\) to infinite systems of qudits.
**Main Result 6** (Theorem 5.16).: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), \(C\) be the coatoms of \(M\), and \(H\) be constructed as in Theorem 4.1. Then \(B(H)=W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C},\{p_{c}\}_{c\in C})=W^{*}(\{\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\}_{i\in I},\{p_{c}\}_{c\in C})\) if and only if \(2k+1\) is prime._
## 2 Defining the Critical Multi-Cubic Lattice and Foundational Results
For the sake of completeness, we first introduce the multi-cubic lattice as it exists in the literature. We then proceed to define the critical multi-cubic lattice as a quotient of the multi-cubic lattice in Definition 2.6. Lastly, we conclude with some general properties, such as atomisticity (Proposition 2.12) and coatomicity (Proposition 2.13), that will be used in later sections and validate the utility of our newly introduced structure.
**Definition 2.1**.: _Let \(\Omega=\{1,2,\ldots,n\}\). For each \(i\in\Omega\), let \(\overline{M}_{i}=\{-n_{i},\ldots,0,\ldots,n_{i}\}\) be a finite \(\mathbb{Z}\)-module of odd size. We define \(\mathscr{V}=\Pi_{i=1}^{n}\overline{M}_{i}\). Elements of \(\mathscr{V}\) will be denoted by \(\vec{v}\) with the \(i^{th}\) component \((\vec{v})_{i}\). Note also that \(\overline{M}_{i}\) being of odd size is equivalent to the property \(2m=0\) implies \(m=0\). [1]_
We have now described the set \(\mathscr{V}\) over which our lattice will be defined. We begin with defining the poset on \(\mathscr{V}\).
**Definition 2.2**.: _For \(A\subseteq\Omega\) define \(X_{A}=\langle e_{i}|i\in A\rangle\), the submodule generated by the standard basis indexed by the set \(A\). We now have a poset \(P=(\{\vec{a}+X_{A}|\vec{a}\in\mathscr{V},A\subseteq\Omega\},\subseteq)\), henceforth \(P\) will be known as a multi-cube. [1]_
To be clear, when we say the ordering is determined by \(\subseteq\), we mean as the poset of the submodule, \(\vec{a}+X_{A}\leq\vec{b}+X_{B}\) if and only if \(A\subseteq B\) and \(\vec{b}-\vec{a}\in X_{B}\).
We now make \(P\) into a join lattice and meet semi lattice. We define the partial meet as follows:
**Definition 2.3**.: _Let \(P\) be a multi-cube and define \((\vec{a}+X_{A})\wedge(\vec{b}+X_{B})=\vec{c}+X_{A\cap B}\) if there exists \(\vec{c}\in(\vec{a}+X_{A})\cap(\vec{b}+X_{B})\) and undefined otherwise. Now let us define \(\delta_{\vec{b}\neq\vec{a}}=\{i\in\Omega|b_{i}\neq a_{i}\}\); this is analogous to a support function as \(\delta_{\vec{a}\neq\vec{0}}=\{i\in\Omega|a_{i}\neq 0\}\), precisely the non-zero coordinates of \(\vec{a}\). We can now turn \(P\) into a join lattice by defining join as \((\vec{a}+X_{A})\vee(\vec{b}+X_{B})=\vec{a}+X_{A\cup B\cup\delta_{\vec{b}\neq\vec{a}}}\). For a given indexing set, \(I\), and odd \(k\in\mathbb{N}\), we say that \(\overline{M}\) is an \(I\)-multi-cubic lattice over \(\mathbb{Z}_{k}\) when \(\overline{M}\) is the lattice constructed from the multi-cube \(P\) for \(\mathscr{V}=\Pi_{i\in I}\mathbb{Z}_{k}\). [1]_
Hence \(\overline{M}\), when considered as a lattice, is complete under the join operation but not under meet. In order to move to a critical multi-cubic lattice, we first need to define a notion of a critical element.
**Definition 2.4**.: _Let \(\overline{M}\) be a multi-cubic lattice. A vector \(\vec{x}\in\vec{a}+X_{A}\) is critical if \(x_{i}\neq 0\) implies that \(e_{i}\notin X_{A}\). Let the critical elements of \(\overline{M}\) be denoted by \(\Gamma(\overline{M})\)._
**Proposition 2.5**.: _There exists a well defined function \(\Gamma:\overline{M}\to\Gamma(\overline{M})\) defined by \(\vec{x}\mapsto\inf\{m\in\Gamma(\overline{M}):\vec{x}\leq m\}\)[1]._
In order to have the multi-cubic lattice be a true generalization of the cubic lattice, we will need to make some slight modifications. We generalize the notion of a signed set from a three valued set \(\{-1,X,1\}\) to an \(2n+1\) valued set \(\{-n,-n-1,\ldots,-1,X,1,2,\ldots n-1,n\}\). With the mild change of definition, the \(0\) of the original multi-cubic lattice is now referred to as \(X\). The importance of this will become clear as we move from a Cartesian product of \(\mathbb{Z}\) modules to a tensor product of \(\mathbb{Z}\) modules. The algebraic structure in terms of scalar multiplication, addition, and multiplication will be of course different, but more importantly consistent with an embedding into a von Neumann algebra of similar structure to [3].
With the above discussion, we remove the ambiguity of non-critical elements.
**Definition 2.6**.: _We define a critical multi-cubic lattice, \(M\), as a multi-cubic lattice, \(\overline{M}\), modulo the equivalence relation where for any \(m,n\in\overline{M}\), we have \(m\sim n\) if and only if \(\Gamma(m)-\Gamma(n)\in X_{\sigma(m)}\) and \(X_{\sigma(m)}=X_{\sigma(n)}\). From this point forward, we also allow a critical multi-cubic lattice to be a potentially infinite direct product of \(\mathbb{Z}\) algebras. If \(M\) is a critical multi-cubic lattice indexed by a set \(I\) over a ring \(\mathbb{Z}_{2k+1}\), we say that \(M\) is an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\)._
For a critical multi-cubic lattice, \(M\), if we let \(n=\Gamma(m)+X_{\sigma(m)}\), we see that with this new notion of equality, all elements are critical. In addition, we have the property that for \(m,n\in M\), \(m\neq n\), then \(\Gamma(m)-\Gamma(n)\neq 0\).
One can view a critical multi-cubic lattice as a mixture between a direct product of finite \(\mathbb{Z}\) algebras, and the lattice of submodules defined by the closure of \(\{\{\mathbb{Z}e_{i}\}_{i\in I},0\}\) under meets and joins. By reducing to the equivalence class, we lose addition of the product ring, preserve to some degree both ring multiplication and \(\mathbb{Z}\) scalar multiplication, and gain a structure of non-zero valued outcomes with an additional indeterminate value seen in the cubic lattice.
Before going forward, we show that our preserved operations are well defined. The essential concept is that two elements of a multi-cubic lattice map to the same equivalence class in the critical multi-cubic lattice if their values on the indices which are neither \(0\) nor \(X\) are equal.
**Proposition 2.7**.: _Let \(\overline{M}\) be a multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). Then the operations of scalar multiplication by units in the base ring and ring multiplication by units are independent of equivalence class representative in the corresponding critical multi-cubic lattice \(M\)._
Proof.: Let \(P:\overline{M}\to M\) be the natural projection map, and \(m,n\in\overline{M}\) such that \(P(m)=P(n)\). Since \(P(m)=P(n)\), \(m_{i}=n_{i}\) for all \(m_{i},n_{i}\neq 0\). For all \(m_{i}=0\), \(P_{i}(m_{i})=X\), and similarly for all \(n_{j}=0\), \(P_{j}(n_{j})=X\). As we assumed that all non-zero entries of \(m\), \(n\) are identical, we now have that \(P(m)\) and \(P(n)\) have identical nonzero entries and all other values as \(X\).
For nonzero divisor scalar multiplication, \(c\in Aut(\mathbb{Z}_{2k+1})\). We have \(c\cdot m=c\cdot\Pi_{i\in I-\sigma(m)}m_{i}+X_{\sigma(m)}=\Pi_{i\in I-\sigma(m)}cm_{i}+X_{\sigma(m)}\), \(m_{i}\in\mathbb{Z}_{2k+1}\), and \(c\cdot n=c\cdot\Pi_{i\in I-\sigma(n)}n_{i}+X_{\sigma(n)}=\Pi_{i\in I-\sigma(n)}cn_{i}+X_{\sigma(n)}\), \(n_{i}\in\mathbb{Z}_{2k+1}\). Then \(cm_{i}=cn_{i}\) for all \(m_{i},n_{i}\neq 0\), so \(cP(m)=P(cm)=P(cn)=cP(n)\) as nonzero divisor scalar multiplication does not change which indices had \(0\) values.
Similarly we show multiplication of units is well defined. Let \(\mu\in\overline{M}\), and further all nonzero induces of \(m\) and \(\mu\) are units in \(\mathbb{Z}_{2k+1}\). As \(m_{i}=n_{i}\) for all nonzero entries, we have \(m_{i}\mu_{i}+X_{\sigma(m)\cup\sigma(\mu)}=n_{i}\mu_{i}+X_{\sigma(n)\cup\sigma( \mu)}\). Then \(P(m\mu)=P(n\mu)\) as they agree on all nonzero entries.
The atoms and coatoms will play a significant role in our results, so we offer some intuition.
**Example 2.8**.: _Let \(M\) be a \(2\) critical multi-cubic lattice over \(\mathbb{Z}_{5}\). The atoms of \(M\) are of the form \((a,b)\) where \(a,b\in\{-2,-1,1,2\}\). This generalizes the two dimensional cubic lattice case, where atoms are of the form \((A^{-},A^{+})\), \(|A^{-}\cup A^{+}|=|S|=2\). We assign each index in \(S\) to either the set \(A^{-}\) or the set \(A^{+}\). We can equivalently assign each index in \(S\) to either the value \(-1\) or the value \(1\), and see that this is the reduction to the \(2\) critical multi-cubic lattice over \(\mathbb{Z}_{3}\). In general, an \(|I|\) critical multi-cubic lattice has \((2k)^{|I|}\) atoms of the form \(\Pi_{i\in I}a_{i}\), \(a_{i}\in\mathbb{Z}_{2k+1}-\{0\}\)._
**Example 2.9**.: _Let \(M\) be a \(2\) critical multi-cubic lattice over \(\mathbb{Z}_{5}\). The coatoms of \(M\) are of the form \((a,b)\) where exactly one of \(a,b\) lies in \(\{-2,-1,1,2\}\) and the other is assigned to \(X\). This generalizes the two dimensional cubic lattice case, where coatoms are of the form \((A^{-},A^{+})\), \(|A^{-}\cup A^{+}|=1\). We assign exactly one index in \(S\) to either the set \(A^{-}\) or the set \(A^{+}\). We can equivalently assign exactly one index in \(S\) to either the value \(-1\) or the value \(1\), and see that this is the reduction to the \(2\) critical multi-cubic lattice over \(\mathbb{Z}_{3}\). In general, an \(|I|\) critical multi-cubic lattice has \(2k|I|\) coatoms of the form \(\Pi_{i\in I}V_{i}\), where \(V_{i}=X\) for all but exactly one \(i\in I\) and \(V_{i}\in\mathbb{Z}_{2k+1}-\{0\}\) otherwise. One may find it helpful to think of X as the submodule \(\mathbb{Z}_{2k+1}\) on each index._
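The counts in Examples 2.8 and 2.9 can be checked by direct enumeration. The sketch below is an illustration only (not part of the formal development); it encodes an element of the \(2\) critical multi-cubic lattice over \(\mathbb{Z}_{5}\) as a pair of entries from \(\{-2,-1,1,2\}\) or the symbol X.

```python
from itertools import product

values = [-2, -1, 1, 2]            # the nonzero residues of Z_5, written symmetrically
X = 'X'                            # the free (indeterminate) value
n_indices = 2

# Atoms fix a nonzero value at every index.
atoms = list(product(values, repeat=n_indices))
# Coatoms fix a nonzero value at exactly one index and leave the rest free.
coatoms = [tuple(v if j == i else X for j in range(n_indices))
           for i in range(n_indices) for v in values]

assert len(atoms) == (2 * 2) ** n_indices     # (2k)^{|I|} = 4^2 = 16
assert len(coatoms) == (2 * 2) * n_indices    # 2k|I| = 4 * 2 = 8
```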
The fact that the atoms and coatoms in the \(2\) critical multi-cubic lattice of \(\mathbb{Z}_{3}\) are the same as the \(2\) dimensional cubic lattice is not a coincidence. We show that the case of the cubic lattice is just a specific case of the critical multi-cubic lattice with an automorphism.
**Theorem 2.10**.: _Let \(M\) be a \(|S|\)-critical multi-cubic lattice over \(\mathbb{Z}_{3}\). Then M is a cubic lattice._
Proof.: We need only show that \(M\cong L(S)\) as lattices. For each \(m\in M\), we have that \(m=\Pi_{i\in I-\sigma(m)}a_{i}+X_{\sigma(m)}\), \(a_{i}\in\mathbb{Z}_{3}^{*}=\{-1,1\}\), is mapped to an element \(a=(A^{+},A^{-})\in L(S)\) defined by \(i\in A^{+}\) if \(a_{i}=1\) and \(i\in A^{-}\) if \(a_{i}=-1\), so that \(A^{+}\cap A^{-}=\emptyset\). We denote this mapping by \(f:M\to L(S)\), and one can see that f is a bijection.
It remains to show that we have an order homomorphism. If \(m\leq n\), \(f(m)=(A^{+},A^{-})\), and \(f(n)=(B^{+},B^{-})\), then \(m-n\in X_{\sigma(n)}\), which implies that \(B^{+}\subseteq A^{+}\) and \(B^{-}\subseteq A^{-}\). Therefore, \(f(n)=(B^{+},B^{-})\subseteq(A^{+},A^{-})=f(m)\). As order homomorphisms induce lattice homomorphisms, we have that \(f\) is also a lattice homomorphism.
We have shown many similarities between cubic lattices and multi-cubic lattices, we now highlight that they are in fact a weaker object as general critical multi-cubic lattices do not meet the axiomatic definition of cubic lattices.
**Proposition 2.11**.: _Let \(M\) be a critical multi-cubic lattices over \(\mathbb{Z}_{2k+1}\), \(k\geq 2\). Then \(M\) does not meet axiom 2 of Definition [3, Definition 1.1.2]._
Proof.: Let \(\phi\) be any order automorphism of \(M\). Then for any coatoms \(a,b\in M\) such that \(a\wedge b=0\) implies \(a,b\in\{a_{j}e_{i}:a_{j}\in\mathbb{Z}_{2k+1}\}\) for some fixed \(i\in I\). Since \(\phi\) is an order automorphism, \(\phi(b)\) is a coatom as well, and \(a\vee\phi(b)<1\) implies \(\phi(b)=a\) because the join of any two distinct coatoms is equal to \(1\). Then for a fixed a, \(\phi(b)=a\) for all \(b\in\mathbb{Z}_{2k+1}-\{a,0\}\), so \(\phi\) is not injective.
In summation, the failure of a general multi-cubic lattice comes down to the fact that \(|\mathbb{Z}_{2k+1}-\{a,0\}|>1\) for \(k\geq 2\). However, other axiomatic properties of the cubic lattice hold.
**Proposition 2.12**.: _Let \(M\) be a critical multi-cubic lattice. Then M is a atomistic._
Proof.: Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), and \(\Pi_{i\in I-\sigma(m)}a_{i}+X_{\sigma(m)}=m\in M\), \(a_{i}\in\mathbb{Z}_{2k+1}-\{0\}\). Then \(m=\vee_{j\in\sigma(m)}m_{j}\), where \(m_{j_{i}}=m_{i}\) for all \(i\in I-\sigma(m)\), \(m_{j_{j}}=-1\), and \(m_{j_{i}}=1\) for all \(i\in\sigma(m)-\{j\}\). We have that each \(m_{j}\) is an atom of \(M\), so the result follows.
**Proposition 2.13**.: _Let \(M\) be a critical multi-cubic lattice. Then M is a coatomistic._
Proof.: As \(M\) is a critical multi-cubic lattice, for all \(m\in M\), \(m_{i}\in\mathbb{Z}_{2k+1}-\{0\}\) or \(m_{i}=\mathbb{Z}_{2k+1}\). Therefore, for any \(m\in M\), we have that \(m=\Pi_{i\in I-\sigma(m)}a_{i}+X_{\sigma(m)}\) where \(a_{i}\in\mathbb{Z}_{2k+1}-\{0\}\), so \(m=\wedge_{i\in I-\sigma(m)}c_{i}(a_{i})\) where \(c_{i}(a_{i})\) denotes the coatom whose \(i\)th index is equal to \(a_{i}\).
## 3 Automorphisms of the Critical Multi-Cubic Lattice
As is often the case with the introduction of a new structure, one often asks what are the important properties that should be preserved by homomorphisms? This leads to another natural question: given a characterization of homomorphisms, can we classify the respective automorphism group up to isomorphism? These two basic questions will be the focus of this section, and they will allow us to study their unitary representations in later sections.
Informally, a critical multi-cubic homomorphism should preserve any of its well defined operations of which there are four, namely \(\wedge\), \(\vee\), scalar multiplication by units of the module, and multiplication of units in the ring. Since order homomorphisms are lattice homomorphisms, and our lattice is coatomistic, it is necessary that a critical multi-cubic lattice homomorphism maps coatoms to coatoms. In addition, any homomorphism of a critical multi-cubic lattice should also preserve scalar and ring multiplication. We formalize the above in the following definition.
**Definition 3.1**.: _Let \(k\in\mathbb{Z}\), \(I_{M},I_{N}\) be indexing sets, and \(M\) be an \(|I_{M}|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) with coatoms \(C_{M}\), and \(N\) be an \(|I_{N}|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) with coatoms \(C_{N}\). A map \(\phi:M\to N\) is a critical multi-cubic homomorphism if \(\phi\) is a scalar multiplication homomorphism over the units of the base ring such that for all \(c\in C_{M}\) either \(\phi(c)\in C_{N}\) or \(\phi(c)=0\). In particular if \(N=M\) and \(\phi\) is a \(\mathbb{Z}\) module automorphism, then \(\phi\in Per(C_{M})\)._
Sufficiency of the conditions follows as we are guaranteed to keep the \(\mathbb{Z}\) linear structure, and as a function of coatoms between coatomistic lattices, we have an order homomorphism, which is necessarily a lattice homomorphism as well. In fact, ring multiplication is preserved by any permutation of indices, so we do not include in the definition. We will show that this notion is a generalization of a cubic homomorphism.
**Definition 3.2**.: _Let \(k\in\mathbb{N}\) and \(M\) be a critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). We denote \(Aut(M)\) as the automorphism group of \(M\)._
We show by example that the homomorphism conditions are not equivalent.
**Example 3.3**.: _Let \(M\) be a 1 critical multi-cubic lattice over \(\mathbb{Z}_{3}\) i.e. a cubic lattice. Now we demonstrate that there are \(\mathbb{Z}\) module automorphisms that are not order automorphisms._
\[A=\begin{bmatrix}1&1\\ 0&1\end{bmatrix}.\]
\(A[0,1]^{t}=[1,1]^{t}\notin C_{M}\)_, so \(A\) is a \(\mathbb{Z}\) linear transformation that is not even a permutation of coatoms and therefore not an order homomorphism._
_We will show that there are order automorphisms that are not \(\mathbb{Z}\) module automorphisms. Let \(N\) be a 2 critical multi-cubic lattice over \(\mathbb{Z}_{3}\). The atoms of \(N\) are the standard basis. Let \(P_{ij}\) denote the projection onto the sum of standard basis vectors \(e_{i}\), \(e_{j}\), so that \(C_{N}=\{P_{12},P_{13},P_{24},P_{34}\}\), and with this ordering let \(\sigma=(1234)\in S_{4}\cong Per(C_{N})\). We claim that \(\sigma\) is not representable as an invertible linear transformation. Recall that permutations act on the lattice of projections by inner automorphism. Then \(\sigma(P_{12})\sigma^{-1}=P_{13}\), and \(\sigma(P_{34})\sigma^{-1}=P_{12}\), but \(\sigma(P_{12}+P_{34})\sigma^{-1}=\sigma I\sigma^{-1}=I\neq P_{13}+P_{12}=\sigma P_{12}\sigma^{-1}+\sigma P_{34}\sigma^{-1}\)._
With this view in mind we have a new and equivalent notion of the poset of a critical multi-cubic lattice.
**Definition 3.4**.: _Let \(\Pi_{i}\) be the coordinate-wise projection onto the \(i\)th index and \(\Pi_{J}\) be the projection onto the \(J\subseteq I\) coordinates. Implicitly, if \(m=\Gamma(m)+X_{\sigma(m)}\), we define \(\Pi_{J}\) as \(\Pi_{J}^{m}\), \(J^{m}\subseteq I-\sigma(m)\) where \(J^{m}=J-\sigma(m)\)._
**Proposition 3.5**.: _Let \(M\) be a critical multi-cubic lattice and \(m\), \(n\in M\) such that \(m\leq n\). Then there exists \(\Pi_{J}\), \(J\subseteq I-\sigma(n)\) such that \(\Pi_{J}(m)=n\)._
Proof.: Recall that \(m\leq n\), then \(m-n\in X_{\sigma(n)}\), so let \(J=\sigma(n)-\sigma(m)\).
Because we fix a generating set of the \(\mathbb{Z}\) algebra, we can view critical multi-cubic homomorphisms as an automorphism composed with a projection as defined above. With the importance of the critical multi-cubic automorphisms now highlighted, we proceed to classify them as a generalization of [6].
**Definition 3.6**.: _Let \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\) be the permutations of coatoms of M that commute with the action of \(Aut(\mathbb{Z}_{2k+1})\) defined by \(c\mapsto ac\) for \(c\in C_{M}\) and \(a\in Aut(\mathbb{Z}_{2k+1})\)._
**Lemma 3.7**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). Then \(Aut(M)\cong Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\)._
Proof.: By definition, we have an injective group homomorphism \(i:Aut(M)\to Per(C_{M})\) where \(i\) is the inclusion map. We want to show the range of \(i\) is contained in \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\). Any \(\phi\in Aut(M)\) is a \(\mathbb{Z}\) module homomorphism of \(\mathbb{Z}_{2k+1}^{I}\), so we have that for any \(a\in Aut(\mathbb{Z}_{2k+1})\), \(a\) induces an automorphism on \(\mathbb{Z}_{2k+1}\) by multiplication. Now by the property of module homomorphisms over commutative rings for any \(m\in M\), \(\phi(a)\circ\phi(m)=a\phi(m)=\phi(am)=\phi(ma)=\phi(m)\circ\phi(a)\). Thus, \(Aut(M)\leq Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\).
For the reverse direction, we show that any \(\psi\in Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\) defines a critical multi-cubic automorphism. Firstly, it is a permutation of coatoms of \(M\) by definition, and thus bijective. Secondly, for the linearity condition, let \(\psi_{a}\) be multiplication by \(a\in Aut(\mathbb{Z}_{2k+1})\), and \(m\in M\). Then \(a\psi(m)=\psi_{a}\circ\psi(m)=\psi\circ\psi_{a}(m)=\psi(am)\)
Unlike the case of the cubic lattice, the permutations that commute with \(Aut(\mathbb{Z}_{2k+1})\) are in general a larger group than just the automorphism group, so we must consider some additional factors. As we will show, it is for this reason that the automorphism groups of the cubic lattice and critical multi-cubic lattice become more similar when \(2k+1\) is prime.
**Definition 3.8**.: _As \(Aut(\mathbb{Z}_{2k+1})\) fixes the identity, by relabeling, we can consider the action on \(\{a:1\leq a\leq 2k\}\) by left multiplication. With this action, let \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) denote the centralizer of \(Aut(\mathbb{Z}_{2k+1})\) in \(S_{2k}\)._
**Theorem 3.9**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), and \(\wr\) denote an unrestricted wreath product. Then \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\cong C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\wr S_{I}\). Let \(Aut(\mathbb{Z}_{2k+1})\) be generated by \(\{\sigma_{i}\}_{i=1}^{k}\), then \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))=\cap_{i=1}^{k}C_{S_{2k}}(\sigma_{i})\), and \(C_{S_{2k}}(\sigma_{i})\cong\Pi_{j=1}^{l_{i}}(\mathbb{Z}_{j_{i}}\wr S_{N_{j_{i}}})\)._
Proof.: Any \(\phi\in Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\) is determined by the image of its coatoms. Note that for any two distinct coatoms their meet is empty if and only if they are of the form \(ae_{i}\) and \(be_{i}\) for distinct \(a,b\in\mathbb{Z}_{2k+1}-\{0\}\) and some \(i\in S\). By Lemma 3.7, \(\phi\) defines a critical multi-cubic automorphism, so we have that for any \(i\in S\) and distinct \(a\), \(b\in\mathbb{Z}_{2k+1}-\{0\}\), \(\emptyset=\phi(ae_{i}\wedge be_{i})=\phi(ae_{i})\wedge\phi(be_{i})\). Hence, for all \(\alpha\in\mathbb{Z}_{2k+1}-\{0\}\), \(\phi(\alpha e_{i})=\beta_{\alpha}e_{j}\) for some \(\beta_{\alpha}\in\mathbb{Z}_{2k+1}-\{0\}\) and a fixed \(j\in S\).
We now need only consider \(\phi\) for each individual index. \(Aut(\mathbb{Z}_{2k+1})\) acts on the coatoms of fixed index, \(C_{i}=\{a\in\mathbb{Z}_{2k+1}-\{0\}\}\) by left multiplication, so by relabeling, we consider the action on \(\{a:1\leq a\leq 2k\}\). Let \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{i})\) denote the permutations of \(C_{i}\) commuting with the action of \(Aut(\mathbb{Z}_{2k+1})\). We observe that \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{i})\) is isomorphic to the centralizer of \(Aut(\mathbb{Z}_{2k+1})\) in \(S_{2k}\). Thus, \(\phi\in C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\wr S_{I}\).
For a given \(\sigma\in S_{m}\) with a standard cycle decomposition consisting of \(N_{j}\) cycles of length \(j\), we use that \(C_{S_{m}}(\sigma)\cong\Pi_{j=1}^{m}(\mathbb{Z}_{j}\wr S_{N_{j}})\). As the centralizer of a subgroup is the intersection of the centralizers of its generators, we conclude our result.
It is known that the hyperoctahedral group can be represented as the signed permutation group. We have used this terminology to inspire our generalization.
**Definition 3.10**.: _We define the \(Aut(\mathbb{Z}_{2k+1})\)-value permutation group of degree \(I\) as a generalization of the elements of the permutation group in \(M_{I}(\mathbb{Z}_{2k+1})\) where each nonzero element can take any value of \(Aut(\mathbb{Z}_{2k+1})\). Note that \(M_{I}(\cdot)\) denotes the \((\cdot)\) valued matrices indexed by \(I\)._
**Proposition 3.11**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) for \(k\in\mathbb{N}\). Then there exists a group representation \(\rho:Aut(\mathbb{Z}_{2k+1})\wr S_{I}\to M_{I}(\mathbb{Z}_{2k+1})\) as the \(Aut(\mathbb{Z}_{2k+1})\)-value permutation group of degree \(I\). Furthermore, when considered as scalars in the base ring, \(Aut(\mathbb{Z}_{2k+1})\subseteq Z(\rho(Aut(\mathbb{Z}_{2k+1})\wr S_{I}))\)._
Proof.: We have that the standard basis vectors \(\{e_{i}\}_{i\in I}\) form a generating set of \(\mathbb{Z}_{2k+1}^{I}\) as a \(\mathbb{Z}\) module. Now \(Aut(\mathbb{Z}_{2k+1})\wr S_{I}\) can be represented as the \(Aut(\mathbb{Z}_{2k+1})\)-value permutation group by considering the action \(e_{i}\mapsto a_{i}e_{\sigma(i)}\) for a given \((\times_{i\in I}a_{i},\sigma)\in Aut(\mathbb{Z}_{2k+1})\wr S_{I}\), which we define as \(\rho\). As a commutative base ring, \(\mathbb{Z}_{2k+1}^{*}\subseteq Z(M_{I}(\mathbb{Z}_{2k+1}))\), which are in the \(Aut(\mathbb{Z}_{2k+1})\)-value permutation group by definition and in \(Z(Aut(\mathbb{Z}_{2k+1})\wr S_{I})\) because \(\rho\) is a multiplicative group representation.
**Lemma 3.12**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), and \(\rho\) be defined by Proposition 3.11. Then \(\rho(Aut(\mathbb{Z}_{2k+1})\wr S_{I})\leq Aut(M)\)._
Proof.: We claim that \(\rho(Aut(\mathbb{Z}_{2k+1})\wr S_{I})\leq Aut(M)\). We have a bijection, \(f:\{ae_{i}:a\in\mathbb{Z}_{2k+1}-\{0\},\,i\in I\}\to C\), where \(e_{i}\) denotes the standard basis vector, and \(C\) is the coatoms of the form specified in
Example 2.9, and \(f\) is defined by \(ae_{i}\mapsto\Pi_{j\in I}V_{j}\), \(V_{i}=a\), and \(V_{j}=X\) for all \(j\neq i\). Then we have that \(\rho(Aut(\mathbb{Z}_{2k+1})\wr S_{I})\) defines a permutation of the coatoms by its action on \(\{ae_{i}:a\in\mathbb{Z}_{2k+1}-\{0\}\), \(i\in I\}\). We have already shown in Proposition 3.11 that \(\rho(Aut(\mathbb{Z}_{2k+1})\wr S_{I})\) commutes with the action of \(\mathbb{Z}\) scalar multiplication of units on \(\{ae_{i}:a\in\mathbb{Z}_{2k+1}-\{0\}\), \(i\in I\}\), so we conclude our result.
**Theorem 3.13**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). Then \(Aut(\mathbb{Z}_{2k+1})\wr S_{I}\leq Aut(M)\cong Per_{Aut(\mathbb{Z}_{2k+1})}(C _{M})\)._
Proof.: As a result of Lemma 3.12 and Lemma 3.7, we have \(Aut(\mathbb{Z}_{2k+1})\wr S_{I}\) is isomorphic to a subgroup of \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\).
We have now rederived the group theoretic results of [6] and shown them in much greater generality. We highlight the cyclic case below as the operator algebraic structure will be most similar to the results of Section 5, and identical if \(k=1\).
**Corollary 3.14**.: _If \(2k+1\) is prime, \(\mathbb{Z}_{2k}\wr S_{I}\cong Aut(M)\cong Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\)._
Proof.: By Theorem 3.9, \(Per_{Aut(\mathbb{Z}_{2k+1})}(C_{M})\cong C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\wr S_{I}\). If \(2k+1\) is an odd prime, then \(Aut(\mathbb{Z}_{2k+1})\cong\mathbb{Z}_{2k}\), and a generator acts on the \(2k\) nonzero residues as a single \(2k\)-cycle, so by the cycle-type formula \(C_{S_{2k}}(\mathbb{Z}_{2k})\cong\mathbb{Z}_{2k}\wr S_{1}\cong\mathbb{Z}_{2k}\).
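Theorem 3.9 and Corollary 3.14 can also be verified by brute force for small moduli. The sketch below is illustrative only; the function name is ours, and the action of \(Aut(\mathbb{Z}_{n})\) on the nonzero residues is taken to be multiplication by units, as above.

```python
from itertools import permutations
from math import gcd

def centralizer_size(n):
    """Order of the centralizer in S_{n-1} of Aut(Z_n) acting on {1,...,n-1}
    by multiplication by units (brute force, so only small n are feasible)."""
    pts = range(1, n)
    gens = [{i: (a * i) % n for i in pts} for a in pts if gcd(a, n) == 1]
    count = 0
    for image in permutations(pts):
        p = dict(zip(pts, image))
        if all(p[g[i]] == g[p[i]] for g in gens for i in pts):
            count += 1
    return count

# Prime moduli: the centralizer is cyclic of order 2k, as in Corollary 3.14.
assert centralizer_size(5) == 4
assert centralizer_size(7) == 6
# Composite modulus 9: |Aut(Z_9)| = 6, but a generator has cycle type (6, 2) on the
# eight nonzero residues, so Theorem 3.9 gives a centralizer of order 6 * 2 = 12.
assert centralizer_size(9) == 12
```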
## 4 Logic of the Critical Multi-Cubic Lattice
Up to this point, we have explored the algebraic structure of the critical multi-cubic lattice. In order to consider a propositional logic, we require a set of propositions and logical connectives. Since we are starting with a lattice, we begin with the connectives of \(\wedge\) and \(\vee\), and our set of propositions.
In order to embed our propositions as projections onto a Hilbert space as is standard with quantum logic, we proceed to generalize [3, Theorem 2.1.1] to the critical multi-cubic lattice. In the following theorem, we are using the infinite tensor product detailed in [7].
**Theorem 4.1**.: _Let \(H\) be Hilbert space constructed as an infinite tensor product for vector spaces of dimension \(2k\), \(k\in\mathbb{N}\) over an index set \(I\). For the Hilbert lattice \(HL\), there exists a critical multi-cubic lattice \(M\) such that \(M\subseteq HL\), and the atoms of \(M\) are projections onto subspaces forming an orthonormal basis of \(H\)._
Proof.: We see that each simple tensor \(\otimes_{i\in I}a_{i}\), \(a_{i}\in\mathbb{Z}_{2k+1}-\{0\}\), is a C-sequence [7], and we represent it by a respective projection operator. We use the notation \(\Pi_{i\in I}a_{i}\) to define an element \(m\) forming the atoms of \(M\) an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\).
Let \(\vee_{M}\) denote join in \(M\) and \(i:M\to HL\). We need only show that for all \(a,b\in M\), \(i(a\vee_{M}b)\in HL\). Let \(a=\Pi_{i\in I}a_{i}\) and \(b=\Pi_{i\in I}b_{i}\), then \(a\vee_{M}b=\Pi_{i\in J}a_{i}+X_{I-J}\), where \(J=\{i\in I:a_{i}=b_{i}\}\), and \(i(a\vee_{M}b)\) is the projection \(V=\otimes_{i\in I}V_{i}\), \(V_{i}=a_{i}\) for \(i\in J\) and \(V_{i}=\mathbb{C}^{2k}\) for \(i\notin J\). By atomisticity of Proposition 2.12, \(i\) is an order homomorphism and therefore a lattice homomorphism.
The set of simple tensors are all rank \(1\) and therefore atoms in \(HL\). The atoms of \(M\) form an orthonormal system in \(H\). For any distinct atoms \(a\), \(b\in M\), we have that there exists \(i\in I\) such that \(a_{i}\neq b_{i}\), so \(\langle a_{i},b_{i}\rangle_{H_{a}}=0\) which implies that \(\langle a,b\rangle_{H}=0\). Furthermore, these vectors span \(\Pi^{\prime}\otimes_{\alpha\in I}H_{\alpha}\), and therefore are dense in \(H\)
To relate to our previous notation, we can see that for all \(a\in M\), \(a\) can be considered as an element \(p\in HL\) by the tensor coordinatization \(p_{i}=I_{2k}\) for \(i\in\sigma(a)\) and \(p_{i}=p_{a_{i}}\), the projection onto the \(a_{i}\) subspace of \(\mathbb{C}^{2k}\), otherwise.
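This coordinatization can be made concrete in finite dimensions. The following sketch is an illustration under the assumption of two tensor factors of dimension \(4\) (the \(2\) critical multi-cubic lattice over \(\mathbb{Z}_{5}\)): atoms become rank-one projections, coatoms become projections tensored with the identity, and `numpy.kron` plays the role of the tensor product.

```python
import numpy as np

d = 4                                    # 2k = 4, one tensor factor per index
I4 = np.eye(d)

def rank1(j):
    """Projection onto the j-th standard basis vector of C^4."""
    P = np.zeros((d, d))
    P[j, j] = 1.0
    return P

def atom(a, b):
    """Atom (a, b): both indices fixed, giving a rank-one projection."""
    return np.kron(rank1(a), rank1(b))

def coatom_first(a):                     # index 0 fixed, index 1 free
    return np.kron(rank1(a), I4)

def coatom_second(b):                    # index 1 fixed, index 0 free
    return np.kron(I4, rank1(b))

# Distinct atoms project onto orthogonal one-dimensional subspaces ...
assert np.allclose(atom(0, 1) @ atom(0, 2), np.zeros((d * d, d * d)))
# ... and each atom is the meet (product) of the two coatoms above it.
assert np.allclose(coatom_first(0) @ coatom_second(1), atom(0, 1))
```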
**Remark 4.2**.: _We want to highlight that the lattice homomorphism defined in Theorem 4.1 is defined on the lattice \(M\) considered as a subset of \(HL\) and not a lattice homomorphism from \(M\) to the lattice \(HL\)._
We have seen that we can embed a critical multi-cubic lattice into a Hilbert lattice in much the same way as the cubic lattice. From an analytic perspective, these objects have been shown to share many of the same qualities. However, this is where the similarities stop for the most part. By the arguments of the previous section, we have removed the cardinality constraints of [1], and defined and abstracted to critical multi-cubic automorphisms.
Now that we have projections acting as propositions, we can explore an additional connective, complement. One of the interesting characteristics of the cubic lattice is that we had an order preserving complementation, \(\Delta\). In fact, the action of \(\Delta\) on the coatoms of cubic lattice embedded in the Hilbert lattice exactly matched the action of negation on the same coatoms. However, we lose this in the general case.
**Proposition 4.3**.: _The action of \({}^{\perp}\) on a critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), \(k>2\), \(M\), embedded in Hilbert lattice, \(HL\), in the manner of Theorem 4.1 does not define a permutation on \(C\), the coatoms of \(C\) of \(M\)._
Proof.: We can show that \({}^{\perp}\) does not map coatoms to coatoms. Fix a coatom \(ae_{i}=c\in C\), then \(c^{\perp}=\vee_{b\in\mathbb{Z}_{2k+1}-\{a,0\}}be_{i}\), which is not even in \(M\).
Although the action \(\perp\) does not act nicely on the critical multi-cubic lattice, we still have an order preserving map that is nearly a complement. We are now in a position to readdress the definition of \(\Delta\) in the context of a critical multi-cubic lattice. We first recall the definition of \(\Delta\) on the multi-cubic lattice.
**Definition 4.4**.: _Let \(\overline{M}\) be a multi-cubic lattice and \(a,b\in\overline{M}\) with \(a\leq b\). We define \(\sigma(a)\) to be the subset of \(\Omega\) such that \(i\notin\sigma(a)\) if and only if \(a_{i}\neq 0\), and \(\Delta:\overline{M}\times\overline{M}\to\overline{M}\) such that \(\Delta(b,a)=2\Gamma(b)-\Gamma(a)+X_{\sigma(a)}\)[1]._
Equivalently, we can define \(\Delta\) purely by its action of the multi-cubic lattice.
**Definition 4.5**.: _Let \(\overline{M}\) be a multi-cubic lattice and \(a,b\in\overline{M}\), \(a\leq b\). Then we define \(\Delta:\overline{M}\times\overline{M}\to\overline{M}\) by_
\[\Delta(b,a)_{i}=\begin{cases}-a_{i}&i\in\sigma(b)-\sigma(a)\\ a_{i}&\text{otherwise}\end{cases}\]
_for all \(i\in I\)._
As a quick justification, we note that both actions flip the sign of \(a_{i}\) if and only if \(\Gamma(a)_{i}\) is nonzero and \(i\in\sigma(b)\). Another phrasing is that if \(a\) and \(b\) are critical with \(a\leq b\), then coordinate \(i\) changes sign exactly when \(i\in\sigma(b)-\sigma(a)\). However, this action defined on product \(\mathbb{Z}\) modules will be defined as just another module homomorphism when \(b=1\).
By the definition of \(\Delta\), one can directly see that it utilizes the \(\mathbb{Z}\) module structure of the MCL. However, we can abstract \(\Delta\) by only focusing on its action and define it in the context of a multivalued signed set of the MCL. This will allow us to generalize the ideas proven for the cubic lattice.
**Definition 4.6**.: _Let \(M\) be a critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). For all \(n\in M\), we define \(\Delta_{M}(n,\cdot)\) as multiplication by \(2k\) on the respective ideal \((n)\)._
**Lemma 4.7**.: _Let \(M\) be a critical multi-cubic lattice, \(n\in M\). Then \(\Delta_{M}(n,\cdot)\) always exists and is well defined._
Proof.: For any odd \(2k+1\geq 3\), \(\Delta_{M}\) is just scalar multiplication by \(2k\), which exists by construction. As \(\gcd(2k,2k+1)=1\), \(\Delta_{M}\) is well defined by Proposition 2.7.
The fact that such a map factors through the projection map should not be surprising as the original definition operated on the equivalence class representatives.
**Theorem 4.8**.: _Let \(M\) be a critical multi-cubic lattice such that \(M\cong L(S)\) for an indexing set \(S\) for the lattice isomorphism \(f\) of Theorem 2.10. The action of \(\Delta_{M}(n,m)\) on \(M\) is equal to the action of \(\Delta(f(n),f(m))\) on \(L(S)\)._
Proof.: As the cubic lattice is atomistic, we need only show that the claim holds for the atoms of the respective ideal. By the proof of Lemma 4.7, we have a well defined \(\Delta_{M}(n,m)\) that is equivalent to multiplication by \(-1\) on the ideal \((n)\).
When considering the action of the pushforward on \(L(S)\), \(f\circ\Delta_{M}(n,m)\), we swap every nonzero coordinate of \(m\) whose index lies in \(\sigma(n)\) from \(1\) to \(-1\) or vice versa. If we let \((B^{-},B^{+})=f(n)\) and \((A^{-},A^{+})=f(m)\), then the pushforward is equal to \((B^{-}\cup\{A^{+}-B^{+}\},B^{+}\cup\{A^{-}-B^{-}\})=\Delta(f(n),f(m))\).
While the interplay of \(\Delta\) and \(\bot\) is lost, \(\Delta\) has some of the properties of an order preserving complement.
**Corollary 4.9**.: _Let \(M\) be a critical multi-cubic lattice, \(a,b\in M\). Then the following hold._
1. \(\Delta(a,a)=a\)__
2. _if_ \(a\leq b\)_, then_ \(\Delta(b,a)\leq b\)__
3. _if_ \(a\leq b\)_, then_ \(\Delta(b,a)=2b-a\)__
4. _if_ \(a\leq b\) _then_ \(\Delta(b,\Delta(b,a))=a\)__
5. _if_ \(a\leq b\leq c\) _then_ \(\Delta(c,a)\leq\Delta(c,b)\)__
6. _if_ \(a\leq b\)_, then_ \(\Delta(b,a)=b\) _if and only if_ \(a=b\)__
7. _if_ \(a<b\)_, then_ \(\Delta(b,a)=b\) _or_ \(\Delta(b,a)\lor b=\emptyset\)__
Proof.: The result follows from [1, Proposition 2.13] and that \(\Delta\) factors through the quotient map.
We can see that \(\Delta\) is order preserving (2,4,5), and it acts as the identity on the diagonal elements (1) of the product \(M\times M\). In addition, it retains a notion of a local complement (6,7).
Lastly, we look at the logical connective of implication.
**Definition 4.10**.: _Let \(\overline{M}\) be a multi-cubic lattice, \(a,b\in\overline{M}\). Define implication by \(b\to a=\Gamma(a)+X_{\sigma(a)\cup\sigma(b)^{c}}\)[1, Definition 2.14]._
**Proposition 4.11**.: _Multi-cubic lattices form implication algebras [1, Lemma 2.15]._
Using the above result, we only need to show that implication is well defined on the quotient in order to form an implication algebra.
**Lemma 4.12**.: _Implication is well defined when considered on the critical multi-cubic lattice._
Proof.: As the definition of implication is in terms of the critical element of a, we have that \(b\to a=b\to\Gamma(a)\). Therefore it is sufficient to show \(b\to a=\Gamma(b)\to a\). We note that for all \(c\in M\) such that \(\Gamma(c)=\Gamma(b)\), we have that \(X_{\sigma(c)}=X_{\sigma(b)}\), and equivalently \(X_{\sigma(c)}^{c}=X_{\sigma(b)}^{c}\). Therefore the equality follows.
**Theorem 4.13**.: _Critical multi-cubic lattices are implication algebras with implication defined by \(b\to a=a+X_{\sigma(a)\cup\sigma(b)^{c}}\)._
Proof.: This follows directly from Proposition 4.11 and Lemma 4.12.
In summation,
**Remark 4.14**.: _A critical multi-cubic lattice \(M(\wedge,\vee,\Delta,\rightarrow)\) is an implication algebra, with a weak form of complementation and propositions faithfully embedded as projections onto a Hilbert space._
We believe that this is the beginnings of a propositional quantum logic for higher spin qudits. The relation to qudits will be made clear in the following section.
## 5 Operator Algebras of a Critical Multi-Cubic Lattice
Now that we have an embedding of the critical multi-cubic lattice into a Hilbert space, we can consider the representation of its automorphism group. From this, we obtain the generalized Pauli matrices and the quantum Fourier transform. To our knowledge, this is the first time that these operators have acted on an infinite system of qudits.
**Definition 5.1**.: _Let \(M\) be a critical multi-cubic lattice with \(M\subseteq HL\) as in Theorem 4.1, \(C\) be the coatoms of \(M\), and \(A\) be the atoms of \(M\). We denote the projection operator onto \(c\in C\) as \(p_{c}\), and the projection operator onto \(a\in A\) as \(p_{a}\)._
**Proposition 5.2**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). Then there exists a unitary representation \(\rho:Aut(M)\to B(H)\) where \(H\) is constructed in Theorem 4.1._
Proof.: For any \(\phi\in Aut(M)\), we equivalently consider \(\phi\in Per_{Aut(\mathbb{Z}_{2k+1})}C\) by Lemma 3.7. We use coatomicity of the critical multi-cubic lattice to define an action on the atoms of \(M\). As the atoms of M form an orthonormal basis of \(H\), we can then consider the linear extension to obtain \(\rho(\phi)\). Since \(\rho(\phi)\) sends an orthonormal basis to an orthonormal basis, we have that it must be unitary.
Ultimately, we have made a choice in our construction of \(\rho\). This is because we have an issue of ambiguity when translating actions on projection operators to a given subspace. As an example, \(\pm I\) both define exactly the same action on the atoms of \(M\) when considered as unitary similarity transformations of projection operators. We now discuss an equivalent way of defining \(\rho\) that highlights this. Recall that the operators \(p_{a}\) are rank one projections onto vectors forming an orthonormal basis of \(H\). From another perspective, every atom is an infinite tensor product of exactly one element \(a_{i}\in H_{i}\) of our original choice of orthonormal basis of \(H_{i}\) for each \(i\in I\). In order to reduce ambiguity, for each \(p_{a}\) and a given permutation \(\phi:\{p_{a}\}\rightarrow\{p_{a}\}\), we choose the vector \(a=\otimes_{i\in I}a_{i}\) as an equivalence class representative of the vectors \(\{h\in H:p_{a}=|h\rangle\langle h|\}=\{za:z\in\partial B_{1}(0)\subseteq\mathbb{C}\}\), so \(\rho(\phi):H\to H\) is defined by the map \(a\mapsto\phi(a)\).
As a consequence of this choice, we have the following.
**Proposition 5.3**.: _Let \(M\subseteq HL\) as in Theorem 4.1 and \(U\in W^{*}(\rho(Aut(\mathbb{Z}_{2k+1})))^{\prime}\) be unitary. There exists a unitary \(V\in\rho(Aut(M))\) such that \(Ad_{U}=Ad_{V}:M\to M\) and \(U=VS\) for \(S\in W^{*}(\{p_{c}\}_{c\in C})\cap W^{*}(\rho(Aut(\mathbb{Z}_{2k+1})))^{\prime}\)._
Proof.: If \(U\in W^{*}(\rho(Aut(\mathbb{Z}_{2k+1})))^{\prime}\), then \(Ad_{U}\in\rho(Aut(M))\) by Lemma 3.7. Now let \(V=\rho(Ad_{U})\subseteq W^{*}(U_{\Delta})^{\prime}\). Then \(Ad_{V^{*}}=Ad_{V}^{-1}\), so \(Ad_{UV^{*}}|_{M}=Ad_{I}|_{M}\). As the action of inner automorphism stabilizes \(M\), \(UV^{*}\in W^{*}(\{p_{c}\}_{c\in C})^{\prime}\), so there exists \(S\in W^{*}(\{p_{c}\}_{c\in C})^{\prime}=W^{*}(\{p_{c}\}_{c\in C})\)[3, Proposition 3.1.15] such that \(U=VS\). Furthermore, \(S=UV^{*}\), and we conclude that \(S\in W^{*}(\rho(Aut(\mathbb{Z}_{2k+1})))^{\prime}\) as well.
**Proposition 5.4**.: \(W^{*}(Aut(\mathbb{Z}_{2k+1}))^{\prime}=W^{*}(\rho(Per_{Aut(\mathbb{Z}_{2k+1}) }C),W^{*}(\{p_{c}\}_{c\in C})\cap W^{*}(Aut(\mathbb{Z}_{2k+1}))^{\prime})\)_._
Proof.: This proof is a direct application of Proposition 5.3.
As an interesting side note, when reduced to the finite case, the following corollary can be viewed as a generalization of the group theoretic result that \(Z(B_{n})\) is equal to the identity and the antipodal map.
**Corollary 5.5**.: _Let \(M\) be a critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). Then \(Z(\rho(Aut(M)))=\rho(Aut(\mathbb{Z}_{2k+1}))\)._
Proof.: One containment is shown in Proposition 3.11. The reverse containment follows from Proposition 5.4 and the fact that \(\rho(Aut(M))\cap W^{*}(\{p_{c}\}_{c\in C})=I\).
We now reconstruct the relevant matrix unit structure of \(B(H)\) in terms of critical multi-cubic lattice automorphisms.
**Lemma 5.6**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\), \(C_{\alpha}\) be the coatoms of \(M\) for a fixed index \(\alpha\in I\), and \(H\) be constructed in the manner of Theorem 4.1, then \(B(H)\cong M_{2k}(B)\), where \(B\) is the mutual commutant of a set of matrix units._
Proof.: Let \((\cdot)\) denote the respective element in the permutation group contained in \(M_{2k}(\mathbb{Z}_{2})\) represented in the standard basis, and \(C_{\alpha}\) denote the coatoms for a fixed index \(\alpha\in I\). Now we claim that the following operators form a system of matrix units of \(B(H)\). For \(i\in C_{\alpha}\):
\[e_{ii} =p_{c}\] \[e_{ij} =e_{ii}(ij)\] \[e_{ji} =(ij)e_{ii}\]
We can directly compute that \(\sum_{i\in C_{\alpha}}e_{ii}=I\), that \(e_{ij}=e_{ji}^{*}\) as the permutation group is a subgroup of the unitary group, and that \(e_{ij}e_{kl}=\delta_{jk}e_{il}\). Therefore, \(B(H)\cong M_{2k}(B)\), where \(B\) is the commutant of the matrix units, see Lemma 4.27 of [8].
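For concreteness, these relations can be checked numerically in a finite-dimensional case; the sketch below (with \(2k=4\), i.e. the \(\mathbb{Z}_{5}\) case, and diagonal rank-one projections standing in for the \(p_{c}\)) verifies \(\sum_{i}e_{ii}=I\), \(e_{ij}=e_{ji}^{*}\), and \(e_{ij}e_{kl}=\delta_{jk}e_{il}\):

```python
import itertools
import numpy as np

n = 4  # 2k = 4, the Z_5 case; diagonal projections stand in for the p_c

def unit(i, j):
    p = np.zeros((n, n))
    p[i, i] = 1.0                      # e_ii = p_i
    perm = np.eye(n)
    perm[[i, j]] = perm[[j, i]]        # permutation matrix of the transposition (i j)
    return p @ perm                    # e_ij = e_ii (i j)

assert np.allclose(sum(unit(i, i) for i in range(n)), np.eye(n))
for i, j, k_, l in itertools.product(range(n), repeat=4):
    expected = unit(i, l) if j == k_ else np.zeros((n, n))
    assert np.allclose(unit(i, j) @ unit(k_, l), expected)
    assert np.allclose(unit(i, j).T, unit(j, i))   # e_ij* = e_ji for these real matrices
print("matrix unit relations verified")
```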
**Definition 5.7**.: _Let H be constructed as in Theorem 4.1. We define the unitary representation \(\rho_{i}:C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\to B(H)\) as elements of the permutation group in \(M_{2k}(\mathbb{Z}_{2})\) of the \(ith\) index of the tensor product defining \(H\)._
**Lemma 5.8**.: _With the conditions of Lemma 5.6, \(W^{*}(\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))),\{p_{C_{i}}\}_{l=1}^{2k}) \subseteq W^{*}(\{e_{ij}\}_{i,j=1}^{2k})\)._
Proof.: We claim \(W^{*}(\rho_{i}(C_{S_{2k}}Aut(\mathbb{Z}_{2k+1})),\{p_{C_{i}}\}_{l=1}^{2k}) \subseteq W^{*}(\{e_{ij}\}_{i,j=1}^{2k})\).
The projections onto the appropriate coatoms are the diagonal elements by construction. In place of \(\Delta\) enforcing the conditions to be a cubic lattice, we have that the module conditions of the critical multi-cubic lattice must be preserved.
We identify \(\phi\in C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\) with \(\sigma_{\phi}\in S_{2k}\subseteq M_{2k}(\mathbb{Z}_{2})\) represented by the permutation group. Thus, any element of \(\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\) is a linear combination of matrix units.
Note that our previous results of [3] only fully generalize to a particular subset of critical multi-cubic lattices.
**Lemma 5.9**.: _With the conditions of Lemma 5.6, \(B=W^{*}(\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))),\{p_{C_{i}}\}_{l=1}^{2k})^{\prime}\) if and only if \(2k+1\) is prime._
Proof.: It is sufficient to consider when \(W^{*}(\{e_{ij}\}_{i,j=1}^{2k})\subseteq W^{*}(\rho_{i}(Aut(\mathbb{Z}_{2k+1})),\{p_{C_{i}}\}_{l=1}^{2k})\), which is equivalent to the case when the action of \(\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\) on \(\{e_{ii}\}_{i=1}^{2k}\) generates \(\{e_{ij}\}_{i,j=1}^{2k}\). This occurs exactly when \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) acts transitively on \(\mathbb{Z}_{2k+1}-\{0\}\) with our relabeling of Theorem 3.9.
Let \(e\neq\sigma\in Aut(\mathbb{Z}_{2k+1})\). Consider \(C_{S_{2k}}(\sigma)\cong\Pi_{j=1}^{l}(\mathbb{Z}_{j}\wr S_{N_{j}})\). The action of \(\Pi_{j=1}^{l}(\mathbb{Z}_{j}\wr S_{N_{j}})\) on \(\sigma\) must preserve the cycle type of \(\sigma\). Let \(1\leq i<j\leq 2k\), then the action of \(\Pi_{j=1}^{l}(\mathbb{Z}_{j}\wr S_{N_{j}})\) can map \(i\) to \(j\) only if \(i\) and \(j\) are both in cycles of the same length. Thus, if \(\Pi_{j=1}^{l}\mathbb{Z}_{j}\wr S_{N_{j}}\) acts transitively, \(\sigma\) must have a cycle decomposition of \(m\) cycles of length \(\frac{2k}{m}\) where \(m\) divides \(2k\). On the other hand, these cycles are the orbits of the action \(\langle\sigma\rangle\) on \(\mathbb{Z}_{2k+1}\) disregarding the orbit of \(e\in\mathbb{Z}_{2k+1}\). For \(a\in\mathbb{Z}_{2k+1}\), if \(|a|=2k+1\), then \(|orb(a)|=\phi(a)\), where \(\phi\) denotes the Euler totient function. If \(|a|<2k+1\), then \(|orb(a)|=\phi(2k+1)/\phi(d)\) for some \(1<d\,|\,(2k+1)\). As \(2k+1\) is odd, \(d>2\) and \(\phi(d)>1\). Therefore, \(\phi(2k+1)/\phi(d)\neq\phi(2k+1)\), so \(2k+1\) must be prime.
Conversely, if \(2k+1\) is prime, recall that \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))=C_{S_{2k}}(\sigma)\cong\mathbb{Z}_{2k}\) whose action as a \(2k\) cycle in \(S_{2k}\) is transitive on \(\{a:1\leq a\leq 2k\}\).
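The number-theoretic core of this argument can also be checked directly: multiplication by a unit acts as a single \((m-1)\)-cycle on \(\{1,\dots,m-1\}\) exactly when the odd modulus \(m=2k+1\) is prime. A short verification sketch:

```python
from math import gcd

def has_full_cycle(m):
    """Is there a unit a (mod m) whose multiplication action on {1,...,m-1}
    is a single (m-1)-cycle, i.e. a primitive root?"""
    def order(a):
        x, n = a % m, 1
        while x != 1:
            x, n = (x * a) % m, n + 1
        return n
    return any(gcd(a, m) == 1 and order(a) == m - 1 for a in range(2, m))

for m in range(3, 30, 2):                          # odd moduli m = 2k+1
    is_prime = all(m % p for p in range(2, m))
    assert has_full_cycle(m) == is_prime
print("a 2k-cycle exists in the multiplicative action iff 2k+1 is prime")
```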
We generalize the Hadamard matrix for a critical multi-cubic lattice. Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) where \(2k+1\) is prime. Let \(\mathbb{Z}_{2k}\cong\rho(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\) be generated by the unitary \(X_{2k}\). As \(X_{2k}\) is unitary, there exists a unitary \(U_{2k}\) such that \(U_{2k}^{*}X_{2k}U_{2k}=D_{2k}\), where \(D_{2k}\) is the diagonal matrix of the \(2k\) roots of unity in counterclockwise order. In the case where \(2k+1=3\), \(X_{2k}=X\), and \(U_{2k}=H\), where \(X\) is the Pauli gate, and \(H\) is the normalized Hadamard gate.
**Definition 5.10**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) where \(2k+1\) is prime. Define \(U_{H}=\otimes_{i\in I}U_{2k}\)._
**Example 5.11**.: _As noted, if \(M\) is an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{3}\), then \(U_{2k}\) is the Hadamard matrix up to a normalization constant. If \(M\) is an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{5}\), then_
\[U_{2k}=\frac{1}{2}\begin{bmatrix}1&1&1&1\\ 1&i&-1&-i\\ 1&-1&1&-1\\ 1&-i&-1&i\end{bmatrix},X_{2k}=\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\end{bmatrix}\text{, and }D_{2k}=\begin{bmatrix}1&0&0&0\\ 0&i&0&0\\ 0&0&-1&0\\ 0&0&0&-i\end{bmatrix}.\]
_In general, up to a normalization constant, \(\left[U_{2k}\right]_{ij}=\omega_{j}^{i-1}\), where \(\omega_{j}\) is the \(j\)th element of the \(2k\) roots of unity in counterclockwise ordering starting at 1. We recognize \(U_{2k}\) as the quantum Fourier transform, \(X_{2k}\) as the shift matrix, and \(D_{2k}\) as the clock matrix._
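These identities are easy to verify numerically. The sketch below, for \(2k=4\) with 0-indexed entries \([U]_{jk}=\omega^{jk}/\sqrt{2k}\), checks that \(U_{2k}\) is unitary and that \(U_{2k}^{*}X_{2k}U_{2k}=D_{2k}\):

```python
import numpy as np

n = 4                                         # 2k = 4, the Z_5 example above
omega = np.exp(2j * np.pi / n)
U = np.array([[omega ** (i * j) for j in range(n)] for i in range(n)]) / np.sqrt(n)
X = np.roll(np.eye(n), 1, axis=1)             # shift matrix X_{2k}
D = np.diag([omega ** j for j in range(n)])   # clock matrix D_{2k}

assert np.allclose(U.conj().T @ U, np.eye(n))   # the quantum Fourier transform is unitary
assert np.allclose(U.conj().T @ X @ U, D)       # U_{2k}^* X_{2k} U_{2k} = D_{2k}
print("the quantum Fourier transform diagonalizes the shift into the clock matrix")
```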
Originally introduced by [4], it is well known that these matrices form the framework of quantum mechanical dynamics in finite dimensions. Furthermore, \(X_{2k}\) and \(D_{2k}\) are known as Generalized Pauli matrices, which are a representation of the respective Heisenberg-Weyl group [5].
**Definition 5.12**.: _Let \(X_{(2k)_{i}}=\otimes_{j\in I}A_{j}\), where \(A_{i}=X_{2k}\) and \(A_{j}=I_{2k}\) for all \(j\neq i\). Similarly, let \(D_{(2k)_{i}}=\otimes_{j\in I}A_{j}\), where \(A_{i}=D_{2k}\) and \(A_{j}=I_{2k}\) for all \(j\neq i\)._
**Proposition 5.13**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) where \(2k+1\) is prime and \(C\) be the coatoms of \(M\). Then \(W^{*}(\rho(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))))\subseteq W^{*}(\{X_{(2k)_{i}}\}_{i\in I})\subseteq W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\)._
Proof.: We first show \(W^{*}(\rho(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))))\subseteq W^{*}(\{X_{(2k)_{i}}\}_{i\in I})\). Since \(2k+1\) is prime, \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))=Aut(\mathbb{Z}_{2k+1})\cong\mathbb{Z}_{2k}\), so we reduce to the case of the representation of the cyclic group \(\rho(\mathbb{Z}_{2k})\), which is a subset of the standard representation of the permutation group for our chosen basis of Theorem 4.1. As \(\rho(\mathbb{Z}_{2k})=\otimes_{i\in I}X_{(2k)_{i}}\) by construction, we have shown the first containment.
Now we show \(W^{*}(\{X_{(2k)_{i}}\}_{i\in I})\subseteq W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\). We have \(D_{(2k)_{i}}=\sum_{j=1}^{2k}\omega_{j}p_{j}\) where \(\{\omega_{j}\}\) are the \(2k\) cyclic roots of unity and \(p_{j}\) are the rank one projections onto the basis used to construct \(H_{i}\) in Theorem 4.1, so that \(D_{(2k)_{i}}\) is diagonal in our choice of basis. As all \(p_{j}\) are atoms of \(M\), \(D_{(2k)_{i}}\subseteq W^{*}(\{p_{c}\}_{c\in C})\) by coatomisticity. Lastly, we conclude that \(X_{(2k)_{i}}=U_{H}D_{(2k)_{i}}U_{H}^{*}\subseteq W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\).
**Proposition 5.14**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) where \(2k+1\) is prime and \(C\) be the coatoms of \(M\). Then \(W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\subseteq W^{*}(\{\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\}_{i\in I})\)._
Proof.: It is sufficient to reduce to an arbitrary \(i\in I\). By Proposition 2.12, we need only show that \(W^{*}(X_{2k_{i}})=W^{*}(\{U_{H}p_{ij}U_{H}\}_{j=1}^{2k})\), where \(p_{ij}\) is the projection onto the \(j\)th atom forming an orthonormal basis of \(H_{i}\) used to construct \(H\) in Theorem 4.1.
We have already shown one containment, so it is sufficient to show that the vector spaces have equal rank. This follows as \(rank(W^{*}(X_{2k_{i}}))=2k\): we have a matrix with distinct eigenvalues, so \(X_{2k_{i}}\) has minimal polynomial equal to its characteristic polynomial, \(x^{2k}-1\). Similarly, \(rank(\mathrm{W}^{*}(\{U_{H}p_{ij}U_{H}\}_{j=1}^{2k}))=2k\) as well, since it is just a unitary transformation of \(2k\) orthogonal projections.
**Lemma 5.15**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) where \(2k+1\) is prime and \(C\) be the coatoms of \(M\). Then \(W^{*}(\{\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\}_{i\in I})=W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\) if and only if \(2k+1\) is prime._
Proof.: If \(2k+1\) is prime, the result follows by Proposition 5.13 and Proposition 5.14.
Assume that \(2k+1\) is not prime and \(W^{*}(\{\rho_{i}(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\}_{i\in I})=W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\). Specifically this implies that \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) contains a \(2k\) cycle. In addition, \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) is an abelian centralizer and therefore a maximal abelian subgroup of \(S_{2k}\). Similarly, \(2k\) cycles are abelian subgroups equal to their centralizer in \(S_{2k}\), so they are maximal abelian subgroups. Then \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) contains a \(2k\) cycle if and only if \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) is equal to a \(2k\) cycle. This contradicts that \(Aut(\mathbb{Z}_{2k+1})\leq C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) as either \(Aut(\mathbb{Z}_{2k+1})=\{1\}\) or \(Aut(\mathbb{Z}_{2k+1})=C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\). The former is false as \(|Aut(\mathbb{Z}_{2k+1})|=\phi(2k+1)>1\) for all \(k\geq 1\), and the latter holds if and only if \(2k+1\) is prime.
Our final theorem can be viewed as a generalization of the result that generalized Pauli matrices form an orthonormal basis.
**Theorem 5.16**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\) where \(2k+1\) is prime, \(C\) be the coatoms of \(M\), and \(H\) be constructed as in Theorem 4.1. Then \(B(H)=W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C},\{p_{c}\}_{c\in C})=W^{*}(\{\rho_{i }(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1})))\}_{i\in I},\{p_{c}\}_{c\in C})\) if and only if \(2k+1\) is prime._
Proof.: By Lemma 5.15, the second equality holds if and only if \(2k+1\) is prime. Now assume that \(2k+1\) is prime, and consider coatoms \(p_{i}\in\{Up_{c}U^{*}\}_{c\in C}\) and \(q_{i}\in\{p_{c}\}_{c\in C}\) for some fixed index \(i\in I\). Then \(p_{i}\wedge q_{i}=\lim_{n\to\infty}(p_{i}q_{i}p_{i})^{n}=\lim_{n\to\infty}(\frac{1}{2k}p_{i})^{n}=0\). By construction, any atom \(a\in W^{*}(\{Up_{c}U^{*}\}_{c\in C})\) is bounded by a coatom for each \(i\in I\), so we assume without loss of generality that \(a\leq p_{i}\), and by symmetry we assume \(b\leq q_{i}\). Then \(a\wedge b\leq p_{i}\wedge q_{i}=0\). Therefore the atomistic Boolean lattices of projections associated with \(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C}\) and \(\{p_{c}\}_{c\in C}\) have distinct sets of atoms. By atomisticity, \(W^{*}(\{p_{c}\}_{c\in C})\)
and \(W^{*}(\{U_{H}p_{c}U_{H}^{*}\}_{c\in C})\) are abelian von Neumann algebras whose only common projections are \(0\) and \(I\), so their intersection is \(\mathbb{C}I\) by [9].
Lastly, we have a result about the possible transitions of the critical multi-cubic lattice using only gates associated with the automorphism group.
**Corollary 5.17**.: _Let \(M\) be an \(|I|\) critical multi-cubic lattice over \(\mathbb{Z}_{2k+1}\). The action of \(Aut(M)\) acts transitively on the atoms if and only if \(2k+1\) is prime._
Proof.: As \(Aut(M)\cong C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\wr S_{I}\), we need only show that \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) acts transitively on the coatoms for a fixed index \(i\in I\). As the action is just standard modular multiplication, we apply the fact that \(C_{S_{2k}}(Aut(\mathbb{Z}_{2k+1}))\) acts transitively on \(\mathbb{Z}_{2k+1}-\{0\}\) if and only if \(2k+1\) is prime. By coatomisticity of the critical multi-cubic lattice, we conclude the result.
Our original interest in the critical multi-cubic lattice began as a generalization of the cubic lattice. Fortuitously, as in [3], the appropriate representation of the critical multi-cubic lattice created a framework for infinite systems of quantum gates, in this case qudits. In order to do so, we have lost the Boolean-adjacent properties of the cubic lattice, namely axiom 2, and therefore a nice characterization of the logic that followed. An interesting future pursuit would be to more fully discuss the implicative structure of the critical multi-cubic lattice. Another pursuit is to discover an intermediate structure between the generality of the critical multi-cubic lattice and the Boolean-like logic of the cubic lattice.
In order to highlight the utility and importance of moving between a logical formulation and algebraic formulation, we conclude with a quote by J. Sylvester: "The application of Algebra to Logic is now an old tale\(-\)the application of Logic to Algebra marks a far more advanced stadium of the human intellect," [4].
|
2306.15559 | RansomAI: AI-powered Ransomware for Stealthy Encryption | Cybersecurity solutions have shown promising performance when detecting
ransomware samples that use fixed algorithms and encryption rates. However, due
to the current explosion of Artificial Intelligence (AI), sooner than later,
ransomware (and malware in general) will incorporate AI techniques to
intelligently and dynamically adapt its encryption behavior to be undetected.
It might result in ineffective and obsolete cybersecurity solutions, but the
literature lacks AI-powered ransomware to verify it. Thus, this work proposes
RansomAI, a Reinforcement Learning-based framework that can be integrated into
existing ransomware samples to adapt their encryption behavior and stay
stealthy while encrypting files. RansomAI presents an agent that learns the
best encryption algorithm, rate, and duration that minimizes its detection
(using a reward mechanism and a fingerprinting intelligent detection system)
while maximizing its damage function. The proposed framework was validated in a
ransomware, Ransomware-PoC, that infected a Raspberry Pi 4, acting as a
crowdsensor. A pool of experiments with Deep Q-Learning and Isolation Forest
(deployed on the agent and detection system, respectively) has demonstrated
that RansomAI evades the detection of Ransomware-PoC affecting the Raspberry Pi
4 in a few minutes with >90% accuracy. | Jan von der Assen, Alberto Huertas Celdrán, Janik Luechinger, Pedro Miguel Sánchez Sánchez, Gérôme Bovet, Gregorio Martínez Pérez, Burkhard Stiller | 2023-06-27T15:36:12Z | http://arxiv.org/abs/2306.15559v1 | # _RansomAI_: AI-powered Ransomware
###### Abstract
Cybersecurity solutions have shown promising performance when detecting ransomware samples that use fixed algorithms and encryption rates. However, due to the current explosion of Artificial Intelligence (AI), sooner than later, ransomware (and malware in general) will incorporate AI techniques to intelligently and dynamically adapt its encryption behavior to be undetected. It might result in ineffective and obsolete cybersecurity solutions, but the literature lacks AI-powered ransomware to verify it. Thus, this work proposes _RansomAI_, a Reinforcement Learning-based framework that can be integrated into existing ransomware samples to adapt their encryption behavior and stay stealthy while encrypting files. _RansomAI_ presents an agent that learns the best encryption algorithm, rate, and duration that minimizes its detection (using a reward mechanism and a fingerprinting intelligent detection system) while maximizing its damage function. The proposed framework was validated in a ransomware, Ransomware-PoC, that infected a Raspberry Pi 4, acting as a crowdsensor. A pool of experiments with Deep Q-Learning and Isolation Forest (deployed on the agent and detection system, respectively) has demonstrated that _RansomAI_ evades the detection of Ransomware-PoC affecting the Raspberry Pi 4 in a few minutes with >90% accuracy.
Ransomware, Reinforcement Learning, Artificial Intelligence, Malware, Evasion
## I Introduction
With the growing progress of digitalization, companies have become more dependent on information systems that uphold their business missions. This has contributed to a recent increase in malware-based attacks targeting heterogeneous enterprises [1]. Among existing attack vectors, ransomware is one of the most significant threats affecting companies due to its impact in terms of data and economic losses. In 2022, industries experienced a rise of 87% in ransomware attacks [2]. Moreover, although ransom payments are slowly declining [2], ransomware is still a highly impactful threat to most companies. As an example, IBM assessed in 2022 that the average loss of a ransomware attack was 4.54M USD [3].
Detecting cyberattacks, and ransomware in particular, is the first step toward mitigating their impact. As outlined by recent work [4], dynamic detection approaches incorporating behavioral data into machine and deep learning (ML and DL) techniques have proven highly effective against ransomware. However, while these behavioral-based approaches are not vulnerable to obfuscation (as static approaches are), they rely on specific assumptions. In particular, both classification and anomaly detection approaches assume that malicious behavior is stable and static enough to allow ML/DL models to differentiate it from normal or benign behavior [5].
However, integrating Artificial Intelligence (AI) into ransomware samples could change this by adding intelligent dynamicity to encryption behaviors. It would complicate detection since ransomware samples could learn the encryption rate that maximizes impact while minimizing detection. Furthermore, the encryption rate could automatically change depending on the device status. Therefore, the literature presents an open challenge that focuses on studying the suitability of integrating AI-based techniques into ransomware to learn how and when to encrypt real devices while staying undetected. Addressing this challenge is vital in order to later, in a second stage, evaluate the detection capabilities of current cybersecurity mechanisms and update them if needed.
This work fills this research gap and proposes _RansomAI_, a framework using Reinforcement Learning (RL) and fingerprinting to maximize the stealth capabilities of ransomware samples while maximizing their damage function. _RansomAI_ contains an RL-based agent that learns the optimum ransomware behavior (encryption rate, duration, and algorithm combination) according to the reward received for each ransomware configuration and the device behavior. The reward is calculated according to the output of a fingerprinting and ML-based anomaly detection system and the encryption rate of the selected ransomware configuration. A prototype of _RansomAI_ (available in [6]) that implements Deep Q-Learning and Isolation Forest (for the agent and the anomaly detector, respectively) has been integrated into a real ransomware called Ransomware-PoC. The prototype effectiveness has been evaluated in a Raspberry Pi 4, acting as a crowdsensor through a pool of experiments with six configurations of Ransomware-PoC dealing with the encryption algorithm, rate, and duration. The experiments have demonstrated that _RansomAI_ evades the detection of Ransomware-PoC in a few minutes and with high accuracy.
The remainder of this paper is structured as follows. Section II introduces the background and related approaches.
Then Section III presents the problem and scenario tackled by Section IV with the framework architecture and its implementation. The performance of the solution is demonstrated in the experiments presented in Section V. Finally, Section VI draws conclusions from the results.
## II Related Work
In contrast to many works that demonstrate the applicability of AI for defense, there are only a few papers discussing the offensive perspective. TABLE I compares the most relevant and recent ones. In [7, 8, 9], the authors leveraged RL to evade a static detection system. Here, static detection refers to an ML-based system that detects whether a piece of software is malicious based on the static structure of the malware sample. Thus, the three approaches consider RL to find an optimal way to structure the malware sample before executing the payload on the victim's premises. This is achieved by relying on an adversarial technique to evade the ML-based detection system. None of these solutions consider ransomware, and only [8] presents experimental results in a realistic scenario where commercial antivirus systems were deployed in cloud-based virtual machines.
Aside from using RL, [10] presented how Genetic Algorithms can help with byte-level modifications to evade malware detection. [11] proposed an ML model to inject strategic system failure into a cyber-physical system. The model was not used to address evasion but to optimize the time and location of the failure. Therefore, no adversarial techniques were used in that work. _RoboTack_ was proposed in [12] to estimate attack impact instead of evading detection. A neural network with three hidden layers gave insight into the optimal deployment target, time, and strategy. Like the previous two approaches, _RoboTack_ was only evaluated in a simulated environment.
While all aforementioned evasion approaches deal with a static detection system, _DeepLocker_[13] is the first generic malware that does not rely on static obfuscation for evasion. In contrast, it achieves evasion by employing dynamic obfuscation of arbitrary sources before the breach event. _DeepLocker_ encrypts the payload and injects itself into the target system, making static analysis infeasible. The trigger conditions are transformed into a deep neural network (DNN), encoding it in another black box system. The DNN was trained on multiple target markers (_e.g.,_ system-level features, geolocation, input and output systems). Upon recognizing the target, these markers can be converted into the encryption key, unleashing the payload. _DeepLocker_ was evaluated in a real scenario.
In conclusion, while several approaches leverage AI for malware, most assume a static defense model. Furthermore, most solutions rely on adversarial attacks for evasion, thereby focusing on optimizing the malware before its execution. Finally, most works do not present evaluation in realistic scenarios, and no work is focused on ransomware, which is currently one of the most harmful malware families.
## III Problem Statement & Scenario Description
Some of the most recent and promising cybersecurity solutions combine fingerprinting and ML to successfully detect anomalies and classify ransomware samples [14]. However, they do not consider ransomware adapting its encryption mechanisms according to specific criteria to stay undetected. Therefore, this work seeks to answer the following question: Is it possible to effectively and automatically evade novel detection systems by incorporating AI-based techniques into ransomware? Here, the main goal of AI would be to learn and select in an online and autonomous fashion what malicious configuration maximizes encryption while minimizing its detection. In addition, if evasion is possible, the next question would be: How much time is needed to adapt the ransomware behavior? This work considers the following scenario to tackle these questions.
* A Raspberry Pi 4 acting as a real sensor of ElectroSense [15], a platform that monitors the radio frequency spectrum. The Raspberry Pi would host the intelligent fingerprinting detection system that must be evaded. More information about the potential detection mechanism can be found in [14].
* An extended version of Ransomware-PoC composed of i) a client running on the Raspberry Pi and encrypting files by using different algorithms, rates, and duration; and ii) a command and control (C&C) server that selects between six different configurations (see TABLE II) the encryption setup executed by the client. It is important to mention that the C&C only has control over the client, so it is not able to change the configuration of the Raspberry Pi or the anomaly detector.
## IV _RansomAI_ Architecture
This section presents the design and implementation details of _RansomAI_, a framework adding intelligence to existing
ransomware samples to evade detection systems while maximizing encryption. Fig. 1 shows the main components of _RansomAI_, which is available in [6].
In summary, the Agent uses RL to learn and deploy (interacting with the C&C) the best ransomware configuration (from a list of existing ones, see TABLE II). The learning process is driven by the feedback provided by a Reward function, which prioritizes the ransomware configuration that maximizes the encryption rate while minimizing its detection. For that, the Reward function computes the output of an Anomaly Detector that uses behavioral fingerprinting and ML, and the encryption rate of the selected ransomware configuration. More in detail, the Anomaly Detector (which tries to mimic a genuine system potentially deployed in the Raspberry Pi) compares the current state (behavior) of the Raspberry Pi with its normal one (monitored when it is not under attack). The Fingerprint Monitor is responsible for continuously gathering the Raspberry Pi states. All the previous components are deployed on the server where the C&C runs, except the Fingerprint Monitor, which runs on the Raspberry Pi with the Ransomware client. More details about each component are provided below.
### _State_
A state is the agent's vision of the environment (Raspberry Pi) at a given time. Modeling states precisely is crucial to allow the agent to understand the impact of each action on the environment and learn proper actions.
_RansomAI_ proposes behavioral fingerprinting to represent states. In particular, it uses the _perf_ Linux command to collect different events from _system calls_, _CPU_, _device drivers_, _scheduler_, _network_, _file system_, _virtual memory_, and _random numbers_ families. The reason for selecting these heterogeneous data sources is to detect small perturbations produced by encryption phases and later evade robust detection systems. More in detail, 50 features were chosen from the previous families. For this selection, 103 features were initially monitored in time windows of 5 s (decided according to previous work [14]) during 8 hours of normal Raspberry Pi behavior (sensor not under attack). Then, duplicated, temporal, and constant features were removed during a data cleaning process. All remaining features were plotted, and 28 features whose normal behavior was volatile across fingerprints were manually identified and subsequently dropped. Finally, features with more than 99% correlation were removed. After the whole process and as mentioned, 50 features were selected to create the device fingerprints.
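A minimal sketch of how one such 5 s fingerprint window could be collected with `perf stat` is shown below; the event list and CSV parsing are illustrative assumptions rather than the exact 50-feature set described above:

```python
import subprocess

# illustrative subset of perf events; the real fingerprint spans 50 features
# from system call, CPU, driver, scheduler, network, file system, memory and RNG families
EVENTS = ["context-switches", "cpu-migrations", "page-faults",
          "cache-misses", "branch-misses"]

def collect_fingerprint(window_s: int = 5) -> dict:
    cmd = ["perf", "stat", "-a", "-x", ",", "-e", ",".join(EVENTS),
           "sleep", str(window_s)]
    stderr = subprocess.run(cmd, capture_output=True, text=True).stderr
    fingerprint = {}
    for line in stderr.splitlines():      # perf writes its counters to stderr
        fields = line.split(",")          # CSV fields: value, unit, event name, ...
        if len(fields) >= 3 and fields[0].strip().isdigit():
            fingerprint[fields[2]] = int(fields[0])
    return fingerprint
```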
### _Action_
Actions are the way in which the agent interacts with the environment. In _RansomAI_, actions correspond to the execution of the six ransomware configurations created in the extended version of Ransomware-PoC (see TABLE II). It is important to mention that the original version of Ransomware-PoC provides a fixed encryption configuration (in terms of algorithm and rate), and this work extended it with new functionality in terms of different algorithms, encryption rates, and pauses between bursts.
### _Anomaly Detector_
The Anomaly Detector component is critical for _RansomAI_ since it tries to mimic the functionality of existing novel detection systems that might be deployed on the Raspberry Pi to detect ransomware attacks. Specifically, its outputs are used by the Reward function to provide positive or negative feedback for each encryption configuration.
_RansomAI_ proposes the combination of unsupervised ML and behavioral fingerprinting. More in detail, it considers an anomaly detection system to model the normal behavior of the Raspberry Pi (acting as a sensor of ElectroSense) and detect anomalies produced by each ransomware configuration. The behavior of the device is represented by the 50 features selected to model the environment state (previously described). These features cover as many different data sources as possible to detect changes in device resource usage. Then, different unsupervised ML models such as Isolation Forest, Autoencoder, Local Outlier Factor, and One-Class Support Vector Machine were trained with the normal behavior of the device (running for 8 hours) and evaluated with the normal and six ransomware configurations. Isolation Forest was selected for the prototype implementation after analyzing the performance of each model when detecting normal and under-attack behaviors. More in detail and as can be seen in TABLE III, normal behavior and configurations 2 and 3 were correctly detected with almost 89% True Negative Rate (TNR) and 0% False Negative Rate (FNR), respectively. Then, configurations 1, 4, 5, and 6 were incorrectly detected as normal behavior with 77-91% FNR. These results were achieved with a 5% contamination factor in Isolation Forest hyperparameters. At this point, it can be concluded that it is feasible to encrypt files without being detected by novel detection approaches. However, evaluating if _RansomAI_ can automatically learn the optimum encryption configuration within an acceptable time is still essential.
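For illustration, a minimal scikit-learn sketch of such a detector is given below, assuming the normal 5 s fingerprints are available as a matrix with 50 columns (the file name and shapes are placeholders):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# placeholder: 8 h of 5 s windows -> 5760 rows, 50 selected features per row
X_normal = np.load("normal_fingerprints.npy")

detector = IsolationForest(contamination=0.05, random_state=0).fit(X_normal)

def is_anomalous(fingerprint: np.ndarray) -> bool:
    # predict() returns +1 for inliers (normal behavior) and -1 for outliers (attack)
    return detector.predict(fingerprint.reshape(1, -1))[0] == -1
```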
### _Reward_
Positive and negative rewards allow the Agent to learn if selected ransomware configurations are good or bad. _RansomAI_
Fig. 1: _RansomAI_ Architecture
provides rewards according to the output of the anomaly detector and the encryption rate of the selected configuration. In other words, if one ransomware configuration is not detected, a higher encryption rate should be more favorable than a lower encryption rate. Moreover, if the ransomware configuration is detected, a lower encryption rate is worse than a higher one. In summary, encryption time is important and considered by the reward mechanism.
_RansomAI_ proposes two separate reward functions, one for hidden and another for detected encryption. After several experiments and fine-tuning, \(R_{hid}(r)=10\ln(r+1)+h\) and \(R_{det}(r)=\frac{-d}{\max(r,1)}-d\) are the two proposed reward functions. In both functions, \(h\) and \(d\) are constants aiming to distinguish clearly between rewards for hidden and detected behaviors, and \(r\) is the current encryption rate. The constants were set as \(h=0\) and \(d=20\) to avoid impacting the weights in the network with unnecessarily high rewards. Therefore, with the reward functions and the probability of being detected (see TABLE III), the Agent should learn that configuration 4 is the most convenient because it achieves the best expected average reward (\(0.8018\cdot 62.166+0.1982\cdot(-20.04)=45.87\)). More in detail, following configuration 4, Ransomware-PoC could encrypt approximately 200 KB per minute, 12 MB per hour, 288 MB per day, and 8.6 GB per month without being detected.
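A direct sketch of both reward functions with \(h=0\) and \(d=20\); the encryption rate \(r\) is assumed to be expressed in whatever units the framework measures internally:

```python
import math

H, D = 0.0, 20.0   # constants h and d

def reward(rate: float, detected: bool) -> float:
    if detected:
        return -D / max(rate, 1.0) - D        # R_det(r)
    return 10.0 * math.log(rate + 1.0) + H    # R_hid(r)
```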
### _Agent_
The agent of _RansomAI_ uses RL and learns following a trial-and-error approach. When the agent takes one action (selects a ransomware configuration), the environment (Raspberry Pi) changes to a new state (a behavioral fingerprint called the afterstate), and the agent receives a reward. This loop is repeated, and sequences of the previous steps (state, action, afterstate, and reward) are called episodes. An episode concludes when no more actions can be taken.
In _RansomAI_, since the state space is vast due to the number of features (50) modeling the behavior fingerprint and their continuous values, a tabular approach for the RL model is not feasible. Instead, a neural network is used together with the Deep Q-Learning algorithm. Q-learning is an off-policy temporal difference control algorithm defined by \(Q(S_{t},A_{t})\gets Q(S_{t},A_{t})+\alpha\big{[}R_{t+1}+\gamma\ max_{a}\ Q(S_{t+1},a)-Q(S_{t},A_{t})\big{]}\). \(Q\) is the learned action-value function that approximates the optimal action-value function based on value estimates of state-action pairs \((S,A)\) and reward \(R\) independent of the followed policy. In addition, the agent follows an epsilon-greedy policy that decides with probability \(\epsilon\) to take a random action, which may not coincide with the current estimated optimal action. This policy ensures the agent explores different actions to find the optimum one. Regarding the Agent neural network, it has three fully-connected layers. The input layer has 50 neurons (matching the features of the fingerprint), the hidden layer has 25 neurons, and the output layer size is 6, corresponding to the number of ransomware configurations. The details regarding the hyperparameters selection, such as exploration versus exploitation (\(\epsilon\) and \(\delta\)), learning rate (\(\alpha\)), discount factor (\(\gamma\)), hidden neurons, activation function, and weights initialization, are provided in Section V.
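A minimal PyTorch sketch of the 50-25-6 Q-network with an epsilon-greedy policy and a single temporal-difference update is given below; the activation, optimizer, and initialization are simplified assumptions rather than the exact Log-SiLU scheme and settings selected in Section V:

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    # 50 fingerprint features -> 25 hidden neurons -> 6 ransomware configurations
    def __init__(self, n_features=50, n_hidden=25, n_actions=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, n_hidden), nn.SiLU(),
                                 nn.Linear(n_hidden, n_actions))

    def forward(self, x):
        return self.net(x)

def select_action(q_net, state, epsilon=0.20, n_actions=6):
    # epsilon-greedy policy: explore a random configuration with probability epsilon
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def q_update(q_net, optimizer, state, action, reward, next_state, gamma=0.10):
    # one temporal-difference step of Deep Q-Learning
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```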
## V Experiments
This section evaluates the performance of _RansomAI_ while learning the optimum configuration to encrypt the Raspberry Pi presented in Section III. For that, the first experiment performs an individual search of hyperparameters to optimize the agent accuracy and learning time. Then, the second one shows the learning accuracy across time.
### _Hyperparameters Search_
The goal of this experiment is to find the optimum configuration of hyperparameters to maximize accuracy and reduce the training time of _RansomAI_. In this sense, the following hyperparameters of the RL-based Agent are considered: i) exploration settings (\(\epsilon\), \(\delta\), \(\alpha\), and \(\gamma\)), ii) number of hidden neurons in the neural network, iii) activation functions used in the forward and backward pass of the neural network, and iv) method for weight initialization in the neural network. The following baseline configuration is fixed while one hyperparameter per test is fine-tuned. Episodes = 5,000, \(\epsilon\) = 0.4, \(\delta\) = 0.01, \(\alpha\) = 0.0050, \(\gamma\) = 0.10, hidden neurons = 40, activation = _Log and SiLU_, weights = _He_.
#### V-A1 **Exploration vs. Exploitation**
It presents the trade-off between exploring new actions to discover new states and exploiting learned optimal actions. For different exploration and exploitation configurations, TABLE IV shows the average learning time of the Agent, the accuracy after training, the number of episodes required to find the best encryption configuration, and the average Q-value difference (AQD). As can be seen, the exploration does not significantly impact the training or learning duration, as most values for the average training time are similar. Instead, the impact is manifested in the final accuracy as well as the number of episodes and AQD. The best setting (\(\epsilon=0.20,\delta=0.01\)) has the highest performance considering the trade-off for exploration and exploitation. It offers great accuracy, 99.63%, close to the maximum of 99.84%, needs one of the lowest numbers of episodes, and achieves high differences between the Q-values of configuration 4 and the closest ones (5 and 6).
#### V-A2 **Learning Rate**
It represents the weight given to the update of the Q-values. If the step size is too small, the current Q-value estimates will likely get stuck in a local maximum. In contrast, when \(\alpha\) is too large, convergence is impossible in most cases. TABLE V shows that adjusting the learning rate causes significant differences in the average learning time. In this sense, it was infeasible to do a complete hyperparameter exploration given the infinite value space.
However, \(\alpha=0.0050\) is the best configuration considering the average training time and episodes to learn the best encryption configuration (configuration 4). Although other variants achieved a higher AQD, they required more episodes and time to find configuration 4 as the best. In summary, it trades the speed against distinction capabilities as the Q-values for configuration 4 are very close to those of configurations 5 and 6. Nevertheless, given the presented selection of variants, \(\alpha=0.0050\) is considered the best approximation of the optimal setting due to its clear advantage in speed and accuracy.
#### Iv-A3 **Discount Factor**
It controls the importance of future estimations over the current one. With \(\gamma\) approaching 1, the agent considers future value estimations more strongly. In contrast, with \(\gamma=0\), the agent only maximizes immediate estimates. TABLE VI shows the impact of the discount factor on the average learning time, the accuracy, the number of episodes, and the AQD values to find configuration 4 as the best one. Interestingly, the accuracy remains relatively stable for a discount factor between 0.2 and 0.6. Analyzing the results, the setting \(\gamma=0.10\) is considered best as it has the shortest average training time and the third lowest number of episodes. In addition, it provides a reasonable distinction of Q-values from configuration 4 (the best) and the rest.
#### Iv-A4 **Hidden Neurons**
TABLE VII shows how the number of neurons in the hidden layer affects the learning time, accuracy, number of episodes, and AQD of the agent. Sizes from 10 to 50 neurons were tested, achieving the best setup configuration with 25 hidden neurons.
#### Iv-A5 **Activation Function**
TABLE VIII lists the activation functions evaluated in the first and second layers of the neural network and their impact on the learning process. As can be seen, the activation function significantly impacts the average training time. More in detail, the Log-SiLU setting obtains the best results because, although Log-ReLU achieves slightly better accuracy, many Q-values are equal to zero due to the dying ReLU problem.
#### Iv-A6 **Weights Initialization**
Plain random-uniform distribution, Xavier uniform distribution, and He initialization were tested. The plain uniform initialization randomly selects weights from a uniform distribution in the \([0,1]\) range. In the Xavier initialization, the distribution is dynamically scaled according to the dimensions of the previous layer. Lastly, He initialization generates random numbers selected from a standard normal distribution. After evaluating the three of them, Xavier provided the best learning time (105.4 min), accuracy (99.31%), and the number of episodes (90).
### _Agent Final Performance_
According to the results obtained in the previous experiment, the Agent of _RansomAI_ is configured with the following hyperparameters: \(\epsilon=0.20\), \(\delta=0.01\),
\(\alpha=0.005\), \(\gamma=0.30\), hidden neurons = 25, activation functions = _Log-SiLU_. TABLE IX shows the accuracy with which the Agent selects configuration 4 for different episodes and therefore learning times. As can be seen, \(>\)96% accuracy is obtained in less than 10 minutes.
In conclusion, the obtained results demonstrated that _RansomAI_ is able to learn how to evade intelligent detection systems in just a few minutes and with promising accuracy. Therefore, more efforts are needed to improve detection systems against intelligent ransomware samples.
## VI Summary and Future Work
This work proposes _RansomAI_, a framework able to intelligently and automatically adapt the encryption behaviors of ransomware and avoid being detected by dynamic defense mechanisms. The main contribution of _RansomAI_ is an Agent that combines RL and device fingerprinting to learn the encryption rate, duration, and algorithm combination that maximizes encryption and minimizes detection. The learning task is driven by a reward mechanism that prioritizes the encryption rate and stealth capabilities of ransomware configurations. Stealth is evaluated using an anomaly detection system that uses ML and fingerprinting, as proposed in the literature.
_RansomAI_ has been deployed in a real scenario composed of a Raspberry Pi 4 acting as a crowd-sensor affected by a recent ransomware called Ransomware-PoC. More in detail, the components of _RansomAI_ have been integrated into Ransomware-PoC, which has been modified to dynamically adapt its algorithms, encryption rates, and duration. A pool of experiments combining Deep Q-Learning and Isolation Forest (in the Agent and detection system, respectively) has demonstrated that _RansomAI_ evades the detection of Ransomware-PoC affecting the Raspberry Pi 4 in 2 minutes with \(>\)90% accuracy.
Future work plans to evaluate the functionality of _RansomAI_ with different benign device behaviors to show its adaptability. It is also planned to try other malware samples, such as backdoors leaking sensitive data. Finally, it is expected to work on intelligent and adaptive detection mechanisms to detect _RansomAI_.
## Acknowledgment
This work has been partially supported by _(a)_ the Swiss Federal Office for Defense Procurement (armasuisse) with the CyberForce project (CYD-C-2020003) and _(b)_ the University of Zurich UZH.
|
2301.11798 | MedSegDiff-V2: Diffusion based Medical Image Segmentation with
Transformer | The Diffusion Probabilistic Model (DPM) has recently gained popularity in the
field of computer vision, thanks to its image generation applications, such as
Imagen, Latent Diffusion Models, and Stable Diffusion, which have demonstrated
impressive capabilities and sparked much discussion within the community.
Recent investigations have further unveiled the utility of DPM in the domain of
medical image analysis, as underscored by the commendable performance exhibited
by the medical image segmentation model across various tasks. Although these
models were originally underpinned by a UNet architecture, there exists a
potential avenue for enhancing their performance through the integration of
vision transformer mechanisms. However, we discovered that simply combining
these two models resulted in subpar performance. To effectively integrate these
two cutting-edge techniques for the Medical image segmentation, we propose a
novel Transformer-based Diffusion framework, called MedSegDiff-V2. We verify
its effectiveness on 20 medical image segmentation tasks with different image
modalities. Through comprehensive evaluation, our approach demonstrates
superiority over prior state-of-the-art (SOTA) methodologies. Code is released
at https://github.com/KidsWithTokens/MedSegDiff | Junde Wu, Wei Ji, Huazhu Fu, Min Xu, Yueming Jin, Yanwu Xu | 2023-01-19T03:42:36Z | http://arxiv.org/abs/2301.11798v2 | # MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer
###### Abstract
The Diffusion Probabilistic Model (DPM) has recently gained popularity in the field of computer vision, thanks to its image generation applications, such as Imagen, Latent Diffusion Models, and Stable Diffusion, which have demonstrated impressive capabilities and sparked much discussion within the community. Recent studies have also found DPM to be useful in the field of medical image analysis, as evidenced by the strong performance of the medical image segmentation model MedSegDiff in various tasks. While these models were originally designed with a UNet backbone, they may also potentially benefit from the incorporation of vision transformer techniques. However, we discovered that simply combining these two approaches resulted in subpar performance. In this paper, we propose a novel transformer-based conditional UNet framework, as well as a new Spectrum-Space Transformer (SSFormer) to model the interaction between noise and semantic features. This architectural improvement leads to a new diffusion-based medical image segmentation method called MedSegDiff-V2, which significantly improves the performance of MedSegDiff. We have verified the effectiveness of MedSegDiff-V2 on eighteen organs of five segmentation datasets with different image modalities. Our experimental results demonstrate that MedSegDiff-V2 outperforms state-of-the-art (SOTA) methods by a considerable margin, further proving the generalizability and effectiveness of the proposed model.
Keywords: Multi-rater learning · Optic disc/cup segmentation · Glaucoma diagnosis
## 1 Introduction
Medical image segmentation is the process of dividing a medical image into distinct regions of interest. It is a crucial step in many medical image analysis applications, such as diagnosis, surgical planning, and image-guided surgery. The ability to better understand and track changes over time in these images is vital for medical professionals. In recent years, there has been a growing interest in automated medical image segmentation methods, as they have the potential to improve the consistency and accuracy of results. With the advancement of deep learning techniques, several studies have successfully applied neural network-based models, including classical convolutional neural networks (CNNs) [11] and
the recently popular vision transformers (ViTs) [2, 22], to medical image segmentation tasks.
Very recently, the Diffusion Probabilistic Model (DPM) [9] has gained popularity as a powerful class of generative models, capable of generating high-quality and diverse images [18, 19, 20]. Inspired by its success, some researchers have attempted to apply DPM in the field of medical image segmentation [6, 13, 16, 23, 25]. One such method, called MedSegDiff [25], achieved great success and outperformed previous state-of-the-art (SOTA) segmentation methods, such as nnUNet and TransUNet. However, these methods are all based on classical UNet backbones. In a separate line of research, vision transformers, which have shown outstanding performance in vision representation learning on natural images, have also brought success in medical image segmentation and have quickly become a popular approach. Among them, transformer-convolution hybrid architectures have attracted the most attention and achieved the best performance.
A natural next step is to combine the transformer-based UNet, such as TransUNet, with DPM. However, we found that this straightforward strategy leads to subpar performance. One issue is that the transformer-abstracted conditional feature is not compatible with the feature of the backbone. The transformer learns deep semantic features from the raw image, while the diffusion backbone abstracts features from a corrupted, noisy mask. Additionally, the dynamic and global nature of the transformer makes it more sensitive than CNNs. Thus, the adaptive condition strategy used in MedSegDiff causes larger variance in the outputs in the transformer setting. This requires running the model more times for ensemble and makes it harder to converge during training.
To overcome the aforementioned challenges, we have designed a novel transformer-based conditional UNet architecture for the diffusion process. The main idea is to use two different conditioning techniques to condition the backbone model with the source image segmentation features in the diffusion process. One is the anchor condition, which integrates the conditional segmentation features into the diffusion model encoder to reduce the diffusion variance. The other is the semantic condition that integrates the conditional segmentation embedding into the diffusion embedding. To effectively bridge the gap between diffusion noise embedding and conditional semantic features, we propose a novel transformer mechanism called the Spectrum-Space Transformer (SS-Former) that learns the interaction between them. This allows the model to have a smaller diffusion variance while also benefiting from the global and dynamic representation capabilities provided by the transformer.
More specifically, in the anchor condition, we integrate the decoded segmentation feature of the condition model into the encoded features of the diffusion model. We design a novel Gaussian Spatial Attention mechanism to implement this integration. It relaxes the conditional segmentation feature with more uncertainty, thus providing the diffusion process more flexibility to further calibrate the predictions. In the semantic condition, we integrate the semantic segmentation embedding into the diffusion model embedding using our novel SS-Former. SS-Former is an interlaced cross-attention chain with one part that enhances the
semantic embedding using the noise embedding and another part that enhances the noise embedding using the semantic embedding. We design a novel cross-attention mechanism over the frequency domain to eliminate the high-frequency noises in the noise embedding, thus aligning the noise and semantic features. We have verified MedSegDiff-V2 on a wide range of medical segmentation tasks, such as optic-cup segmentation, brain tumor segmentation, abdominal organs segmentation, and thyroid nodule segmentation. The images used in these tasks have different modalities, such as MRI, CT, and ultrasonography. MedSegDiff-V2 outperforms the previous state-of-the-art (SOTA) on all the tasks with different modalities, which showcases the generalization and effectiveness of the proposed method. In brief, the contributions of this paper are:
* The first work to integrate transformers into a diffusion-based model for general medical image segmentation.
* An anchor condition with Gaussian Spatial Attention to mitigate the diffusion variance and speed up the ensemble.
* A semantic condition with SS-Former to model the segmentation noise and semantic feature interaction.
* SOTA performance on sixteen medical segmentation tasks with different image modalities.
## 2 Method
### Overall architecture
The overall flow of MedSegDiff-V2 is shown in Figure 1. To introduce the process, consider a single step \(t\) of the diffusion process. The noisy mask \(x_{t}\) is first inputted to a UNet with conditional integration, called the Diffusion Model. The condition sources are the segmentation features extracted from the raw images through another standard UNet, called the Condition Model. Two different conditioning manners are applied to the Diffusion Model: anchor condition and semantic condition. Following the flow of the input, the anchor condition is first imposed on the encoder of the Diffusion Model. It integrates the anchor segmentation features, which are the decoded segmentation features of the Condition Model, into the encoded features of the Diffusion Model. This allows the diffusion model to be initialized by a rough but static reference, which helps to reduce the diffusion variances. The semantic condition is then imposed on the embedding of the Diffusion Model. This integrates the semantic segmentation embedding of the Condition Model into the embedding of the Diffusion Model. This conditional integration is implemented by the SS-Former, which bridges the gap between the noise and semantic embedding, and abstracts a stronger representation with the advantage of the global and dynamic nature of transformer.
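For concreteness, the following is a minimal PyTorch-style sketch of the conditioning flow at a single step \(t\); the tiny modules and the two placeholder conditioning callables are illustrative stand-ins rather than the released implementation, and the actual anchor and semantic conditions are detailed in the next subsections.

```python
import torch
import torch.nn as nn

# Minimal stand-ins (not the released implementation) so the conditioning flow of a
# single step t can be traced end-to-end; all names and shapes are illustrative.
class TinyUNet(nn.Module):
    def __init__(self, in_ch, feat=8):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, feat, 3, padding=1)  # "encoder"
        self.dec = nn.Conv2d(feat, 1, 3, padding=1)      # "decoder" -> mask / noise logits
    def forward(self, x):
        f = torch.relu(self.enc(x))
        return f, self.dec(f)

def diffusion_step(image, x_t, cond_model, diff_model, anchor_condition, semantic_condition):
    c_feat, anchor_seg = cond_model(image)       # Condition Model: semantic features + rough mask
    f_enc, _ = diff_model(x_t)                   # Diffusion Model encodes the noisy mask x_t
    f_enc = anchor_condition(anchor_seg, f_enc)  # anchor condition on the encoder (Sec. 2.2)
    f_emb = semantic_condition(c_feat, f_enc)    # semantic condition on the embedding (Sec. 2.3)
    return diff_model.dec(f_emb), anchor_seg     # noise prediction + anchor for the loss in Eq. (1)

cond_model, diff_model = TinyUNet(in_ch=3), TinyUNet(in_ch=1)
anchor_condition = lambda a, f: f + torch.sigmoid(a)   # placeholder for Gaussian Spatial Attention
semantic_condition = lambda c, f: f + c                # placeholder for SS-Former
eps, anchor = diffusion_step(torch.randn(2, 3, 32, 32), torch.randn(2, 1, 32, 32),
                             cond_model, diff_model, anchor_condition, semantic_condition)
```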
MedSegDiff is trained using a standard noise prediction loss \(\mathcal{L}_{noise}\) following DPM [9] and an anchor loss \(\mathcal{L}_{anchor}\). \(\mathcal{L}_{anchor}\) is a combination of soft dice loss and cross-entropy loss. Specifically, the total loss function is represented as:
\[\mathcal{L}_{total}^{t}=\mathcal{L}_{noise}^{t}+(t\equiv 0\pmod{\alpha})( \mathcal{L}_{dice}+\beta\mathcal{L}_{ce}) \tag{1}\]
where the indicator \(t\equiv 0\pmod{\alpha}\) controls how often the Condition Model is supervised through the hyper-parameter \(\alpha\), and \(\beta\) is another empirical hyper-parameter that weights the cross-entropy loss.
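A minimal sketch of this objective, assuming standard soft-Dice and binary cross-entropy implementations; the tensor names and the values of \(\alpha\) and \(\beta\) are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(pred, target, eps=1e-6):
    p = torch.sigmoid(pred)
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

def total_loss(t, eps_pred, eps_true, anchor_seg, gt_mask, alpha=5, beta=1.0):
    """Eq. (1): noise-prediction loss plus periodic supervision of the Condition Model."""
    loss = F.mse_loss(eps_pred, eps_true)          # L_noise at step t
    if t % alpha == 0:                             # indicator (t = 0 mod alpha)
        loss = loss + soft_dice_loss(anchor_seg, gt_mask) \
                    + beta * F.binary_cross_entropy_with_logits(anchor_seg, gt_mask)
    return loss
```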
### Anchor Condition with Gaussian Spatial Attention
Without the inductive bias of convolution layers, transformer blocks have stronger representation power but are also more sensitive to input variance. Directly adding transformer blocks to the Diffusion Model causes a large variance across the outputs of individual runs, as shown by the experimental results in Section 3. To overcome this negative effect, we introduce the anchor condition operation to the Diffusion Model.
The anchor condition integrates the anchor, i.e., the decoded segmentation features of the Condition Model, into the encoder features of the Diffusion Model. We propose a Gaussian Spatial Attention to represent the uncertain nature of the segmentation features given by the Condition Model. Formally, consider integrating the last conditional segmentation feature \(f_{c}^{-1}\) into the first diffusion feature \(f_{d}^{0}\). Gaussian Spatial Attention can be expressed as:
\[f_{anc}=Max(f_{c}^{-1}*k_{Gauss},f_{c}^{-1}), \tag{2}\]
\[f_{d}^{{}^{\prime}0}=Sigmoid(f_{anc}*k_{Conv_{1\times 1}})\cdot f_{d}^{0}+f_{d}^{0}, \tag{3}\]
Figure 1: An illustration of MedSegDiff-V2. For clarity, the time step encoding and the skip connections in the UNet are omitted in the figure.
where \(*\) denotes a sliding-window kernel operation (convolution) and \(\cdot\) denotes element-wise multiplication. In Eqn. 2, we first apply a Gaussian kernel \(k_{Gauss}\) over \(f_{c}^{-1}\) to smooth the activation, as \(f_{c}^{-1}\) serves as an anchor but may not be completely accurate. The mean and variance of \(k_{Gauss}\) are learnable. We then select the maximum value between the smoothed map and the original feature map to preserve the most relevant information, resulting in a smoothed anchor feature \(f_{anc}\). In Eqn. 3, we integrate \(f_{anc}\) into \(f_{d}^{0}\) to obtain an enhanced feature \(f_{d}^{{}^{\prime}0}\). Specifically, we first apply a \(1\times 1\) convolution \(k_{Conv_{1\times 1}}\) to reduce the number of channels in the anchor feature to \(1\). Then, we use a sigmoid activation function on the anchor feature and apply it to each channel of \(f_{d}^{0}\), similar to the implementation of spatial attention [24]. Gaussian Spatial Attention extracts a rough anchor feature from the Condition Model and integrates it into the Diffusion Model. This provides the Diffusion Model with a correct range for predictions while also allowing it to further refine the results.
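The following PyTorch sketch renders Eqs. (2)-(3) under simplifying assumptions: only the spread of the Gaussian kernel is learned (the text also allows a learnable mean), and the kernel size and channel counts are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianSpatialAttention(nn.Module):
    """Sketch of Eqs. (2)-(3); kernel size and channels are illustrative."""
    def __init__(self, cond_ch, ksize=5):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(1))        # learnable spread of k_Gauss (mean fixed at 0 here)
        self.ksize = ksize
        self.to_map = nn.Conv2d(cond_ch, 1, kernel_size=1)   # k_Conv_{1x1}: collapse to 1 channel

    def gaussian_kernel(self):
        r = self.ksize // 2
        x = torch.arange(-r, r + 1, dtype=torch.float32)
        g = torch.exp(-(x ** 2) / (2 * torch.exp(self.log_sigma) ** 2))
        k2d = torch.outer(g, g)
        return (k2d / k2d.sum()).view(1, 1, self.ksize, self.ksize)

    def forward(self, f_c, f_d):
        # Eq. (2): smooth the conditional segmentation feature, keep the max response.
        k = self.gaussian_kernel().to(f_c).repeat(f_c.shape[1], 1, 1, 1)
        smoothed = F.conv2d(f_c, k, padding=self.ksize // 2, groups=f_c.shape[1])
        f_anc = torch.maximum(smoothed, f_c)
        # Eq. (3): 1x1 conv -> sigmoid gate, applied to the diffusion feature plus a residual.
        gate = torch.sigmoid(self.to_map(f_anc))
        return gate * f_d + f_d
```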
### Semantic Condition with SS-Former
We propose a novel transformer architecture, called Spectrum-Space Transformer (SS-Former), to effectively integrate the conditional segmentation embedding into the diffusion embedding. SS-Former is composed of several blocks that share the same architecture. Each block consists of two cross-attention-like modules. The first encodes the diffusion noise embedding into the condition semantic embedding, and the next module encodes the noise-blended semantic embedding into the diffusion noise embedding. This allows the model to learn the interaction between noise and semantic features and achieve a stronger representation.
Since the Diffusion Model predicts the redundant noise from the noisy mask input, there is a domain gap between its embedding and the conditional segmentation semantic embedding. This gap can lead to confusion when using matrix manipulations in a standard transformer. To address this challenge, we propose a novel spectrum-space attention mechanism. The key idea is to merge semantic and noise information in Fourier space, rather than Euclidean space. This allows for the separation and blending of components based on frequency affinity in different spectra. Formally, let \(c^{0}\) be the deepest feature embedding of the Condition Model and \(e\) that of the Diffusion Model. We first transfer \(c^{0}\) and \(e\) to Fourier space, denoted as \(F(c^{0})\) and \(F(e)\), respectively. Note that the feature maps are all patchified and linearly projected in accordance with the standard vision transformer method. Then we compute an affinity weight map over Fourier space taking \(e\) as the \(query\) and \(c^{0}\) as the \(key\), as represented by the following equation:
\[\mathcal{M}=a\,sin(w(F(c^{0})\mathcal{W}^{q})(F(e)\mathcal{W}^{k})^{T}), \tag{4}\]
where \(\mathcal{W}^{q}\) and \(\mathcal{W}^{k}\) are the learnable \(query\) and \(key\) weights in Fourier space. We then employ a periodic activation function to limit the representation spectrum in Fourier space, as a substitute for the standard activation applied in Euclidean space in standard self-attention. In the implementation, we use a sine function \(\sin\) with learnable amplitude \(a\) and frequency \(w\) as the constraint.
The affinity map is then transferred back to Euclidean space using inverse fast Fourier transform (IFFT) and applied to condition features in \(value\),
\[f=F^{-1}(\mathcal{M})(c^{0}\mathcal{W}^{v}), \tag{5}\]
where \(\mathcal{W}^{v}\) denotes the learnable value weights. We then apply the time embedding through an AdaIN normalization following the classic diffusion implementation [15], which normalizes the feature and then expands it using scale and shift parameters learned from the time embedding. This makes the transformer adaptive to the step information. We also use a Multi-layer Perceptron (MLP) to further refine the attention result, obtaining the final feature \(\tilde{c}^{0}\). The following attention module is symmetric to the first one, using the combined feature \(\tilde{c}^{0}\) as the query and the noise embedding \(e\) as the key and value, in order to transform the segmentation features to the noise domain. The transformed feature \(c^{1}\) will serve as the condition embedding for the next block.
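The sketch below is a schematic rendering of one spectrum-space attention module (Eqs. (4)-(5)); the complex parameterization of the Fourier-space weights, the placement of the FFT along the embedding dimension, and the omission of the AdaIN normalization and MLP refinement are our simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectrumSpaceAttention(nn.Module):
    """Schematic rendering of Eqs. (4)-(5), one half of an SS-Former block."""
    def __init__(self, dim):
        super().__init__()
        scale = dim ** -0.5
        self.Wq = nn.Parameter(scale * torch.randn(dim, dim, dtype=torch.cfloat))
        self.Wk = nn.Parameter(scale * torch.randn(dim, dim, dtype=torch.cfloat))
        self.Wv = nn.Parameter(scale * torch.randn(dim, dim))
        self.a = nn.Parameter(torch.ones(1))   # learnable amplitude of the sine constraint
        self.w = nn.Parameter(torch.ones(1))   # learnable frequency of the sine constraint

    def forward(self, c, e):
        # c: conditional semantic tokens, e: diffusion noise tokens, both (B, N, dim).
        Fc = torch.fft.fft(c.to(torch.cfloat), dim=-1)
        Fe = torch.fft.fft(e.to(torch.cfloat), dim=-1)
        q, k = Fc @ self.Wq, Fe @ self.Wk
        # Eq. (4): affinity in Fourier space, constrained by a periodic (sine) activation.
        M = self.a.to(torch.cfloat) * torch.sin(self.w.to(torch.cfloat) * (q @ k.transpose(-2, -1)))
        # Eq. (5): back to Euclidean space (IFFT), then weight the conditional values.
        M_real = torch.fft.ifft(M, dim=-1).real
        return M_real @ (c @ self.Wv)
```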
## 3 Experiments
### Dataset
We conduct experiments on four different medical image segmentation datasets in total. One dataset, multi-organ segmentation in abdominal CT images, is used to verify the general segmentation performance. We use the public AMOS2022 [12] dataset, from which we employ 200 multi-contrast abdominal CT scans with sixteen anatomies manually annotated for abdominal multi-organ segmentation. The other three datasets are used to verify the model performance on multi-modal images: optic-cup segmentation from fundus images, brain tumor segmentation from MRI images, and thyroid nodule segmentation from ultrasound images. The experiments on glaucoma, brain tumor, and thyroid nodule diagnosis are conducted on the REFUGE-2 dataset [4], the BraTs-2021 dataset [1], and the DDTI dataset [17], which contain 1200, 2000, and 8046 samples, respectively. The datasets are publicly available with segmentation labels. Train/validation/test sets are split following the default settings of each dataset.
### Implementation Details
The primary architecture of MedSegDiff is a modified ResUNet [26], which we implement using a ResNet encoder followed by a UNet decoder. The specific network configuration can be found in [25]. All experiments were conducted using the PyTorch platform and trained/tested on 4 NVIDIA A100 GPUs. All images were uniformly resized to a resolution of 256\(\times\)256 pixels. The networks were trained in an end-to-end manner using the AdamW [14] optimizer with a batch size of 32. The initial learning rate was set to 1 \(\times 10^{-4}\).
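A minimal sketch of the reported optimization setup; the model below is a placeholder rather than the modified ResUNet, and data loading and the diffusion loop are omitted.

```python
import torch
import torch.nn as nn
from torch.optim import AdamW

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # placeholder, not the modified ResUNet
optimizer = AdamW(model.parameters(), lr=1e-4)       # AdamW, initial learning rate 1e-4
images = torch.randn(32, 3, 256, 256)                # batch size 32, inputs resized to 256x256
loss = model(images).mean()                          # stand-in objective for illustration only
loss.backward()
optimizer.step()
```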
### Main Results
To verify the general medical image segmentation performance, we compare MedSegDiff-V2 with SOTA segmentation methods on the multi-organ segmentation dataset AMOS2022. The quantitative results are shown in Table 1. In the table, we compare with segmentation methods that are widely used and well recognized in the community, including the CNN-based method nnUNet [10], the transformer-based methods TransUNet [2], UNetr [8], and Swin-UNetr [7], and the diffusion-based methods EnsDiff [23] and MedSegDiff [25]. We also compare with a simple combination of diffusion and transformer models. We replace the UNet model in MedSegDiff with TransUNet and denote it as 'MedSegDiff + TransUNet' in the table. We evaluate the segmentation performance by Dice score. The compared methods are all implemented with their default settings.
As seen in Table 1, advanced network architectures and sophisticated designs are crucial for achieving good performance. With regard to network architecture, well-designed transformer-based models such as Swin-UNetr outperform the carefully designed CNN-based model, nnUNet. The diffusion-based model MedSegDiff again outperforms the transformer-based models on most of the organs. However, network architecture alone is not the sole determining factor for performance. For example, the well-designed CNN-based model nnUNet considerably outperforms the transformer-based models TransUNet and UNetr in the table. This is also true for diffusion-based models. We can see that a straightforward adoption of the diffusion model for medical image segmentation, i.e., EnsDiff, achieves unsatisfactory performance. A simple combination of transformer and diffusion model, i.e., MedSegDiff+TransUNet, obtains even worse performance than the standard MedSegDiff. This is because the transformer is more sensitive to adaptive conditions and extracts more delicate semantic features that diverge from the diffusion backbone. By introducing the anchor condition and SS-Former in MedSegDiff-V2, the diffusion + transformer model overcomes these challenges and shows superior performance. We compare it with the diffusion-based models, i.e., EnsDiff and MedSegDiff, using the same number of ensemble runs (all set to five), and it produces more stable and accurate results as shown in the table.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c c c c} \hline \hline Methods & Spleen & R.Kid & L.Kid & Gall. & Eso. & Liver & Stom. & Aorta & IVC & Panc. & RAG & LAG & Duo. & Blad. & Pros. & Avg \\ \hline TransUNet & 0.881 & 0.928 & 0.919 & 0.813 & 0.740 & 0.973 & 0.832 & 0.919 & 0.841 & 0.713 & 0.638 & 0.565 & 0.685 & 0.748 & 0.692 & 0.792 \\ Baseline (EnsDiff) & 0.905 & 0.918 & 0.904 & 0.732 & 0.723 & 0.947 & 0.738 & 0.915 & 0.838 & 0.704 & 0.677 & 0.618 & 0.715 & 0.673 & 0.680 & 0.779 \\ UNetr & 0.926 & 0.936 & 0.918 & 0.785 & 0.702 & 0.969 & 0.788 & 0.893 & 0.828 & 0.732 & 0.717 & 0.554 & 0.658 & 0.683 & 0.722 & 0.784 \\ Swin-UNetr & 0.959 & 0.960 & 0.949 & 0.894 & 0.827 & **0.979** & 0.899 & 0.944 & 0.899 & 0.828 & 0.791 & 0.745 & 0.817 & **0.875** & 0.841 & 0.880 \\ nnUNet & 0.951 & 0.962 & 0.939 & 0.889 & 0.843 & 0.962 & 0.870 & 0.958 & 0.865 & 0.835 & 0.801 & **0.768** & 0.835 & 0.832 & 0.836 & 0.876 \\ MedSegDiff & 0.963 & 0.965 & 0.953 & 0.917 & 0.846 & 0.971 & 0.906 & 0.952 & 0.918 & 0.854 & 0.803 & 0.751 & 0.819 & 0.868 & 0.855 & 0.889 \\ + TransUNet & 0.941 & 0.932 & 0.921 & 0.934 & 0.813 & 0.946 & 0.867 & 0.921 & 0.880 & 0.821 & 0.793 & 0.528 & 0.788 & 0.813 & 0.837 & 0.762 \\ \hline MedSegDiff-V2 & **0.971** & **0.969** & **0.964** & **0.932** & **0.864** & 0.976 & **0.934** & **0.968** & **0.925** & **0.871** & **0.815** & 0.762 & **0.827** & 0.873 & **0.871** & **0.901** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The comparison of MedSegDiff-V2 with SOTA segmentation methods over AMOS dataset evaluated by Dice Score. Best results are denoted as **bold**.
Figure 2 presents a qualitative comparison of MedSegDiff-V2 and other competitive methods. It can be observed that MedSegDiff-V2 segments more accurately on parts that are difficult to recognize by the human eye. Due to its ability to benefit from the superior generation capability of the diffusion model and the semantic representation capability of the transformer, it can generate segmentation maps with precise and accurate details, even in low-contrast or ambiguous areas.
We also compare our method to state-of-the-art (SOTA) segmentation methods proposed for three specific tasks with different image modalities. The main results are presented in Table 2. In the table, ResUnet [26] and BEAL [21] are used for optic disc and cup segmentation, TransBTS [22] and EnsemDiff [23] are used for brain tumor segmentation, and MTSeg [5] and UltraUNet [3] are used for thyroid nodule segmentation. We also compare to general medical image segmentation methods on these three datasets. The segmentation performance is evaluated using the Dice score and IoU.
As seen in Table 2, MedSegDiff-V2 outperforms all other methods on three different tasks, showcasing its ability to generalize to various medical segmentation tasks and image modalities. Compared to the UNet-based MedSegDiff, it improves by 2.0% on Optic-Cup, 1.9% on Brain-Tumor, and 3.9% on Thyroid Nodule in terms of the Dice score, illustrating the effectiveness of the transformer-based backbone. Additionally, when compared to MedSegDiff with TransUNet, it overcomes compatibility issues and significantly improves performance on all three tasks, demonstrating the effectiveness of the proposed anchor condition and SS-Former.
Figure 2: The visual comparison with SOTA segmentation models.
### Ablation Study
We conducted a comprehensive ablation study to verify the effectiveness of the proposed anchor conditioning and SS-Former. The results are shown in Table 3, where Anc.Cond. denotes anchor conditioning. We evaluate the performance using the Dice score (%) on all three tasks. The models were run five times for the ensemble. From the table, we can see that Anc.Cond. significantly improves the vanilla diffusion model, with improvements of 1.6% on optic-cup, 1.2% on brain tumor, and 2.4% on thyroid nodule segmentation. SS-Former learns the interaction between noise and semantic features with a vision transformer-based architecture, further improving the segmentation results. It improves MedSegDiff-V2 by over 1% on all three tasks and achieves new state-of-the-art performance.
## 4 Conclusion
In this paper, we enhance the diffusion-based medical image segmentation framework by incorporating the transformer mechanism into the original UNet back
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{Optic-Cup} & \multicolumn{2}{c|}{Brain-Tumor} & \multicolumn{2}{c}{Thyroid Nodule} \\ \hline & Dice & IoU & Dice & IoU & Dice & IoU \\ \hline ResUnet & 80.1 & 72.3 & - & - & - & - \\ BEAL & 83.5 & 74.1 & - & - & - & - \\ \hline TransBTS & - & - & 87.6 & 78.3 & - & - \\ EnsemDiff & - & - & 88.7 & 80.9 & - & - \\ \hline MTSeg & - & - & - & - & 82.3 & 75.2 \\ UltraUNet & - & - & - & - & 84.5 & 76.2 \\ \hline UNetr & 83.2 & 73.3 & 87.3 & 80.6 & 81.7 & 73.5 \\ Swin-UNetr & 84.3 & 74.5 & 88.4 & 81.8 & 83.5 & 74.8 \\ nnUNet & 84.9 & 75.1 & 88.2 & 80.4 & 84.2 & 76.2 \\ TransUNet & 85.6 & 75.9 & 86.6 & 79.0 & 83.5 & 75.1 \\ MedSegDiff & 85.9 & 76.2 & 88.9 & 81.2 & 84.8 & 76.4 \\ MedSegDiff+TransUNet & 82.1 & 72.6 & 86.1 & 78.0 & 79.2 & 71.4 \\ \hline MedSegDiff-V2 & **87.9** & **80.3** & **90.8** & **83.4** & **88.7** & **81.5** \\ \hline \hline \end{tabular}
\end{table}
Table 2: The comparison of MedSegDiff with SOTA segmentation methods. Best results are denoted as **bold**.
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline Anc.Cond. & SS-Former & OpticCup & BrainTumor & ThyroidNodule \\ \hline & & 84.6 & 88.2 & 84.1 \\ ✓ & & 86.2 & 89.4 & 86.5 \\ ✓ & ✓ & **87.9** & **90.8** & **88.7** \\ \hline \hline \end{tabular}
\end{table}
Table 3: An ablation study on anchor conditioning and SS-Former. Dice score(%) is used as the metric.
bone, called MedSegDiff-V2. We propose an anchor condition to ensure the stability of the model and a novel SS-Former architecture to learn the interaction between noise and semantic features. The comparative experiments were conducted on 18 organs and 4 medical image segmentation datasets with different image modalities and our model outperformed previous state-of-the-art methods. As the first transformer-based diffusion model for medical image segmentation, we believe MedSegDiff-V2 will serve as a benchmark for future research. |
2302.08799 | Wizard of Errors: Introducing and Evaluating Machine Learning Errors in
Wizard of Oz Studies | When designing Machine Learning (ML) enabled solutions, designers often need
to simulate ML behavior through the Wizard of Oz (WoZ) approach to test the
user experience before the ML model is available. Although reproducing ML
errors is essential for having a good representation, they are rarely
considered. We introduce Wizard of Errors (WoE), a tool for conducting WoZ
studies on ML-enabled solutions that allows simulating ML errors during user
experience assessment. We explored how this system can be used to simulate the
behavior of a computer vision model. We tested WoE with design students to
determine the importance of considering ML errors in design, the relevance of
using descriptive error types instead of confusion matrix, and the suitability
of manual error control in WoZ studies. Our work identifies several challenges,
which prevent realistic error representation by designers in such studies. We
discuss the implications of these findings for design. | Anniek Jansen, Sara Colombo | 2023-02-17T10:43:34Z | http://arxiv.org/abs/2302.08799v1 | # Wizard of Errors: Introducing and Evaluating Machine Learning Errors in Wizard of Oz Studies
###### Abstract.
When designing Machine Learning (ML) enabled solutions, designers often need to simulate ML behavior through the Wizard of Oz (WoZ) approach to test the user experience before the ML model is available. Although reproducing ML errors is essential for having a good representation, they are rarely considered. We introduce Wizard of Errors (WoE), a tool for conducting WoZ studies on ML-enabled solutions that allows simulating ML errors during user experience assessment. We explored how this system can be used to simulate the behavior of a computer vision model. We tested WoE with design students to determine the importance of considering ML errors in design, the relevance of using descriptive error types instead of confusion matrix, and the suitability of manual error control in WoZ studies. Our work identifies several challenges, which prevent realistic error representation by designers in such studies. We discuss the implications of these findings for design.
Wizard of Oz, Machine Learning, Machine Learning Errors, User Experience Design, User Experience Analysis, Interaction Design, Computer Vision, Prototyping Methods
Previous studies have shown that ML errors can greatly impact the UX (Bott and Laviola, 2017; Laviola, 2017; Laviola, 2017), the mental models users form to interact with the system, and their trust in the ML model (Kang et al., 2018). Different error types could differently influence the UX, e.g. categorizing a Golden Retriever as a Labrador might be perceived as an understandable ML error by users, while mistaking a dog for a bird might cause more frustration and distrust in the ML-based system. Considering different error types and testing their effects on the UX early on would inform design decisions to ensure that the system can 'fail gracefully' (Kang et al., 2018), for instance giving users the possibility to report an error, or providing information on the types of errors that might occur, and their causes.
In this paper, we introduce Wizard of Errors (WoE), a web interface that facilitates the inclusion of ML errors in WoZ studies. This interface can be used in WoZ studies where the wizard can recognize the ground truth, i.e. the correct outcome of the ML prediction, from the input data. This happens, for instance, in object recognition, where the designer is able to recognize an object correctly, the same way an ML model is expected to do. We tested the WoE prototype with 10 design students to investigate if designers can successfully mirror ML errors during a WoZ study, in order to have a more representative simulation of the UX in the early phases of a design process. This study aims to contribute to the current discussion on designing with ML, by showing how ML errors can be included and tested in WoZ studies, and by uncovering designers' challenges and misconceptions in mirroring an ML model behavior.
## 2. Machine learning failures and errors
ML errors can occur in many forms and can have different origins. In this paper, we focus on errors that may occur during the user's interaction with ML-enabled solutions due to an incorrect ML prediction. During the remainder of this paper, the term _ML error_ will be used to describe such types of errors.
A standard method to report ML errors is a confusion matrix. Here, errors are described as False Positives (FPs) and False Negatives (FNs). However, the confusion matrix, especially for a multi-class classification model (where the prediction is not a binary yes/no, but is based on three or more labels) (Kolmogorov, 1955) is hard to interpret for ML non-experts since the terminology is confusing and the reading direction and structure are unintuitive (Kolmogorov, 1955). To overcome these issues and make errors more understandable to designers, we adopted the terminology introduced by Bott and Laviola (2017) where they differentiate between four types of errors. The definition of the errors is tailored to their use case - classifying handwritten mathematical equations, but it can be generalized into the following definitions:
* _Segmentation error_: incorrect segmentation of the data that results in an incorrect prediction (e.g. recognizing a face in the background of a photo instead of the person in the foreground)
* _Similarity error_: incorrect prediction that is somewhat related to the correct answer (e.g. recognizing sugar in a photo as salt)
* _Wild errors_: incorrect prediction that appears to have no relationship with the correct answer (e.g. recognizing sugar in a photo as a carrot)
* _No-recognition error_: failing to give a prediction (e.g. a face recognition system not responding when a face appears in front of the camera)
These descriptive error types can be used to create an error repository for an ML model, which contains all possible error options for each correct label. Such a repository can then be used in a WoZ
Figure 1. Prediction interface with explanations at the left-hand side
study. Compared to the confusion matrix, these four error types are expected to be easier to understand and to better support designers in testing the users' response to different types of errors.
## 3. The Wizard of Errors Interface
To enable the inclusion of the four ML error types in WoZ studies, we developed _Wizard of Errors_ (WoE). WoE is a web interface, which aims to help designers test the effects of different ML errors on users' experience while interacting with an ML-enabled system. The WoE application can be connected to different prototyping platforms (e.g. Processing, JavaScript, Python, Arduino) to perform WoZ studies. During the study, the WoE enables designers to simulate the predictions of supervised ML models by selecting either correct or incorrect labels (i.e. ML errors). For each correct label, four incorrect labels can be selected, based on the four different types of errors. Designers can also set an ML model target accuracy, so they can adjust the number of errors dynamically throughout the test, in order to reach the desired accuracy they intend to assess.
The WoE application consists of two elements: (i) _the error repository_; (ii) _The WoE interface_. The error repository needs to be created by the designer in advance, as part of the WoZ setup, and it consists of a table that includes all the possible correct labels, and the corresponding incorrect labels - one for each of the four error types (see Table 1). Once the error repository is uploaded to the WoE interface, the wizard-designer can decide to select a target accuracy, which will guide them in selecting the right amount of ML errors (see Figure 1(a)).
The WoE interface is used during the WoZ study to simulate different errors during the user's interaction with the ML-enabled system (see Figure 1 for an overview of the system). The WoE interface is controlled by the wizard-designer, who can insert ML errors during the WoZ test, in order to determine how users react to potential ML errors. The interface allows designers to test three elements that potentially affect the overall UX: (i) the accuracy of the ML model; (ii) the types of ML errors that can occur; (iii) the confidence score for the predictions (a number between 0-1 or 0-100% that indicates the probability associated to that label. A high confidence score does not mean that the label is correct, just that the machine associates that label to the input with a high probability).
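A minimal sketch of these two ingredients, assuming the error repository is stored as a CSV with the column names of Table 1; the suggestion heuristic for steering the running accuracy toward the target is our own illustration, whereas in the actual tool the wizard chooses errors manually.

```python
import csv
import random

# Column names follow the error repository layout (Table 1).
ERROR_TYPES = ["segmentationError", "similarityError", "wildError", "noRecognitionError"]

def load_repository(path):
    """Load one row per correct label, keyed by the correctAnswer column."""
    with open(path, newline="") as f:
        return {row["correctAnswer"]: row for row in csv.DictReader(f)}

class WizardSession:
    def __init__(self, repository, target_accuracy=0.7):
        self.repo = repository
        self.target = target_accuracy
        self.correct = 0
        self.total = 0

    def suggest(self):
        """Suggest 'correct' or an error type so the running accuracy tracks the target."""
        running = self.correct / self.total if self.total else 1.0
        return "correct" if running < self.target else random.choice(ERROR_TYPES)

    def record(self, ground_truth, choice):
        """Record the wizard's choice and return the label shown to the user."""
        self.total += 1
        if choice == "correct":
            self.correct += 1
            return ground_truth
        return self.repo[ground_truth][choice]   # e.g. 'salt' as a segmentation error for flour
```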
## 4. Method
To assess if designers could mirror the behavior of an ML model in a WoZ study, we tested the WoE interface with 10 design students. We aimed to investigate if designers would be able to simulate the ML behavior, including its errors, through the WoE interface, and what issues they would encounter. As a design case for our test, we used a smart kitchen countertop inspired by the IKEA concept kitchen (Kal
### Procedure
The study consisted of three parts and was conducted entirely online through a video-conference platform. The study protocol complied with the university Ethical Review Board procedures and all data were managed in accordance with GDPR regulations.
_Part1- pre-experiment interview._ For each subject, a pre-experiment semi-structured interview and a short survey were used to get insights into the participant's experience with and knowledge of ML errors.
_Part II - the experiment._ The design case of the smart countertop was introduced to the participant, together with the aim of the WoZ study, i.e. simulating the behavior of an ML system for image recognition, including its errors. After describing the WoE interface, participants were asked to set the target accuracy score to 50% and to start the video. The video showed an ingredient being poured into a bowl, and the task of the participant was to classify that ingredient, to simulate the ML prediction. After the ingredient was added, the video paused. The participant was then asked to (i) select the ground truth from the upper list (see Figure 1), (ii) choose the confidence score for their prediction, and (iii) simulate the ML prediction by selecting either the correct label or one of the ML errors. If the correct label was chosen, a check mark would appear next to the ingredient in the video. In case of an error, a cross mark would be shown. This was meant to help the participant keep track of the number of correct predictions and errors they had simulated during the study. After recording the prediction, the video would continue by showing the next ingredient. The same procedure was repeated for 12 ingredients in total.
The WoE interface also showed the real-time accuracy score, based on the number of (in)correct predictions made up to that point. This allowed the participant to adjust their behavior over time, in order to reach the pre-defined target accuracy, i.e. make more correct predictions if the accuracy score was below the target accuracy score, or more incorrect predictions if the accuracy score was above the target. Once the video ended, the participant was asked to follow the same procedure, watching the video again, this time with a target accuracy score set to 70%. Participants were asked to think aloud during the whole study, to explain the reasoning behind their choices and actions. All participants' interactions with WoE (i.e. logs) were recorded for subsequent analysis.
_Part III - post-experiment interview._ At the end of the experiment, a semi-structured interview was conducted and participants filled out a post-experiment survey. The interview and the survey were aimed to assess the participant's understanding of the error types, their view on how different error types could influence the UX, and what error they found most difficult to make.
## 5. Preliminary Results
Results stem from a thematic analysis of qualitative data (i.e. audio recordings and interviews), as well as from the analysis of quantitative data (i.e. Likert scales from the survey and logs of the WoE system). Since the sample was small and homogeneous, we see these results as interesting initial findings that need to be validated through a larger study.
### Machine Learning errors
Participants indicated that before working with ML, they were unaware of possible ML errors, but they became familiar with them during their first project. They assessed errors as increasingly important throughout the design process (Ideation phase: M=2.9, SD= 1.5; Realization phase: M=6.4, SD= 0.5). This assessment barely changed after the experiment (Ideation phase: M=3.0, SD= 1.7; Realization phase: M=6.5, SD= 0.5).
The descriptive types of ML errors used in this experiment were new to all participants. While most ML errors were clear, mainly due to their self-explanatory names, the segmentation error was more difficult to understand and only two participants could correctly define it afterwards. During the study, the participants could read the explanation by hovering over the buttons, but this option was hardly used.
Moreover, participants expected the different error types to have different impacts on the UX. While the similarity error was seen by most as less harmful because it was expected to be understandable by humans, the other errors were considered to have a more negative impact on the UX. Participants expected these errors to decrease the trust in the model or elicit frustration. _"But if I put a carrot on the table and it is like 'Oh, strawberry' then I would be like, this is a stupid system; if it cannot even recognize such a simple thing then it must just work horribly"_ (P2).
### Mirroring Machine Learning
The main focus of the study was to evaluate if participants could mirror the behavior of an ML model by simulating ML errors. While all participants had worked with ML before and trained models themselves, their behavior did not match that of an ML model. Several misconceptions surfaced during the experiment, as explained below. Nevertheless, the WoE interface was successful in encouraging wizards to make ML errors and to reach the target accuracy score, although they were hesitant to make ML errors in the beginning (for a target accuracy of 50\%: \(M=59.10\), \(SD=10.47\); for 70\%: \(M=68.72\), \(SD=9.67\)). Participants claimed that WoE was a useful tool to test the interactions in an early stage: "_I think it would be helpful to explore the opportunities and limitations of ML in combination with a specific project. Normally this is difficult to do because in an early stage of a project a full system is often not working already_." (P6). However, they also indicated that it was hard
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline ID & correctAnswer & segmentationError & similarityError & wildError & noRecognitionError \\ \hline
0 & oats & cinnamon & flour & carrots & null \\
1 & flour & salt & oats & maple syrup & null \\ \hline \end{tabular}
\end{table}
Table 1. A selection from the error repository
to assess a realistic number of errors to include (P9) and that they would like to have something that could be applicable even earlier in the design process, to evaluate the potential impact of errors (P3, P4 and P5).
_Logical errors are easier to make._ The ML errors made during the experiment showed that participants mainly selected similarity errors and hardly any wild errors (see Figure 3). From the think-aloud transcripts, it became clear that participants based their choices mainly on the label associated to each error type and not the error type itself (e.g. choosing the'sugar' label for salt, instead of choosing a similarity error). By using labels, participants often selected an understandable or logical error from a human perspective. They also expressed it felt unnatural to send an unrelated error. As participant 9 stated: _'For me as a wizard, a no-recognition error would be weird to make because I know what it is and rationalizing why it would not recognize it at all would be a bit weird, in my opinion.'_
_Projecting personal knowledge._ Similarly to the tendency to select logical errors, participants based the behavior of the ML model on their own knowledge. This was mainly visible when coconut oil was shown in the video, and 85% of the predictions were an ML error - while the mean for all other ingredients, excluding coconut oil, was 31.8%. As participant 7 put it: _'I myself have never used coconut oil before, so I am not really sure if it is an oil or if it looks like this.'_
The tendency to rely on their own knowledge could also be seen in participants' assumption that common ingredients (in their food culture) are easier to recognize: _'If there are ingredients that are exotic, then it might be easier to make [a no-recognition error]'_ (P1). Moreover, they did not consider that ingredients can exist in different shapes and forms. Eggs and carrots were seen as very easy to classify for the machine because they had a distinctive shape, but participants did not take into account that they could exist in many forms and shapes, which would make it harder for the ML model to recognize them correctly.
_Higher confidence score for correct answers._ To check if participants gave higher confidence scores to correct predictions, a linear regression was calculated. A significant regression was found (\(F(1,238)=35.618,p<.001\)), with an \(R^{2}\) of \(.130\) and an \(R^{2}_{Adjusted}\) of \(.127\), indicating that the error type (correct vs incorrect) is a significant predictor for the confidence score (\(\beta=.361,t(238)=5.968,p<.001\)) and that the confidence score increased when a correct prediction was selected. This differs from a real ML model, where a confidence score can be equally high for incorrect predictions.
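For reference, a schematic recreation of this analysis on synthetic data (not the study logs); note that scipy's linregress returns the raw slope, whereas the \(\beta\) reported above is standardized.

```python
import numpy as np
from scipy import stats

# Binary predictor: 1 = correct prediction, 0 = error; confidence in 0-100.
# The data below are placeholders generated only to illustrate the analysis.
rng = np.random.default_rng(0)
is_correct = rng.integers(0, 2, size=240)
confidence = 60 + 15 * is_correct + rng.normal(0, 20, size=240)

res = stats.linregress(is_correct, confidence)
print(f"slope = {res.slope:.3f}, p = {res.pvalue:.4f}, R^2 = {res.rvalue**2:.3f}")
```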
_ML model self-awareness._ Finally, participants expected the model to be aware that it was making a mistake and mentioned that it would be useful if it would indicate to the final user what type of error it was making: _'I think for the user it is very helpful to know that there is an error, and they know what kind of error it is.'_ (P3). A possible explanation for this misconception might be that the participants had only trained and tested an ML model and not deployed it live, since in training and testing supervised classification models the correct label is available and makes it possible to detect errors - although not the error types.
## 6. Discussion
The preliminary results presented in the previous section provide insights into what aspects of mirroring an ML model are difficult for wizard-designers. These findings have implications both for designing WoZ studies for ML applications, and for designing with ML in general.
When designing WoZ studies to test user experiences and interactions based on ML, one needs to take into account the fact that wizards will not be able to ignore their human rationale, therefore their behavior will not be a good representation of the ML behavior. Our study showed that the WoE interface and the ML errors can already contribute to a better representation of ML models in WoZ studies and can trigger designers to reflect on the importance of testing different types of ML errors. However, it also is clear that this is not sufficient, as designers' misconceptions prevent them from properly mirroring ML models. One way to overcome this limitation is to recommend wizards certain error types during the use of the WoE interface, for instance in case certain error types are more rarely selected, compared to others.
Another option is to allow designers to pre-define how many errors for each type will be sent by the WoE interface, and let the interface randomly assign those errors, while asking designers only to select the ground truth. While limiting designers' ability to adjust the number and type of errors in real-time, this would make the study more realistic.
Figure 3. Overview of the predictions for each ingredient
Next to that, designers not only need to test different error types, but also to vary their number and moment of occurrence to explore the effect of all these factors on the user's experience and their trust in the ML predictions. While this study only focused on using WoE in the early phase of a design process, ML errors can also be introduced during other phases. For instance, in the ideation and conceptualization phase, cards with the error types, potential consequences and options for adjusting the design can help to consider errors from the beginning. When designing a UX wireframe, the designer can go over each step and consider what would happen if each type of error occurred, to check if their interface is able to fail gracefully (Krishnan et al., 2017) or if it needs to be adjusted so that it can.
## 7. Limitations and Future Work
A limitation of the WoE interface, and WoZ testing with ML in general, is that it is only applicable in cases where the humans are able to make the correct predictions - i.e. recognize the ground truth, during the study. However, this still leaves out a considerable number of ML models within supervised learning. This study only covered object recognition, but we expect the error types to be also transferable to other subsections of computer vision and potentially also to other types of supervised classification. Further research should explore and validate the use of ML in these applications.
Another limitation of this study is that it was conducted with students from one Industrial Design faculty, which could give a biased view on how design students, and designers in general, approach and use ML. Therefore, we consider these findings as preliminary results and future studies should be conducted with a larger and more diverse sample to confirm their validity. Furthermore, the WoE interface should be tested in a WoZ study with actual users, to gather more insights on designers' behaviors in a real context.
## 8. Conclusion
Using ML in prototypes during the early phases of a design process is challenging. In this study, we introduced Wizard of Errors (WoE), a prototyping tool for conducting WoZ studies on ML-enabled interactions. The WoE interface facilitates the inclusion of ML errors into WoZ studies, since these are essential for UX assessment but are currently rarely included. In WoE we used four descriptive error types instead of the confusion matrix and evaluated with design students if ML errors are important to consider and how relevant the four error types are in comparison to the confusion matrix. Moreover, during a WoZ simulation, we evaluated if it is possible for a designer to simulate an ML model in terms of ML error behavior. The error types showed to have potential to be used during WoZ studies, but we also identified several challenges that still prevent the designer from realistic error representation in WoZ. In this study, we designed and tested the WoE interface to embed ML errors in WoZ studies, and provide preliminary knowledge that can help both design researchers and practitioners in this field to consider ML errors as a regular component of the design process for ML-enabled solutions.
|
2304.03224 | On the renormalization group fixed point of the two-dimensional Ising
model at criticality | We analyze the renormalization group fixed point of the two-dimensional Ising
model at criticality. In contrast with expectations from tensor network
renormalization (TNR), we show that a simple, explicit analytic description of
this fixed point using operator-algebraic renormalization (OAR) is possible.
Specifically, the fixed point is characterized in terms of spin-spin
correlation functions. Explicit error bounds for the approximation of continuum
correlation functions are given. | Tobias J. Osborne, Alexander Stottmeister | 2023-04-06T16:57:28Z | http://arxiv.org/abs/2304.03224v1 | # On the renormalization group fixed point of the two-dimensional Ising model at criticality
###### Abstract
We analyze the renormalization group fixed point of the two-dimensional Ising model at criticality. In contrast with expectations from tensor network renormalization (TNR), we show that a simple, explicit analytic description of this fixed point using operator-algebraic renormalization (OAR) is possible. Specifically, the fixed point is characterized in terms of spin-spin correlation functions. Explicit error bounds for the approximation of continuum correlation functions are given.
_Introduction.--_The statistical mechanics of classical lattice systems continues to present fascinating and remarkable physics. The stochastic geometry exhibited by models as fundamental and elementary as the Ising model [1] exhibits a beautiful structure whose active study persists to the current day [2]. Most intriguing here are the critical phenomena of the model as it approaches a phase transition [3]. Applications of the Ising model and its generalisations range from superconductivity [4], fault-tolerant quantum computation [5], high energy physics [6], to genetics [7; 8] and the social sciences [9] and beyond. The two-dimensional case of the Ising model is one of the most well-studied systems in statistical physics, with nearly 80 years of history dating back at least to 1944, with the celebrated work of Lars Onsager [10], who solved the model on a square lattice in the absence of an external magnetic field. This solution is the cornerstone of much of modern statistical physics, and thereby the Ising model has become the benchmark for analytic and numerical methods alike.
During the past decade tensor networks [11] have risen to prominence as a powerful tool to study complex systems. These have a rich history originating in the works of Kadanoff [12; 13] and Wilson [14; 15], the density matrix renormalization group [16], and branching out into a multitude of methods with a wide variety of applications from 2D systems through to models with anyonic excitations. One fascinating area of such works applies modern tensor-network techniques to classical models of statistical physics. This was arguably revolutionized by the tensor renormalization group (TRG) of Levin-Nave [17] having a wide range of applications [18; 19; 20; 21], which has been refined in various forms, in particular to deal with entanglement of local degrees of freedom such as tensor entanglement-filtering renormalization group [22; 23], high-order tensor renormalization group [24; 25; 26; 27; 28; 29], tensor network renormalization (with or without positivity) [30; 31; 32]. Here impressive numerical results suggest the general applicability of the TRG, and relatives such as tensor network renormalization, as a general purpose method for investigating partition functions of classical lattice models. Although the TRG does not flow to a finite-bond-dimension fixed point at criticality - i.e., an infinite bond dimension is required to express the fixed-point tensor - it is still useful for the study of critical phenomena. The goal of explicitly computing fixed-point tensors for critical systems - closely related to the approximation of continuum limits - is still an outstanding challenge for tensor-network methods.
The desire for an explicit RG capable of describing the continuum limit of lattice discretizations of quantum field theories has led to the recent development of operator algebraic renormalization (OAR) [33; 34; 35; 36; 37; 38; 39; 40]. This emerging RG method is closely related to tensor network methods such as the multi-scale entanglement renormalization ansatz (MERA) [41; 42; 43], and has enjoyed notable recent successes in the computational and analytic approximation of a variety of quantum field theories, from conformal field theories to higher-dimensional models. It is an intriguing open question to determine whether OAR is applicable in the context of classical criticality and, if so, whether it can furnish any information about the fixed-point tensor at phase transitions.
In this Letter we demonstrate that OAR is capable of exactly representing critical points of classical lattice models. To do this we generalize OAR to apply to partition functions of classical lattice models and analytically compute the action of the OAR group on the transfer operator of the 2D Ising model. We obtain thereby an explicit and analytic representation of the fixed-point tensor. In accordance with expectations arising in previous TRG studies we find that this tensor requires an infinite bond dimension.
_Basics of 2d Ising._ The two-dimensional anisotropic Ising model on an \(N\times M\) square lattice with periodic boundary conditions can be naturally formulated as a tensor network (see Fig. 1), i.e. its canonical partition function,
\[Z_{MN} =\sum_{\{\sigma\},\{\mu\}}\prod_{k=-N}^{N-1}\prod_{j=-M}^{M-1}A_{\mu_{j,k}\mu_{j+1,k}\sigma_{j,k}\sigma_{j,k+1}}, \tag{1}\]
is given in terms of the tensor,
\[A_{\mu\mu^{\prime}\sigma\sigma^{\prime}} =\delta_{\mu,\sigma}e^{K_{1}\mu\mu^{\prime}}e^{K_{2}\sigma\sigma ^{\prime}}, \tag{2}\]
with \(\mu,\mu^{\prime},\sigma,\sigma^{\prime}\in\{\pm 1\}\) as well as horizontal and vertical coupling constants \(K_{1},K_{2}\). Spin-spin and other correlation functions are conveniently expressed using the horizontal transfer matrix \(V_{M}\) (see Fig. 2) naturally given in the \(\sigma^{(3)}\)-basis [44; 45]:
\[\langle e_{\sigma^{\prime}},V_{M}e_{\sigma}\rangle=\sum_{\{\mu\}}\prod_{j=-M}^{M-1}A_{\mu_{j}\mu_{j+1}\sigma_{j}\sigma^{\prime}_{j}}, \tag{3}\]
where \(e_{\sigma}\!=\!\otimes_{j=-M}^{M}\!e_{\sigma_{j}}\), \(\sigma_{j}^{(3)}e_{\sigma_{j}}\!=\!\sigma_{j}e_{\sigma_{j}}\). As an operator on the Hilbert space \(\mathcal{H}_{M}=\otimes_{j=-M}^{M}\mathbb{C}^{2}\), associated with each row of the lattice, the transfer matrix \(V_{M}\) takes the form,
\[V_{M}=C(2K_{2})^{\frac{M}{2}}\,e^{K_{1}\sum_{j=-M}^{M-2}\sigma_{j}^{(3)}\sigma_{j+1}^{(3)}}\,e^{K_{2}^{*}\sum_{j=-M}^{M-1}\sigma_{j}^{(1)}}, \tag{4}\]
where \(\tanh(K_{2}^{*})=e^{-2K_{2}}\) and \(C(K_{2})=2\sinh(2K_{2})\), which decomposes into an operator associated with the vertical couplings, \(V_{M}^{(1)}=(2\sinh(2K_{2}))^{\frac{M}{2}}e^{K_{2}^{*}\sum_{j=-M}^{M-1}\sigma_{j}^{(1)}}\), and one associated with the horizontal couplings, \(V_{M}^{(3)}=e^{K_{1}\sum_{j=-M}^{M-1}\sigma_{j}^{(3)}\sigma_{j+1}^{(3)}}\), respectively. While the partition function is given by the trace of the \(N\)-th power of the horizontal transfer matrix, \(Z_{MN}=\operatorname{tr}(V_{M}^{N})\), the correlation functions are more naturally expressed using the symmetrized transfer matrix,
\[V_{M}^{(\text{sym})}\!=\!\big{(}V_{M}^{(3)}\big{)}^{\frac{1}{2}}V_{M}^{(1)} \big{(}V_{M}^{(3)}\big{)}^{\frac{1}{2}}, \tag{5}\]
which results in:
\[\langle\sigma_{j_{1},k_{1}}...\sigma_{j_{n},k_{n}}\rangle\!=\!\tfrac{1}{Z_{MN }}\operatorname{tr}\!\left(\!\big{(}V_{M}^{(\text{sym})}\big{)}^{N}\sigma_{j_ {1}k_{1}}^{(3)}...\sigma_{j_{n}k_{n}}^{(3)}\!\right), \tag{6}\]
where \(\sigma_{jk}^{(3)}\!=\!\big{(}V_{M}^{(\text{sym})}\big{)}^{k}\sigma_{j}^{(3)} \big{(}V_{M}^{(\text{sym})}\big{)}^{-k}\).
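As a small-scale sanity check of Eqs. (4)-(6), the NumPy sketch below builds the symmetrized transfer matrix from Pauli operators for a toy chain and evaluates a nearest-neighbour spin-spin correlator; the chain length, couplings, and the omission of the scalar prefactor (which cancels in Eq. (6)) are illustrative choices.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Toy-size check of Eqs. (4)-(6): exact tensor products of Pauli matrices on a short open chain.
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(op, j, L):
    return reduce(np.kron, [op if i == j else I2 for i in range(L)])

L, N = 4, 8                                # number of sites and of transfer-matrix applications
K1, K2 = 0.3, 0.4                          # horizontal and vertical couplings
K2s = np.arctanh(np.exp(-2 * K2))          # dual coupling: tanh(K2*) = exp(-2 K2)

H3 = sum(site_op(sz, j, L) @ site_op(sz, j + 1, L) for j in range(L - 1))
H1 = sum(site_op(sx, j, L) for j in range(L))
V3, V1 = expm(K1 * H3), expm(K2s * H1)     # scalar prefactor dropped; it cancels in Eq. (6)
w, U = np.linalg.eigh(V3)                  # (V3)^(1/2) via eigendecomposition
V3_half = (U * np.sqrt(w)) @ U.T
Vsym = V3_half @ V1 @ V3_half

Z = np.trace(np.linalg.matrix_power(Vsym, N))
corr = np.trace(np.linalg.matrix_power(Vsym, N) @ site_op(sz, 0, L) @ site_op(sz, 1, L)) / Z
print("nearest-neighbour <sigma^(3) sigma^(3)> =", corr)
```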
_OAR for 2d Ising._ Exploiting the operator-algebraic structure of the transfer matrix formulation, we can apply OAR to analyze the large-scale behavior of correlation functions: \(V_{M}^{(\text{sym})}\) is a positive, trace-class operator on \(\mathcal{H}_{M}\) inducing a quasi-free Gibbs state, \(\rho_{MN}=\frac{1}{Z_{MN}}\big{(}V_{M}^{(\text{sym})}\big{)}^{N}\), on the quantum spin chain given in terms of the Pauli algebra \(\mathcal{P}_{M}\!=\!\otimes_{j=-M}^{M-1}\!M_{2}(\mathbb{C})\). By the Jordan-Wigner transform [46], \(a_{j}\!=\!\big{(}\prod_{-M\leq l<j}\sigma_{l}^{(1)}\big{)}\frac{1}{2}\big{(}\sigma_{j}^{(3)}\!+\!i\sigma_{j}^{(2)}\big{)}\), the latter is isomorphic to the algebra of complex fermions \(\mathfrak{A}_{M}=\mathfrak{A}_{\mathrm{CAR}}(\mathfrak{h}_{M})\) with one-particle Hilbert space \(\mathfrak{h}_{M}\!=\!\ell^{2}(\Lambda_{M})\), \(\Lambda_{M}=\{-M,...,M-1\}\) [47]. We define the renormalization group transformation [48], \(\mathcal{E}:\mathcal{S}_{2M}\!\to\!\mathcal{S}_{M}\), that coarse grains states on the chain of twice the length, \(\mathcal{S}_{2M}\), to those on the given length, \(\mathcal{S}_{M}\), by its dual quantum channel, \(\alpha:\mathfrak{A}_{M}\!\to\!\mathfrak{A}_{2M}\):
\[\operatorname{tr}(\mathcal{E}(\rho)A)\!=\!\operatorname{tr}(\rho\alpha(A)), \quad\rho\in\mathcal{S}_{2M},\,A\in\mathfrak{A}_{M}. \tag{7}\]
The dual quantum channel is naturally given by an isometry [36], \(R:\mathfrak{h}_{M}\!\to\!\mathfrak{h}_{2M}\):
\[\alpha(a(\xi)\!)\!=\!a(R(\xi)),\,\,\,R(\xi)_{j^{\prime}}\!=\!\sum_{j=-M}^{M-1} \!\xi_{j}\sum_{n\in\mathbb{Z}}h_{n}\delta_{2j,j^{\prime}-n} \tag{8}\]
for \(\xi\in\mathfrak{h}_{M}\) and \(a(\xi)\!=\!\sum_{j=-M}^{M-1}\tilde{\xi}_{j}a_{j}\). The coefficients \(h_{n}\) are given by the low-pass filter of a real, orthonormal, compactly supported scaling function \(s\in C^{r}(\mathbb{R})\), satisfying the scaling equation \(s(x)=\sum_{n\in\mathbb{Z}}h_{n}\,2^{\frac{1}{2}}\,s(2x-n)\) (appropriately periodized to comply with the boundary conditions) [49]. The renormalization group transformation takes a particularly simple form in momentum space,
\[R(\hat{\xi})_{\theta^{\prime}}\!=\!2^{\frac{1}{2}}m_{0}(\theta^{\prime})\hat{ \xi}_{2\theta^{\prime}},\,\,m_{0}(\theta^{\prime})\!=\!\tfrac{1}{\sqrt{2}}\! \sum_{n\in\mathbb{Z}}h_{n}e^{-i\theta^{\prime}n}, \tag{9}\]
where \(\hat{\xi}_{\theta}\!=\!\sum_{j=-M}^{M-1}e^{-i\theta j}\xi_{j}\) for \(\theta\!\in\!\frac{\pi}{M}\{-M,...,M\!-\!1\}\). In this way, we realize (discrete) renormalization group flow lines within the state space \(\mathcal{S}_{M}\) by,
\[\rho_{MN}^{(m)}\!=\!\mathcal{E}^{m}(\rho_{(M+m)N}),\qquad 0\!\leq\!m\!<\!\infty, \tag{10}\]
using the Gibbs state \(\rho_{\mathfrak{h}N}\) as an input. On the Pauli algebra \(\mathcal{P}_{M}\), the coarse graining takes the form: \(\mathcal{E}(\,\cdot\,)=\operatorname{ptr}(U_{M}^{*}(\cdot)U_{M})\), where \(\operatorname{ptr}\) is the partial trace with respect to the natural embedding \(\mathcal{H}_{\frac{M}{2}}\subset\mathcal{H}_{M}\), and \(U_{M}\) is a unitary parametrized by the low-pass filter \(h_{n}\) which coincides with the wavelet disentangler in [50; 51] (see the appendix for further details). Fig. 3 illustrates how (10) can be interpreted in terms of TNR which is dual to the construction of a MERA as we will further explain in [52].
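The coarse graining of Eqs. (8)-(9) is straightforward to realize numerically; the sketch below uses the Haar low-pass filter for brevity (the construction above assumes smoother \(C^{r}\), Daubechies-type scaling functions) and checks the isometry property of \(R\) on a periodic chain.

```python
import numpy as np

# Haar low-pass filter as an illustrative stand-in for the smoother scaling functions above.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)

def R(xi, h=h):
    """Eq. (8) on a periodic chain: R(xi)_{j'} = sum_j xi_j h_{j'-2j}, mapping length L to 2L."""
    L = len(xi)
    out = np.zeros(2 * L, dtype=complex)
    for j, x in enumerate(xi):
        for n, hn in enumerate(h):
            out[(2 * j + n) % (2 * L)] += x * hn
    return out

# Isometry check: <R xi, R eta> = <xi, eta>, equivalently |m0(t)|^2 + |m0(t+pi)|^2 = 1.
rng = np.random.default_rng(0)
xi = rng.normal(size=8) + 1j * rng.normal(size=8)
eta = rng.normal(size=8) + 1j * rng.normal(size=8)
print(np.allclose(np.vdot(R(xi), R(eta)), np.vdot(xi, eta)))   # True
```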
_Infinite volume formulation._ We can avoid additional complications in the discussion of the renormalization group fixed point due to boundary conditions, necessary for the algebras \(\mathcal{P}_{M}\), \(\mathfrak{A}_{M}\) at finite \(M\) and \(N\) by passing to an infinite volume formulation, i.e. \(M,N\!\to\!\infty\): First, we observe that imposing the asymptotic scaling conditions,
\[K_{1}\!\sim\!\beta t^{(3)}N^{-1},\qquad\qquad K_{2}^{*}\!\sim\!\beta t^{(1)}N^{ -1}, \tag{11}\]
for \(\beta,t^{(3)},t^{(1)}>0\) for \(N\!\to\!\infty\), provides a Gibbs state, \(\rho_{MN}\overset{N\to\infty}{\to}\rho_{M}=\frac{1}{Z_{M}}e^{-\beta H_{M}}\), of the transverse-field Ising Hamiltonian at inverse temperature \(\beta\)[54] as a consequence of Trotter's product formula [53],
\[H_{M}\!=\!-\!\!\sum_{j=-M}^{M-1}\!\big{(}t^{(3)}\sigma_{j}^{(3)}\sigma_{j+1}^{(3) }\!+\!t^{(1)}\sigma_{j}^{(1)}\big{)}. \tag{12}\]
Second, we note that the definition of the dual quantum channels \(\alpha:\mathfrak{A}_{M}\to\mathfrak{A}_{2M}\) is compatible with taking the
Figure 2: Illustration of the Horizontal transfer matrix \(V_{M}\) associated with the tensor \(A\).
Figure 1: Illustration of the partition function \(Z_{MN}\) in (a) as a two-dimensional tensor network built from the local tensor \(A\) in (b). Dashed lines indicate contractions due to periodic boundary conditions.
infinite volume limit, \(\varinjlim_{M}\mathfrak{A}_{M}=\cup_{M}\mathfrak{A}_{M}=\mathfrak{A}\), in the sense of quasi-local algebras [55], which leads to a description of the limit \(M\to\infty\) in terms of the fermion algebra, \(\mathfrak{A}=\mathfrak{A}_{\mathrm{CAR}}(\mathfrak{h})\), with one-particle space \(\mathfrak{h}=\ell^{2}(\mathbb{Z})\), and the renormalization group transformation, \(\alpha:\mathfrak{A}\to\mathfrak{A}\), defined by the analogue of (8). The dynamics on \(\mathfrak{A}\) is determined by the Hamiltonian \(H\), formally given by (12) for \(M\to\infty\), which is still well-defined as a derivation on strictly local elements of \(\mathfrak{A}\) [55]. In this limit, the Gibbs states \(\rho_{M}\) provide quasi-free KMS-states \(\omega_{\beta}:\mathfrak{A}\to\mathbb{C}\) determined by the two-point function:
\[\omega_{\beta}(a(\xi)a^{\dagger}(\eta))\!=\!\langle\xi,\!C^{(1)}_{\beta}\eta \rangle,\omega_{\beta}(\!a^{\dagger}(\xi)a^{\dagger}(\eta))\!=\!\langle\bar{ \xi},\!C^{(2)}_{\beta}\eta\rangle. \tag{13}\]
The covariance operators \(C^{(1)}_{\beta}\), \(C^{(2)}_{\beta}\) have momentum-space kernels,
\[C^{(1)}_{\beta}(\theta,\theta^{\prime})\!=\!\pi\big{(}1\!+\!\Re \big{(}\tfrac{z_{\theta}}{|z_{\theta}|}\big{)}\!\tanh(\beta|z_{\theta}|)\big{)} \delta(\theta\!-\!\theta^{\prime}), \tag{14}\] \[C^{(2)}_{\beta}(\theta,\theta^{\prime})\!=\!-i\pi\Im\big{(} \tfrac{z_{\theta}}{|z_{\theta}|}\big{)}\!\tanh(\beta|z_{\theta}|)\delta( \theta\!-\!\theta^{\prime}),\]
where \(z_{\theta}=t^{(1)}-e^{i\theta}t^{(3)}\). The expressions remain meaningful in the limit \(\beta\to\infty\) providing a ground state of \(H\) on \(\mathfrak{A}\). Evaluating the renormalization group flow (10) results in sequences of renormalized states \(\omega^{(m)}_{\beta}=\omega_{\beta}\circ\alpha^{m}\) which are quasi-free by construction and, thus, determined by their two-point functions:
\[\omega^{(m)}_{\beta}(a(\xi)a^{\dagger}(\eta))\!=\!\langle R^{m}(\xi),C^{(1)}_{\beta}R^{m}(\eta)\rangle\!=\!\tfrac{1}{4\pi}\!\int_{-\pi}^{\pi}\!\!d\theta\,\big(1\!+\!\Re\big(\tfrac{z_{\theta}}{|z_{\theta}|}\big)\tanh(\beta|z_{\theta}|)\big)\,2^{m}\Big(\prod_{n=0}^{m-1}|m_{0}(2^{n}\theta)|^{2}\Big)\overline{\hat{\xi}_{2^{m}\theta}}\,\hat{\eta}_{2^{m}\theta}, \tag{15}\] \[\omega^{(m)}_{\beta}(a^{\dagger}(\xi)a^{\dagger}(\eta))\!=\!\langle R^{m}(\bar{\xi}),C^{(2)}_{\beta}R^{m}(\eta)\rangle\!=\!-\tfrac{i}{4\pi}\!\int_{-\pi}^{\pi}\!\!d\theta\,\Im\big(\tfrac{z_{\theta}}{|z_{\theta}|}\big)\tanh(\beta|z_{\theta}|)\,2^{m}\Big(\prod_{n=0}^{m-1}|m_{0}(2^{n}\theta)|^{2}\Big)\hat{\xi}_{-2^{m}\theta}\,\hat{\eta}_{2^{m}\theta}.\]
Fixed points and admissible scaling limits are determined by analyzing the convergence of (15) for \(m\to\infty\) under suitable renormalization conditions imposed on the couplings \(t^{(1)},t^{(3)}\) and the inverse temperature \(\beta\).
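To get a concrete feel for the flow (15), it can be tabulated numerically once a filter is fixed. The following sketch (an illustration added here, not part of the original derivation; the Haar filter, the off-critical couplings, the inverse temperature and the delta test sequences are all arbitrary choices) evaluates the first line of (15) as a function of the RG step \(m\).

```python
import numpy as np

m0 = lambda theta: 0.5 * (1.0 + np.exp(-1j * theta))   # Haar low-pass filter m_0(theta)

def two_point(m, beta=5.0, t1=1.0, t3=0.5, n_grid=200001):
    """First line of (15): omega_beta^{(m)}(a(xi) a^dagger(eta)) for xi = delta_0, eta = delta_1."""
    theta = np.linspace(-np.pi, np.pi, n_grid)
    z = t1 - np.exp(1j * theta) * t3                    # z_theta (off-critical: t1 != t3)
    weight = 1.0 + np.real(z / np.abs(z)) * np.tanh(beta * np.abs(z))
    prod = np.prod([np.abs(m0(2**n * theta))**2 for n in range(m)], axis=0) if m > 0 else 1.0
    # hat(xi)_{2^m theta} = 1 and hat(eta)_{2^m theta} = exp(-i 2^m theta) for these test sequences
    integrand = weight * 2**m * prod * np.exp(-1j * 2**m * theta)
    return np.sum(integrand) * (theta[1] - theta[0]) / (4 * np.pi)   # simple Riemann sum

for m in range(6):
    print(m, two_point(m))
```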
_The fixed point at criticality._ In the quantum spin-chain formulation, the critical line corresponds to equal couplings \(t^{(3)}=t^{(1)}=t\) in the Hamiltonian (12) in the limit \(\beta\to\infty\), which is equivalent to \(K_{1}\!\approx\!K_{2}^{*}\) (at large \(N\!\gg\!1\) by (11)), i.e. the critical line of the two-dimensional Ising model given by the tensor (2) corresponding to the well-known critical coupling \(K=K_{1}=K_{2}=\frac{1}{2}\ln(1+\sqrt{2})\) in the isotropic case. In view of (15), we have \(z_{\theta}\!=\!t(1-e^{i\theta})\) and \(|z_{\theta}|^{2}=4t^{2}\sin(\frac{1}{2}\theta)^{2}\). Using the change of variables \(k\!=\!2^{m}\theta\) and noting that \(\tanh(\beta|z_{\theta}|)\stackrel{{\beta\to\infty}}{{\to}}\!1\!-\!\delta_{\theta,0}\), we find:
\[\omega^{(m)}_{\beta=\infty}(a(\xi)a^{\dagger}(\eta))\!=\!\tfrac{1}{4\pi}\!\int_{-2^{m}\pi}^{2^{m}\pi}\!\!dk\,\Big(1\!+\!\tfrac{1-\cos(2^{-m}k)}{2|\sin(\frac{1}{2}2^{-m}k)|}\Big)\Big(\prod_{n=1}^{m}|m_{0}(2^{-n}k)|^{2}\Big)\overline{\hat{\xi}_{k}}\hat{\eta}_{k}\stackrel{{m\to\infty}}{{\to}}\tfrac{1}{4\pi}\!\int_{-\infty}^{\infty}\!\!dk\,|\hat{s}(k)|^{2}\overline{\hat{\xi}_{k}}\hat{\eta}_{k}, \tag{16}\] \[\omega^{(m)}_{\beta=\infty}(a^{\dagger}(\xi)a^{\dagger}(\eta))\!=\!\tfrac{i}{4\pi}\!\int_{-2^{m}\pi}^{2^{m}\pi}\!\!dk\,\tfrac{\sin(2^{-m}k)}{2|\sin(\frac{1}{2}2^{-m}k)|}\Big(\prod_{n=1}^{m}|m_{0}(2^{-n}k)|^{2}\Big)\hat{\xi}_{-k}\hat{\eta}_{k}\stackrel{{m\to\infty}}{{\to}}\tfrac{i}{4\pi}\!\int_{-\infty}^{\infty}\!\!dk\,\operatorname{sign}(k)|\hat{s}(k)|^{2}\hat{\xi}_{-k}\hat{\eta}_{k},\]
by Lebesgue's dominated convergence theorem applied to \(\prod_{n=1}^{m}m_{0}(2^{-n}k)\stackrel{{ m\to\infty}}{{\to}}\hat{s}(k)\)[49], see also [36, Lem. 3.7] for an adapted decay estimate for \(m_{0}\). By passing to the
self-dual chiral Majorana fields, \(\psi_{\pm|j}=e^{\pm i\frac{\pi}{4}}a_{j}+e^{\mp i\frac{\pi}{4}}a_{j}^{\dagger}\), we recognize that the limits in (16) are the vacuum two-point functions of the \(c=\frac{1}{2}\) free-fermion conformal field theories (CFTs) of the two chiral halves of the critical Ising fixed point:
\[\omega\big(\psi_{\pm}(\xi\!\ast\!s)\,\psi_{\pm}(\eta\!\ast\!s)^{\dagger}\big)\!=\!\tfrac{1}{2\pi}\!\int_{-\infty}^{\infty}\!\!dk\,\big(1\!\pm\!\operatorname{sign}(k)\big)\,|\hat{s}(k)|^{2}\,\overline{\hat{\xi}_{k}}\,\hat{\eta}_{k}. \tag{17}\]
\(t^{(3)}\overset{m\rightarrow\infty}{\rightarrow}t\). As before, the scaling function \(s\) controls the resolution at which the continuum quantum field is probed.
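The key analytic input in (16), namely \(\prod_{n=1}^{m}m_{0}(2^{-n}k)\to\hat{s}(k)\), is also easy to probe numerically: for the Haar filter the limit is the explicit function \(\hat{s}(k)=e^{-ik/2}\sin(k/2)/(k/2)\). The short check below (an illustration added here, not part of the original argument; the momentum grid is arbitrary) confirms that the partial products approach it.

```python
import numpy as np

m0 = lambda theta: 0.5 * (1.0 + np.exp(-1j * theta))    # Haar low-pass filter

def partial_product(k, m):
    """prod_{n=1}^{m} m_0(2^{-n} k), appearing in (16)."""
    return np.prod([m0(k / 2**n) for n in range(1, m + 1)], axis=0)

k = np.linspace(-20.0, 20.0, 4001)
s_hat = np.exp(-1j * k / 2) * np.sinc(k / (2 * np.pi))  # hat{s}(k) of the Haar scaling function

for m in (5, 10, 20, 40):
    print(m, np.max(np.abs(partial_product(k, m) - s_hat)))   # error shrinks as m grows
```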
_Discussion._ We have presented an explicit description of the critical fixed point of the two-dimensional classical Ising model using OAR which may be understood as a Wilson-Kadanoff RG scheme dual to tensor-network methods. In particular, if OAR is applied to density matrices given in terms of transfer matrices of classical lattice systems, it is operationally dual to a (thermal) MERA derived from TNR [43]. Our explicit representation of the critical fixed point relies on an implementation of OAR using wavelet methods that was previously introduced in [35; 36; 37; 38], and the duality with TNR is manifestly exhibited by the unitary defining the coarse-graining channel \(\mathcal{E}\) (see (26) in the appendix), which directly corresponds to the exact disentangler of Evenbly and White for the ground state of the Ising quantum chain [50; 51]. In our construction of the scaling limit, a particularly important role is played by the scaling function associated with a given low-pass filter, as this function controls the resolution at which the fixed-point tensor is probed at unit scale - either in terms of fermionic or spin-spin correlation functions. As a direct consequence of this feature we explicitly observe a universal large-scale behavior independent of the specific choice of scaling functions. Another important advantage of our method over other approaches such as the exact MERA is the provision of explicit, provable error bounds on the approximation of correlation functions for sufficiently regular scaling functions that are independent of the design problem of Hilbert-pair wavelets [60]. Such error bounds allow for a direct understanding of the simulation of QFTs/CFTs by quantum computers [37]. We have exhibited a direct correspondence of the critical fixed point with the vacuum (or Neveu-Schwarz) sector of the Ising CFT with an explicit formula for the two-point functions of the self-dual chiral Majorana field (see (17)). By our method, it is possible to obtain fixed points corresponding to other sectors, e.g., the Ramond sector, by working in a finite-volume setting including different, e.g., anti-periodic, boundary conditions, which will be discussed elsewhere [61]. In addition, we are planning to further clarify the relation of our construction of the scaling limit of the Ising model with previously known results about the Ising QFT/CFT - specifically via spin-spin correlation functions [62; 63; 64; 65; 66] and the explicit construction of the spin field operator [67].
###### Acknowledgements.
This work was supported, in part, by the Quantum Valley Lower Saxony, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 274200144 - SFB 1227, and under Germany's Excellence Strategy EXC-2123 QuantumFrontiers 390837967. AS was in part supported by the MWK Lower Saxony within the Stay Inspired program.
|
2305.14471 | CGCE: A Chinese Generative Chat Evaluation Benchmark for General and
Financial Domains | Generative chat models, such as ChatGPT and GPT-4, have revolutionized
natural language generation (NLG) by incorporating instructions and human
feedback to achieve significant performance improvements. However, the lack of
standardized evaluation benchmarks for chat models, particularly for Chinese
and domain-specific models, hinders their assessment and progress. To address
this gap, we introduce the Chinese Generative Chat Evaluation (CGCE) benchmark,
focusing on general and financial domains. The CGCE benchmark encompasses
diverse tasks, including 200 questions in the general domain and 150 specific
professional questions in the financial domain. Manual scoring evaluates
factors such as accuracy, coherence, expression clarity, and completeness. The
CGCE benchmark provides researchers with a standardized framework to assess and
compare Chinese generative chat models, fostering advancements in NLG research. | Xuanyu Zhang, Bingbing Li, Qing Yang | 2023-05-23T18:54:15Z | http://arxiv.org/abs/2305.14471v1 | # CGCE: A Chinese Generative Chat Evaluation Benchmark
###### Abstract
Generative chat models, such as ChatGPT and GPT-4, have revolutionized natural language generation (NLG) by incorporating instructions and human feedback to achieve significant performance improvements. However, the lack of standardized evaluation benchmarks for chat models, particularly for Chinese and domain-specific models, hinders their assessment and progress. To address this gap, we introduce the Chinese Generative Chat Evaluation (CGCE) benchmark, focusing on general and financial domains. The CGCE benchmark encompasses diverse tasks, including 200 questions in the general domain and 150 specific professional questions in the financial domain. Manual scoring evaluates factors such as accuracy, coherence, expression clarity, and completeness. The CGCE benchmark provides researchers with a standardized framework to assess and compare Chinese generative chat models, fostering advancements in NLG research.
## 1 Introduction
GPT-based language models, such as ChatGPT [1] and GPT-4 [1], have sparked a remarkable revolution in the realm of natural language generation (NLG). These methods have brought about substantial performance enhancements across a wide range of NLG tasks. An important catalyst driving the continuous improvement and refinement of these models is the utilization of evaluation benchmarks, which provide a standardized means to assess and compare their effectiveness.
However, the current landscape of language evaluation benchmarks predominantly revolves around traditional NLU or NLG tasks for language models, with benchmarks like GLUE and SuperGLUE playing a prominent role. Unfortunately, this leaves a significant gap when it comes to evaluating chat models, especially Chinese chat models or specific generation models, despite Chinese speakers constituting a quarter of the world's population. The absence of a comprehensive evaluation benchmark presents a considerable obstacle for researchers in assessing the performance and capabilities of their Chinese generative chat models. Without a standardized benchmark, gauging the quality and effectiveness of these models becomes a daunting task, hindering progress and advancements in the field.
To address this critical gap and facilitate the growth of Chinese language research, we introduce the Chinese Generative Chat Evaluation Benchmark (CGCE) for general and financial domains. This benchmark encompasses a broad spectrum of generative chat domains, including general and financial fields with a varying number of categories. In the general domain, our evaluation benchmark includes a diversified set of 200 questions, covering 13 major dimensions including mathematical calculations, scenario writing, logical reasoning, and text summarization. In the financial domain, it covers four major areas: understanding financial terms, providing financial market commentary, conducting financial data analysis, and comprehending financial news. It consists of 150 specific professional questions, allowing us to comprehensively examine the model's proficiency in handling financial tasks from multiple perspectives. Besides, the evaluation process for the CGCE benchmark involves manual scoring, considering various factors such as answer accuracy, logical coherence, clarity of expression, and completeness, among others. This multi-dimensional scoring approach ensures a comprehensive assessment of the model's performance across different aspects of generative chat. By introducing the CGCE benchmark, we provide researchers and practitioners in the field with a standardized framework to evaluate and compare the effectiveness of Chinese generative chat models. This benchmark serves as a valuable resource for |
2304.14758 | Spectral interactions between strings in the Higgs background | We derive the exact form of the spectral interaction of two strings mediated
by a constant scalar field using methods derived from noncommutative geometry.
This is achieved by considering a non-product modification of the Connes-Lott
model with two-dimensional manifolds. The analogy with the latter construction
justifies the interpretation of the scalar field as being of Higgs type.
Working in dimension two requires the use of the spectral zeta function instead
of the Wodzicki residue techniques applicable to four-dimensional models. In
the latter case, an analogous non-product geometry construction leads, for
specific choices of metrics, to the so-called "doubled geometry models", which
can be thought of as a spectral modification of the Hassan-Rosen bimetric
theory. We find that in dimension two, the interaction term depends explicitly
on zweibeins defining the Dirac operators and only in some special cases can
they be expressed solely using the metrics. The computations can be performed
analytically for an arbitrary choice of zweibeins defining geometry on the two
strings. | Arkadiusz Bochniak, Andrzej Sitarz | 2023-04-28T11:03:53Z | http://arxiv.org/abs/2304.14758v1 | # Spectral interactions between strings in the Higgs background
###### Abstract
We derive the exact form of the spectral interaction of two strings mediated by a constant scalar field using methods derived from noncommutative geometry. This is achieved by considering a non-product modification of the Connes-Lott model with two-dimensional manifolds. The analogy with the latter construction justifies the interpretation of the scalar field as being of Higgs type. Working in dimension two requires the use of the spectral zeta function instead of the Wodzicki residue techniques applicable to four-dimensional models. In the latter case, an analogous non-product geometry construction leads, for specific choices of metrics, to the so-called "doubled geometry models", which can be thought of as a spectral modification of the Hassan-Rosen bimetric theory. We find that in dimension two, the interaction term depends explicitly on zweibeins defining the Dirac operators and only in some special cases can they be expressed solely using the metrics. The computations can be performed analytically for an arbitrary choice of zweibeins defining geometry on the two strings.
Noncommutative Geometry, Bimetric Theory, Spectral Action
## 1 Introduction
The spectral methods of noncommutative geometry [1; 2] can be successfully applied to describe both General Relativity [3; 4] and Standard Model of particle physics together with its potential extensions [5; 6; 7; 8]. The fundamental object in noncommutative geometry, a spectral triple, is motivated by the observation made by A. Connes that for any sufficiently regular Riemannian manifold \(M\) that allows spinor fields, one can associate a triple consisting of an algebra \(A=C^{\infty}(M)\) of smooth functions on \(M\) represented on a Hilbert space \(H=L^{2}(S)\) of square-integrable spinors and the Dirac operator given locally as \(\mathcal{D}=i\gamma^{\mu}(\partial_{\mu}+\omega_{\mu})\), where \(\omega\) is the spin connection on \(M\) and \(\gamma\)'s are the usual generators of the associated Clifford algebra. By Connes' reconstruction theorem [2; 9], any suitably regular triple \((A,H,\mathcal{D})\), with an abelian algebra \(A\), is of the above _canonical_ form. This motivates the notion of a spectral triple as a basic object that generalizes the notion of geometry to objects that are discrete spaces, fractals, or that are described by algebras, which are no longer assumed to be commutative. The data of a spectral triple consists of a \(*\)-algebra \(A\) represented (faithfully) on a Hilbert space \(H\) on which \(\mathcal{D}\) is as an essentially self-adjoint operator having compact resolvent, such that its commutators with elements of the algebra \(A\) are bounded. A set of possible additional structures, like grading and real structure, can be incorporated into
this picture and additional compatibility conditions between all these elements can be assumed. For a more detailed discussion see e.g. [10; 11].
Out of the above spectral data, one can compute the so-called spectral action [12] which is essentially a functional in \(\mathcal{D}\). By spectral action principle, its leading terms in heat kernel expansion, [13] describe the physical model obtained from the geometry associated with a given spectral triple. This generalizes Einstein's idea of equivalence between geometry and physics (gravity). Indeed, the computation of the (leading terms of) the spectral action for the canonical spectral triple associated with a given manifold leads to the Hilbert-Einstein action. This picture was expanded to include Yang-Mills gauge theories within the framework of almost-commutative geometry when the spectral triple is a product of the canonical one and another one with both an algebra and a Hilbert space being finite-dimensional [1; 2]. Appropriate choice of the finite part (closely related to the gauge group one is interested in) allowed for a formulation of the Standard Model of particle physics within this framework and to study its properties as well as symmetries [14; 15; 16].
However, the almost-commutative framework seems not to be the final answer for all physically relevant questions. In particular, it is very restrictive in the gravitational sector and it leads (in its bare version) to some unphysical behavior in particle physics models. Several non-product modifications of almost-commutative geometries were therefore proposed. Some of them are formulated at the level of Kasparov modules [17; 18], whereas others are direct modifications motivated by potential applications both for the Standard Model [19; 20] as well as for modified gravity theories. In [21], an approach based on the straightforward modification of the Connes-Lott model [5] was proposed. The basic idea was to replace the almost-commutative geometry \(M\times\mathbb{Z}_{2}\) by a direct sum of two copies of \(M\) but considered with different metrics on each summand. This so-called doubled geometry model was further studied in the case of Friedmann-Lemaitre-Robertson-Walker metrics in [22] and for other classes of metrics also in [23]. Since the resulting action contains interactions between the two metrics it is natural to ask about potential relations to Hassan-Rosen bimetric gravity theories [24; 25]. We have discussed partially this aspect in [22] and continued in [26], where the picture of two four-dimensional branes interacting effectively by Higgs-like potential was introduced.
Let us briefly summarize the main steps in the construction of the doubled models with two copies of the spin manifold \(M\) of dimension \(d>0\). As the Dirac operator, we take
\[\mathcal{D}=\begin{pmatrix}\mathcal{D}_{g}&\gamma\Phi\\ \gamma\Phi^{*}&\mathcal{D}_{h}\end{pmatrix}, \tag{1}\]
where \(\mathcal{D}_{g}\) (resp. \(\mathcal{D}_{h}\)) is the standard Dirac operator corresponding to the Riemannian manifold \((M,g)\) (resp. \((M,h)\)), \(\Phi\) is a (constant) field and \(\gamma\) is taken so that \(\gamma^{2}=\pm 1\) and it anticommutes with the Dirac operators on both _sheets_. The computation of all relevant terms of the spectral action is, however, dimension dependent. The leading term can be expressed using the Wodzicki residue of \(|\mathcal{D}|^{-d}\) and will give the sum of volumes of each copy of \(M\) (with respect to the metric \(g\) and \(h\), respectively). However, the next term, which will contain the sum of integrated scalars of curvature for each copy, will also have an interaction term between the two metric manifolds. In the case of \(d=2\) the explicit computation of these terms necessitates the use of the zeta function [27; 28].
The tools to compute the corresponding terms of the spectral action are based on the calculus of pseudodifferential operators. We first need to find the symbols of the operator \(\mathcal{D}^{2}\) by replacing each derivative \(\partial_{j}\) with a formal variable \(i\xi_{j}\). Let \(\mathfrak{a}_{k}\) denote the part of this symbol that is homogeneous in the \(\xi_{\mu}\)'s with homogeneity degree \(k\). In the next step, we compute the symbols \(\mathfrak{b}_{\bullet}\) for \(\mathcal{D}^{-2}\) [29], and it turns out that only two of them (\(\mathfrak{b}_{0}\) and \(\mathfrak{b}_{2}\)) are enough to obtain the two leading terms of the spectral action.
Motivated by the fact that this approach can produce spectral interactions of branes, we focus in this paper on the case of two-dimensional branes, the strings. Our model does not assume any ambient space in which the strings propagate as the action is fully intrinsic and arises from the scalar field that links the two strings. Another possible interpretation is that of a "thick string" where the resulting action is the interaction of its boundaries (surfaces) in the Higgs background.
We concentrate on the situation where the action comes from the generalized Dirac operator
that involves the Higgs field. Thus, we work with metrics given in terms of zweibeins. In section 2 we formulate the two-dimensional Riemannian doubled geometry model in terms of zweibeins and derive the resulting effective action using spectral methods. Then, in section 3 we briefly discuss a reformulation of this model in terms of the corresponding metrics. Finally, in section 4 we shortly discuss the Lorentzian formulation, interpretation, and present an outlook.
## 2 Formulation in terms of zweibeins
For a two-dimensional Riemannian spin\({}_{c}\) manifold with a given metric \(g\) defined in terms of a zweibein \(e^{\mu}_{a}\) (with an inverse metric \(g^{\mu\nu}=e^{\mu}_{a}e^{\nu}_{b}\delta^{ab}\)) we consider the Dirac operator of the form
\[\mathcal{D}_{g}=i\sigma^{a}e^{\mu}_{a}\partial_{\mu}-\frac{1}{2}\sigma^{a} \partial_{\mu}(e^{\mu}_{a}), \tag{2}\]
with both \(a,b=1,2\) and \(\mu=1,2\), and the Pauli matrices \(\sigma^{a}\) satisfying \(\{\sigma^{a},\sigma^{b}\}=2\delta^{ab}I\). Let
\[K=\begin{pmatrix}a_{1}&d_{1}\\ c_{1}&b_{1}\end{pmatrix},\]
be the matrix corresponding to the zweibein \(e^{\mu}_{a}\), i.e. with entries \(K^{\mu}_{a}:=e^{\mu}_{a}\). Since there is no one-to-one correspondence between metrics and zweibeins (a zweibein determines the metric but a metric can correspond to a class of zweibeins), in this paper, we treat the zweibein \(e^{\mu}_{a}\), rather than the metric \(g\), as a more fundamental object.
In the doubled model we denote the second metric by \(h\), the second zweibein by \(f^{\mu}_{a}\) and its corresponding matrix by \(L\).
Since we are interested mostly in the form of the interaction potential we can omit terms with derivatives of zweibeins. Under this assumption, the Dirac operator for the doubled geometry model takes the form
\[\mathcal{D}=i\sigma^{a}A^{\mu}_{a}\partial_{\mu}+\sigma F, \tag{3}\]
with
\[A^{\mu}_{a}=\begin{pmatrix}K^{\mu}_{a}&0\\ 0&L^{\mu}_{a}\end{pmatrix},\quad F=\begin{pmatrix}0&\Phi\\ \Phi^{*}&0\end{pmatrix}, \tag{4}\]
where \(\Phi\) is assumed to be a constant, and \(\sigma=\chi\sigma^{3}\) with \(\chi\) chosen s.t. \(\sigma^{2}=\kappa=\pm 1\). We can now easily compute the operator \(\mathcal{D}^{2}\), and its symbols read
\[\mathfrak{a}_{2} =\begin{pmatrix}(K\xi)^{2}&0\\ 0&(L\xi)^{2}\end{pmatrix}=\begin{pmatrix}\|\xi\|_{g}^{2}&0\\ 0&\|\xi\|_{h}^{2}\end{pmatrix}, \tag{5}\] \[\mathfrak{a}_{1} =[F,A^{\mu}_{a}]\sigma^{a}\sigma\xi_{\mu},\] \[\mathfrak{a}_{0} =\kappa F^{2},\]
where \(\xi=\begin{pmatrix}\xi_{1}\\ \xi_{2}\end{pmatrix}\) is the vector of symbols of derivatives (i.e. \(\partial_{j}\leftrightarrow i\xi_{j}\) for \(j=1,2\)).
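The algebra behind (5) can be double-checked symbolically. The sketch below (a verification added here, not part of the original text; the numerical zweibein matrices are arbitrary and the field is switched off) squares the first-order part of the symbol of (3) and confirms that the quadratic symbol is \(\mathfrak{a}_{2}=\mathrm{diag}\big((K\xi)^{2},(L\xi)^{2}\big)\), acting as a multiple of the identity in spinor space.

```python
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2', real=True)
s1 = sp.Matrix([[0, 1], [1, 0]])            # Pauli matrices sigma^1, sigma^2
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])

def kron(A, B):                              # explicit Kronecker product (spinor x sheet)
    return sp.Matrix(A.rows * B.rows, A.cols * B.cols,
                     lambda i, j: A[i // B.rows, j // B.cols] * B[i % B.rows, j % B.cols])

# sample constant zweibein matrices K^mu_a and L^mu_a (arbitrary choice)
K = sp.Matrix([[2, 1], [0, 3]])
L = sp.Matrix([[1, 0], [1, 2]])
xi = sp.Matrix([xi1, xi2])
Kxi, Lxi = K.T * xi, L.T * xi                # components (K xi)_a = sum_mu K^mu_a xi_mu

# first-order part of the symbol of (3): i sigma^a A^mu_a (i xi_mu) = - sigma^a (A xi)_a
Dsym = sum((-kron(sa, sp.diag(Kxi[a], Lxi[a])) for a, sa in enumerate((s1, s2))),
           sp.zeros(4, 4))

a2 = (Dsym * Dsym).expand()
expected = kron(sp.eye(2), sp.diag(Kxi.dot(Kxi).expand(), Lxi.dot(Lxi).expand()))
assert (a2 - expected).expand() == sp.zeros(4, 4)   # a_2 = diag((K xi)^2, (L xi)^2), cf. (5)
```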
The computation of the symbols of \(\mathcal{D}^{-2}\) simplifies for the derivative-free case and we get [29]:
\[\mathfrak{b}_{0}=(\mathfrak{a}_{2}+1)^{-1},\qquad\mathfrak{b}_{2}=-\mathfrak{ b}_{0}\mathfrak{a}_{0}\mathfrak{b}_{0}+\mathfrak{b}_{0}\mathfrak{a}_{1} \mathfrak{b}_{0}\mathfrak{a}_{1}\mathfrak{b}_{0}. \tag{6}\]
We remark that the above form of the \(\mathfrak{b}_{0}\) symbol is a consequence of the fact that the manifold is two-dimensional [27; 28].
To proceed with the computation of the spectral action, we first notice that
\[\mathrm{Tr}\ \mathrm{Tr}_{Cl}(\mathfrak{b}_{0}^{2}) \tag{7}\] \[=2\left(\frac{1}{((K\xi)^{2}+1)^{2}}+\frac{1}{((L\xi)^{2}+1)^{2}}\right)\]
and
\[\int\frac{d^{2}\xi}{((K\xi)^{2}+1)^{2}}=\frac{\pi}{\det(K)}, \tag{8}\]
where the trace \(\mathrm{Tr}\) is over the discrete degrees of freedom and \(\mathrm{Tr}_{Cl}\) is the trace over the Clifford algebra. Then, the contribution to the action from this term is simply
\[2\pi\left(\frac{1}{\det(K)}+\frac{1}{\det(L)}\right),\]
and gives the standard cosmological constant terms for the two metrics.
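The basic integral (8) underlying this term is easily confirmed by brute force; the following check (an illustration added here, with an arbitrary sample matrix \(K\) of positive determinant) compares a numerical quadrature with \(\pi/\det(K)\).

```python
import numpy as np
from scipy.integrate import dblquad

K = np.array([[2.0, 1.0], [0.0, 3.0]])        # sample zweibein matrix, det(K) = 6

def integrand(y, x):                          # dblquad passes the inner variable first
    v = K @ np.array([x, y])                  # the vector K xi
    return 1.0 / (v @ v + 1.0) ** 2

val, _ = dblquad(integrand, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
print(val, np.pi / np.linalg.det(K))          # should agree up to quadrature error
```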
For the next term in the action, we obtain first,
\[\mathrm{Tr}\ \mathrm{Tr}_{Cl}(-\mathfrak{b}_{0}\mathfrak{a}_{0}\mathfrak{b}_{0}) \tag{9}\] \[=-2\kappa|\Phi|^{2}\left[\frac{1}{\left((K\xi)^{2}+1\right)^{2}}+\frac{1}{\left((L\xi)^{2}+1\right)^{2}}\right]\] \[=-2\pi\kappa|\Phi|^{2}\left(\frac{1}{\det(K)}+\frac{1}{\det(L)}\right),\]
which is just a correction to the previous part.
Finally, the integrand of the remaining part of the potential is,
\[\begin{split}&\text{Tr Tr}_{Cl}(\mathfrak{b}_{0}\mathfrak{a}_{1} \mathfrak{b}_{0}\mathfrak{a}_{1}\mathfrak{b}_{0})\\ &=-2\kappa\mathfrak{b}_{0}[F,A^{\mu}_{a}]\mathfrak{b}_{0}[F,A^{ \nu}_{b}]\mathfrak{b}_{0}\delta^{ab}\xi_{\mu}\xi_{\nu},\end{split} \tag{10}\]
which gives
\[\begin{split}&\text{Tr Tr}_{Cl}(\mathfrak{b}_{0}\mathfrak{a}_{1} \mathfrak{b}_{0}\mathfrak{a}_{1}\mathfrak{b}_{0})\\ &=2\kappa|\Phi|^{2}\det(\mathfrak{b}_{0})\text{Tr}(\mathfrak{b}_ {0})((K-L)\xi)^{2},\end{split} \tag{11}\]
and, as a result, we obtain the following integration formula
\[\begin{split} 2\kappa|\Phi|^{2}\int d^{2}\xi&\left( \frac{((K-L)\xi)^{2}}{((K\xi)^{2}+1)^{2}((L\xi)^{2}+1)}\right.\\ &\left.+\frac{((K-L)\xi)^{2}}{((K\xi)^{2}+1)((L\xi)^{2}+1)^{2}} \right).\end{split} \tag{12}\]
Let us denote the above contribution (12) to the potential by \(V_{1}\). In order to compute this integral, notice that the second term is just obtained from the first one by exchanging \(K\) with \(L\). So, for a moment we concentrate on the first term only. Let then \(S\) be an orthogonal matrix that diagonalizes \((K^{-1})^{T}L^{T}LK^{-1}\) to \(\Lambda^{T}\Lambda\), i.e.
\[S^{-1}(K^{-1})^{T}L^{T}LK^{-1}S=\Lambda^{T}\Lambda, \tag{13}\]
and take \(P=K^{-1}S\). Here we have used the fact that \((K^{-1})^{T}L^{T}LK^{-1}\) has positive eigenvalues.
Then changing the coordinates according to \(P\eta=\xi\) we obtain
\[\begin{split}&\det(K^{-1}S)\int d^{2}\eta\frac{((S-LK^{-1}S) \eta)^{2}}{(\eta^{2}+1)^{2}((\Lambda\eta)^{2}+1)}\\ &\equiv\det(K^{-1}S)\int d^{2}\eta\frac{(W\eta)^{2}}{(\eta^{2}+1) ^{2}((\Lambda\eta)^{2}+1)},\end{split} \tag{14}\]
where we have introduced \(W=S-LK^{-1}S\).
As the denominator is symmetric with respect to \(\eta\), only the parts with \(\eta_{1}^{2},\eta_{2}^{2}\) in the numerator will enter, so the considered part of the potential reads
\[\begin{split}& V_{1}=2\kappa|\Phi|^{2}\det(K^{-1}S)\\ &\times\int d^{2}\eta\left((W_{11}^{2}+W_{21}^{2})\eta_{1}^{2}+( W_{22}^{2}+W_{12}^{2})\eta_{2}^{2}\right)\\ \times&\left(\frac{1}{(\eta^{2}+1)^{2}\left((\Lambda _{1}\eta_{1})^{2}+(\Lambda_{2}\eta_{2})^{2}+1\right)}\right.\\ &\left.+\frac{1}{(\eta^{2}+1)\left((\Lambda_{1}\eta_{1})^{2}+( \Lambda_{2}\eta_{2})^{2}+1\right)^{2}}\right),\end{split} \tag{15}\]
where we used the fact that \((W\eta)^{2}=\eta^{T}W^{T}W\eta\) is symmetric with respect to the interchange \(K\leftrightarrow L\). This can be written as
\[\begin{split} 2\kappa|\Phi|^{2}\det(K^{-1}S)&\left[(W_{11}^{2}+W_{2 1}^{2})I_{1}(1,1,\Lambda_{1},\Lambda_{2})\right.\\ &+\left.(W_{22}^{2}+W_{12}^{2})I_{1}(1,1,\Lambda_{2},\Lambda_{1}) \right.\\ &+\left.(W_{11}^{2}+W_{21}^{2})I_{1}(\Lambda_{1},\Lambda_{2},1,1) \right.\\ &+\left.(W_{22}^{2}+W_{12}^{2})I_{1}(\Lambda_{2},\Lambda_{1},1,1 )\right],\end{split} \tag{16}\]
where
\[\begin{split}& I_{1}(a,b,c,d)\\ &=\int_{\mathbb{R}^{2}}\frac{\xi_{1}^{2}d\xi_{1}d\xi_{2}}{(a^{2} \xi_{1}^{2}+b^{2}\xi_{2}^{2}+1)^{2}(c^{2}\xi_{1}^{2}+d^{2}\xi_{2}^{2}+1)}\\ &=\frac{c}{a}\frac{\pi}{(c^{2}-a^{2})(bc+ad)}+\frac{\pi}{(c^{2}- a^{2})^{3/2}(b^{2}-d^{2})^{1/2}}\\ &\times\left[\arcsin\left(\frac{a\sqrt{b^{2}-d^{2}}}{\sqrt{b^{2}c^ {2}-a^{2}d^{2}}}\right)\right.\\ &\left.-\arcsin\left(\frac{c\sqrt{b^{2}-d^{2}}}{\sqrt{b^{2}c^{2}- a^{2}d^{2}}}\right)\right].\end{split} \tag{17}\]
Since
\[\begin{split} I_{1}(1,1,\Lambda_{1},\Lambda_{2})+I_{1}(\Lambda_{1},\Lambda_{2},1,1)\\ &\hskip 14.226378pt=\frac{\pi}{\Lambda_{1}+\Lambda_{2}}\,\frac{1}{ \Lambda_{1}},\end{split} \tag{18}\]
we finally get
\[\begin{split} V_{1}&=\frac{2\pi\kappa|\Phi|^{2}\det(K^{- 1}S)}{\Lambda_{1}+\Lambda_{2}}\\ &\times\left(\frac{W_{11}^{2}+W_{21}^{2}}{\Lambda_{1}}+\frac{W_{22 }^{2}+W_{12}^{2}}{\Lambda_{2}}\right)\\ &=\frac{2\pi\kappa|\Phi|^{2}\det(K^{-1}S)}{\text{Tr}(\Lambda)} \text{Tr}(W^{T}W\Lambda^{-1}).\end{split} \tag{19}\]
But since \(S\) is an orthogonal matrix, this expression can be further reduced to
\[2\pi\kappa|\Phi|^{2}\frac{\text{Tr}(W^{T}W\Lambda^{-1})}{\text{Tr}(\Lambda)\det (K)}. \tag{20}\]
Let us now express (20) in terms of invariants of the matrix \(X=LK^{-1}\). First, by (13) we have
\[\text{Tr}(\Lambda)=\text{Tr}((X^{T}X)^{1/2}). \tag{21}\]
Next, again by (13) and the fact that \(X^{T}X\) is positive (hence \(\Lambda^{T}\Lambda\) has positive eigenvalues), we have
\[\begin{split}\Lambda^{-1}&=(S^{-1}X^{T}XS)^{-1/2}=S ^{-1}(X^{T}X)^{-1/2}S\\ &=S^{T}(X^{T}X)^{-1/2}S,\end{split} \tag{22}\]
and therefore
\[\begin{split} W^{T}W\Lambda^{-1}&=S^{T}(X^{T}X)^{-1 /2}S\\ &-S^{T}X(X^{T}X)^{-1/2}S\\ &-S^{T}X^{T}(X^{T}X)^{-1/2}S\\ &+S^{T}(X^{T}X)^{1/2}S,\end{split} \tag{23}\]
so that
\[\begin{split}\text{Tr}(W^{T}W\Lambda^{-1})&=\text{ Tr}((X^{T}X)^{-1/2})\\ &-\text{Tr}(X(X^{T}X)^{-1/2})\\ &-\text{Tr}(X^{T}(X^{T}X)^{-1/2})\\ &+\text{Tr}((X^{T}X)^{1/2}).\end{split} \tag{24}\]
Moreover, since \(X^{T}X\) has only positive eigenvalues, we also have
\[\begin{split}(X(X^{T}X)^{-1/2})^{T}&=((X^{T}X)^{-1 /2})^{T}X^{T}\\ &=\left[\left((X^{T}X)^{1/2}\right)^{T}\right]^{-1}X^{T}\\ &=\left[\left((X^{T}X)^{T}\right)^{1/2}\right]^{-1}X^{T}\\ &=(X^{T}X)^{-1/2}X^{T}\end{split} \tag{25}\]
and therefore,
\[\begin{split}\text{Tr}\left(X^{T}(X^{T}X)^{-1/2}\right)& =\text{Tr}\left((X(X^{T}X)^{-1/2})^{T}\right)\\ &=\text{Tr}(X(X^{T}X)^{-1/2}),\end{split} \tag{26}\]
so that
\[\begin{split}\text{Tr}(W^{T}W\Lambda^{-1})&=\text{ Tr}((X^{T}X)^{-1/2})\\ &-2\,\text{Tr}(X(X^{T}X)^{-1/2})\\ &+\text{Tr}((X^{T}X)^{1/2}).\end{split} \tag{27}\]
As a result,
\[\begin{split}& V_{1}=\frac{2\pi\kappa|\Phi|^{2}}{\det(K)}\cdot \frac{1}{\text{Tr}((X^{T}X)^{1/2})}\\ &\times\left[\text{Tr}((X^{T}X)^{-1/2})-2\,\text{Tr}(X(X^{T}X)^{ -1/2})\right.\\ &\left.+\text{Tr}((X^{T}X)^{1/2})\right].\end{split} \tag{28}\]
Since for the \(2\times 2\) matrix \((X^{T}X)^{1/2}\) we have
\[\begin{split}\text{Tr}((X^{T}X)^{-1/2})&=\frac{ \text{Tr}((X^{T}X)^{1/2})}{\det((X^{T}X)^{1/2})}\\ &=\frac{\text{Tr}((X^{T}X)^{1/2})}{|\det(X)|},\end{split} \tag{29}\]
this expression can be further rewritten as
\[\begin{split}& V_{1}=\frac{2\pi\kappa|\Phi|^{2}}{\det(K)}\\ &\times\left(1+\frac{1}{|\det(X)|}-2\frac{\text{Tr}(X(X^{T}X)^{- 1/2})}{\text{Tr}((X^{T}X)^{1/2})}\right).\end{split} \tag{30}\]
In order to proceed further, we use the fact [30] that for a \(2\times 2\) positive matrix \(Y\) we have
\[\sqrt{Y}=\frac{Y+\sqrt{\det(Y)}I}{\sqrt{\operatorname{Tr}(Y)+2\sqrt{\det(Y)}}}, \tag{31}\]
and hence
\[\operatorname{Tr}(\sqrt{Y})=\sqrt{\operatorname{Tr}(Y)+2\sqrt{\det(Y)}}. \tag{32}\]
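This closed form for the square root of a \(2\times 2\) positive matrix can be verified independently against a general-purpose matrix square root; the snippet below (a check added here, with an arbitrary positive-definite sample matrix) does so.

```python
import numpy as np
from scipy.linalg import sqrtm

Y = np.array([[2.0, 0.7], [0.7, 1.5]])        # arbitrary positive-definite 2x2 matrix

root = (Y + np.sqrt(np.linalg.det(Y)) * np.eye(2)) \
       / np.sqrt(np.trace(Y) + 2 * np.sqrt(np.linalg.det(Y)))    # formula (31)

assert np.allclose(root @ root, Y)
assert np.allclose(root, sqrtm(Y))            # agrees with a generic matrix square root
```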
Furthermore,
\[\begin{split}& Y^{-1/2}=\sqrt{Y^{-1}}\\ &=\frac{Y^{-1}+\sqrt{\det(Y^{-1})}I}{\sqrt{\operatorname{Tr}(Y^ {-1})+2\sqrt{\det(Y^{-1})}}}\\ &=\frac{Y^{-1}+\frac{1}{\sqrt{\det(Y)}}I}{\sqrt{\frac{ \operatorname{Tr}(Y)}{\det(Y)}+\frac{2}{\sqrt{\det(Y)}}}}\\ &=\frac{\sqrt{\det(Y)}Y^{-1}+I}{\sqrt{\operatorname{Tr}(Y)+2 \sqrt{\det(Y)}}}.\end{split} \tag{33}\]
Therefore,
\[\operatorname{Tr}(XY^{-1/2})=\frac{\sqrt{\det(Y)}\operatorname{Tr}(XY^{-1})+ \operatorname{Tr}(X)}{\sqrt{\operatorname{Tr}(Y)+2\sqrt{\det(Y)}}} \tag{34}\]
and
\[\frac{\operatorname{Tr}(XY^{-1/2})}{\operatorname{Tr}(Y^{1/2})}=\frac{\sqrt{ \det(Y)}\operatorname{Tr}(XY^{-1})+\operatorname{Tr}(X)}{\operatorname{Tr}(Y )+2\sqrt{\det(Y)}}. \tag{35}\]
But in our case \(Y=X^{T}X\), so that \(XY^{-1}=X(X^{T}X)^{-1}=XX^{-1}(X^{T})^{-1}=(X^{T})^{-1}\) and hence \(\operatorname{Tr}(XY^{-1})=\operatorname{Tr}(X^{-1})=\frac{\operatorname{Tr}( X)}{\det(X)}\). Furthermore, \(\sqrt{\det(Y)}=|\det(X)|\) and therefore
\[\frac{\operatorname{Tr}(XY^{-1/2})}{\operatorname{Tr}(Y^{1/2})}=\frac{\left(1 +\operatorname{sgn}(\det(X))\right)\operatorname{Tr}(X)}{\operatorname{Tr}( X^{T}X)+2|\det(X)|}. \tag{36}\]
For \(\det(X)<0\) this term vanishes identically. On the other hand, for \(\det(X)>0\) we get
\[\begin{split}& V_{1}=\frac{2\pi\kappa|\Phi|^{2}}{\det(K)}\\ &\times\left(1+\frac{1}{\det(X)}-\frac{4\operatorname{Tr}(X)}{ \operatorname{Tr}(X^{T}X)+2\det(X)}\right).\end{split} \tag{37}\]
Observe that the first two terms are, in fact, again of the same type as (9), however with an opposite sign. Therefore, in the end, the \(|\Phi|^{2}\)-dependent contribution proportional to the volumes of the manifolds vanishes due to this cancellation. What remains as an effective interaction between the zweibeins on the two worldsheets is
\[V_{\text{int}}=-\frac{8\pi\kappa|\Phi|^{2}}{\det(K)}\left(\frac{\operatorname{ Tr}(X)}{\operatorname{Tr}(X^{T}X)+2\det(X)}\right). \tag{38}\]
This can be further simplified using the fact that
\[\det(X)=\frac{1}{2}\left(\operatorname{Tr}(X)^{2}-\operatorname{Tr}(X^{2}) \right).\]
Note that although the expression above uses \(K\) and \(X\) it can be equivalently rewritten using \(L\) and \(X^{-1}\) as it is completely symmetric.
We remark that the above cancellation was possible due to the fact that we are working in two dimensions. Indeed, the first term in (37) has its form because for a \(2\times 2\) matrix the trace of its inverse can be expressed in terms of the trace of the original matrix and its determinant, as it was done in Eq.(29). As a result, in two dimensions, the contribution to the action from \(\mathfrak{b}_{2}\) does not affect the effective cosmological constant, in contrast to dimension four [22]. Moreover, the interaction is non-zero only if \(\det(X)\) is positive, i.e., both \(K\) and \(L\) have determinants of the same signs.
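As a consistency check on the chain of manipulations leading from (12) to (37), the closed form can be compared with a direct numerical quadrature of (12). The sketch below (an illustration added here, not part of the original text; the sample zweibein matrices are arbitrary with \(\det(X)>0\), and \(\kappa=|\Phi|=1\)) performs this comparison.

```python
import numpy as np
from scipy.integrate import dblquad

K = np.array([[2.0, 0.5], [0.3, 1.5]])        # sample zweibein matrices with det(L K^-1) > 0
L = np.array([[1.0, 0.2], [0.4, 2.0]])

def norm2(A, x, y):
    v = A @ np.array([x, y])
    return v @ v

def integrand(y, x):                          # integrand of (12) with kappa = |Phi| = 1
    num = norm2(K - L, x, y)
    a, b = norm2(K, x, y) + 1.0, norm2(L, x, y) + 1.0
    return 2.0 * num * (1.0 / (a**2 * b) + 1.0 / (a * b**2))

numeric, _ = dblquad(integrand, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)

X = L @ np.linalg.inv(K)
closed = (2 * np.pi / np.linalg.det(K)) * (1 + 1 / np.linalg.det(X)
          - 4 * np.trace(X) / (np.trace(X.T @ X) + 2 * np.linalg.det(X)))   # formula (37)
print(numeric, closed)                        # should agree up to quadrature error
```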
Before we discuss the dependence of the above term on the metrics let us consider a special case, of diagonal zweibeins.
### A special case: diagonal constant zweibeins
We consider now the diagonal case, i.e. with \(K=\operatorname{diag}(a_{1},b_{1})\) and \(L=\operatorname{diag}(a_{2},b_{2})\). In such a
situation we have
\[X=\text{diag}\left(\frac{a_{2}}{a_{1}},\frac{b_{2}}{b_{1}}\right),\]
so that
\[\begin{split}\text{Tr}(X)=\frac{a_{1}b_{2}+a_{2}b_{1}}{a_{1}b_{1}},\quad\text{Tr}(X^{T}X)=\frac{a_{1}^{2}b_{2}^{2}+a_{2}^{2}b_{1}^{2}}{a_{1}^{2 }b_{1}^{2}},\\ \text{det}(X)=&\frac{a_{2}b_{2}}{a_{1}b_{1}},\end{split} \tag{39}\]
and therefore if we assume that \(\text{det}(X)>0\) then
\[V_{\text{int}}=-\frac{8\pi\kappa|\Phi|^{2}}{a_{1}b_{2}+a_{2}b_{1}}. \tag{40}\]
## 3 Formulation in terms of metrics
The derivation of the interaction terms between the two worldsheets was expressed in the previous section directly in terms of zweibeins, which define the Dirac operators. However, it is interesting to see which part of it can be rewritten explicitly in terms of the metrics. In this section we discuss this possibility, using the metrics \(g,h\) on the two layers of the doubled model.
First of all, observe that if a zweibein is given by the matrix
\[K=\begin{pmatrix}a_{1}&d_{1}\\ c_{1}&b_{1}\end{pmatrix}, \tag{41}\]
then we have
\[g^{-1}=\begin{pmatrix}a_{1}^{2}+d_{1}^{2}&a_{1}c_{1}+b_{1}d_{1}\\ a_{1}c_{1}+b_{1}d_{1}&b_{1}^{2}+c_{1}^{2}\end{pmatrix}, \tag{42}\]
and the metric
\[\begin{split} g=&\frac{1}{(a_{1}b_{1}-c_{1}d_{1})^{2}}\\ &\times\begin{pmatrix}b_{1}^{2}+c_{1}^{2}&-(a_{1}c_{1}+b_{1}d_{1})\\ -(a_{1}c_{1}+b_{1}d_{1})&a_{1}^{2}+d_{1}^{2}\end{pmatrix}.\end{split} \tag{43}\]
Therefore,
\[\text{det}(K)=a_{1}b_{1}-c_{1}d_{1}=\sqrt{\text{det}(g^{-1})}, \tag{44}\]
and similarly for the matrix \(L\) and the metric \(h\). Next,
\[X=\sqrt{\text{det}(g)}\begin{pmatrix}a_{2}b_{1}-c_{1}d_{2}&a_{1}d_{2}-a_{2}d_ {1}\\ b_{1}c_{2}-c_{1}b_{2}&a_{1}b_{2}-d_{1}c_{2}\end{pmatrix} \tag{45}\]
and
\[\begin{split}\text{det}(X)=&\frac{\text{det}(L)}{\text{det}( K)}=\frac{\sqrt{\text{det}(h^{-1})}}{\sqrt{\text{det}(g^{-1})}}\\ =&\sqrt{\text{det}(gh^{-1})}=\text{det}\left(\sqrt{gh^{-1}} \right).\end{split} \tag{46}\]
Therefore, the only obstruction to rephrasing the action in terms of metrics rather than zweibeins is given by the term that contains a trace of the matrix \(X\) as well as the trace of \(X^{T}X\).
If the zweibeins are such that these quantities are expressible in terms of invariants of \(g\), \(h\), and/or \(gh^{-1}\), then the full action can be unambiguously written on the level of metrics. However, in the most general case, we are dealing with interactions between zweibeins rather than metrics. This is in contrast to the standard bimetric theory, which was formulated from the beginning using metrics. The vielbein version was shown [31] to be dynamically equivalent to the usual bimetric theory, however, that is not necessarily true for multi-metric models (see [32]). Of course, this is a consequence of the assumed model of interactions between the two vielbeins and the basic structure where spinor fields and vielbeins are fundamental objects. It is interesting whether there exists a model of the full Dirac operator, where the effective potential depends only on the metrics.
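The zweibein-to-metric dictionary used above, \(g^{-1}=KK^{T}\), \(\det(K)=\sqrt{\det(g^{-1})}\) and \(\det(X)=\det\big(\sqrt{gh^{-1}}\big)\), is also easy to confirm numerically; the short check below (added here for illustration, with arbitrary sample zweibeins) does so.

```python
import numpy as np

K = np.array([[2.0, 0.5], [0.3, 1.5]])        # sample zweibein matrices
L = np.array([[1.0, 0.2], [0.4, 2.0]])

g_inv, h_inv = K @ K.T, L @ L.T               # (42): inverse metrics from the zweibeins
g = np.linalg.inv(g_inv)
X = L @ np.linalg.inv(K)

assert np.isclose(np.linalg.det(K), np.sqrt(np.linalg.det(g_inv)))       # (44)
assert np.isclose(np.linalg.det(X), np.sqrt(np.linalg.det(g @ h_inv)))   # (46)
```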
### A special case: diagonal constant zweibeins
In the case of diagonal zweibeins, from the form of the principal symbol, we easily read that the metrics are such that their inverses are of the form \(\text{diag}(a^{2},b^{2})\), i.e., they are also diagonal. In this case \(\sqrt{\text{det}(g)}=\frac{1}{a_{1}b_{1}}\) and \(\sqrt{\text{det}(h)}=\frac{1}{a_{2}b_{2}}\).
Let us consider the matrix \(\mathbb{X}=\sqrt{g^{-1}h}\). Its eigenvalues are \(x=\frac{a_{1}}{a_{2}}\) and \(y=\frac{b_{1}}{b_{2}}\), and therefore,
\[\begin{split} V_{\mathrm{int}}&=-8\pi\kappa|\Phi|^{2} \sqrt{\det(g)}\underbrace{\frac{xy}{x+y}}_{V\left(\sqrt{g^{-1}h}\right)}\\ &=-8\pi\kappa|\Phi|^{2}\sqrt{\det(h)}V\left(\sqrt{h^{-1}g} \right).\end{split} \tag{47}\]
## 4 Lorentzian formulation and conclusions
Since the derivation and applicability of the spectral methods rely on a purely Riemannian framework, the final action describes a Riemannian model. However, we can use the standard Wick rotation argument to make contact with the Lorentzian formulation. The procedure is as follows. Since we started with the modified Connes-Lott model, the effective interaction originating from the field \(\Phi\) has the character of a Higgs-like one. The two-dimensional Riemannian manifold we started with is now replaced by a \((1+1)\)-dimensional string, and we end up with a model of two Lorentzian strings interacting via a Higgs-like field. The interaction is given at the level of corresponding zweibeins for these two strings. This is the two-dimensional generalization of the picture with four-dimensional branes we discussed previously in [26] to make a comparison between doubled geometry models [21; 22] and Hassan-Rosen bimetric gravity theories [24].
Contrary to the four-dimensional situation, in dimension two we compute the zeta function rather than a residue, and the resulting potential can be computed analytically for generic zweibeins. The resulting interaction is non-zero only if the matrices corresponding to the zweibeins have determinants of the same signs, and is then expressed in terms of the invariants of the matrix \(X=LK^{-1}\) with parts of the result expressible only in terms of metrics. This is analogous to the formulation of bimetric gravity in terms of vielbeins and metrics [31] and similar to certain choices for vielbeins that ensure the existence of real square root \(\sqrt{g^{-1}h}\)[33]. We postpone the detailed discussion of this aspect for future research.
Acknowledgments. A. B. thanks S.F. Hassan for insightful discussions and for stating the question that motivated this study. The authors acknowledge the support of the National Science Centre, Poland, Grant No. 2020/37/B/ST1/01540. Competing interests. The authors declare no competing interests.
Author contributions. Both authors contribute equally to the work as well as to the writing of the manuscript.
Data availability. All data generated or analyzed during this study are included in the manuscript.
|
2307.12679 | An Estimator for the Sensitivity to Perturbations of Deep Neural
Networks | For Deep Neural Networks (DNNs) to become useful in safety-critical
applications, such as self-driving cars and disease diagnosis, they must be
stable to perturbations in input and model parameters. Characterizing the
sensitivity of a DNN to perturbations is necessary to determine minimal
bit-width precision that may be used to safely represent the network. However,
no general result exists that is capable of predicting the sensitivity of a
given DNN to round-off error, noise, or other perturbations in input. This
paper derives an estimator that can predict such quantities. The estimator is
derived via inequalities and matrix norms, and the resulting quantity is
roughly analogous to a condition number for the entire neural network. An
approximation of the estimator is tested on two Convolutional Neural Networks,
AlexNet and VGG-19, using the ImageNet dataset. For each of these networks, the
tightness of the estimator is explored via random perturbations and adversarial
attacks. | Naman Maheshwari, Nicholas Malaya, Scott Moe, Jaydeep P. Kulkarni, Sudhanva Gurumurthi | 2023-07-24T10:33:32Z | http://arxiv.org/abs/2307.12679v1 | # An Estimator for the Sensitivity to Perturbations of Deep Neural Networks
###### Abstract
For Deep Neural Networks (DNNs) to become useful in safety-critical applications, such as self-driving cars and disease diagnosis, they must be stable to perturbations in input and model parameters. Characterizing the sensitivity of a DNN to perturbations is necessary to determine minimal bit-width precision that may be used to safely represent the network. However, no general result exists that is capable of predicting the sensitivity of a given DNN to round-off error, noise, or other perturbations in input. This paper derives an estimator that can predict such quantities. The estimator is derived via inequalities and matrix norms, and the resulting quantity is roughly analogous to a condition number for the entire neural network. An approximation of the estimator is tested on two Convolutional Neural Networks, AlexNet and VGG-19, using the ImageNet dataset. For each of these networks, the tightness of the estimator is explored via random perturbations and adversarial attacks.
## I Introduction
Deep neural networks (DNNs) achieve high accuracy in many machine learning tasks such as object recognition, speech recognition, and natural language processing. Many kernels in deep learning, particularly in convolutional neural networks (CNNs) are designed for computer vision tasks and are dominated by computation. For example, AlexNet [1] has nearly 62.5 million parameters, 0.65 million neurons, 2.3 million weights (4.6 MB of storage), and requires 666 million MACs per 227x227 image (13 kMACs/pixel); whereas VGG-16 [2] possesses 14.7 million weights (29.4 MB of storage) and requires 15.3 billion MACs per 224x224 image (306KMACs/pixel) [3]. The trend in current architectures is towards networks with more layers, which require more storage and computation per pixel.
Lowering bit-widths ("Quantization") in both training and inference is therefore advantageous because it permits faster computation and reduced power use, particularly when the underlying hardware can natively support reduced precision. However, higher performance must be balanced by the need for accuracy, particularly in safety-critical systems such as autonomous driving and health-care solutions where failure can be catastrophic. While many CNN architectures are resilient to lower precision, no rigorous and general result exists that provides _a-priori_ estimates of the accuracy impact from reduced precision. Similarly, inference is known to be more amenable to reduced precision than training, but no theoretical result exists explaining this observation [4, 5].
Noting that modern DNNs are governed by computational arithmetic, and so are amenable to the tools of numerical analysis, the approach presented in this paper develops an estimator that predicts the impact of small perturbations to the inputs of a neural network on its output. This analysis must be performed on a network-by-network basis. This estimator can further be used to predict the minimal input precision required for that particular neural network such that the network does not become less stable from a perturbation to its input due to quantization.
The paper is organized as follows. Related work in reduced precision analysis for neural networks is discussed in Subsection I-A. Section II derives an estimator of the condition number for a single neuron as a simple example, and then the analysis is expanded to a multi-layer neural network. Section III discusses the techniques used to test the estimator with the aid of adversarial perturbation generation methods. Section IV presents the estimator results applied to two canonical CNN architectures, AlexNet and VGG-19 using both random perturbations and adversarial attack techniques to generate sample perturbations. Section V concludes the work.
### _Related work_
Recent research efforts have shown that neural networks are amenable to reduced precision. Prior DNN precision research work can be broadly classified into four categories. In the first, quantization techniques are explored for different neural network parameters during training and inference to compress the network with minimal loss of accuracy. The second is an empirical exploration of quantization for neural network parameters across different layers to achieve the same accuracy as the full precision model. The third is based on the derivation of theoretical bounds for neural networks with limited precision. The last category of prior work, which most closely relates to this paper, uses of the tools of numerical linear algebra such as matrix norms to study the properties of DNNs. The following
four paragraphs summarize the related works in each of the four categories respectively.
(1) Training or inference with binary or ternary weights and activations can achieve comparable accuracy to full precision networks [6, 7, 8, 5]. [9] tabulated various observations of different precision settings for weights and activations and also demonstrated how two precision can be used for gradients at training time. Recently, [10] proposed 2-bit Quantized Neural Networks (QNNs) using techniques to individually target weight and activation quantizations which achieve higher accuracy than previous quantization schemes. [11] explored reducing the numerical precision of weights and biases for different recurrent neural networks (RNNs) empirically and concluded that weight binarization techniques are of limited use for RNNs while ternarization schemes yield similar accuracy to the baseline versions. All these methods help compress the size of the neural networks; however, they are unable to predict the impact of quantization _a-priori_.
(2) [12] studied per layer quantization and observed that the tolerance of CNNs to reduced precision data varies not only across networks, but also within layers. They proposed an empirical method to find a low precision configuration for a network while maintaining high accuracy. However, this method requires simulations and does not guarantee maintaining accuracy. [13] is an extension of the previous work, where the authors proposed a method called Proteus which analyzes a given DNN implementation and maintains the native precision of the compute engine by converting to and from a fixed-point reduced precision format used in memory. This enables using different representation per layer for neuron activations and weights. [14] presented a learning scheme to enable heterogeneous allocation of precision across layers for a fixed precision budget. The scheme is based on stochastic exploration for the DNN to determine an optimal precision configuration and also leads to favorable regularization. However, the optimal value of the precision budget is again not known beforehand.
(3) [15] derived theoretical bounds on the misclassification rate in the presence of limited precision. This work establishes bounds that limit misclassification after quantizing activations and weights to a fixed-point format from floating point. However, this work only bounds the accuracy loss between floating-point and fixed-point. [16] studied the impact of limited numerical precision on neural network training and the impact of rounding scheme in determining network's behavior during training. It shows that 16-bit fixed-point representation incurs little accuracy degradation by using stochastic rounding but does not study the precision requirements during inference. [17] explored training at reduced precision but is mainly limited to linear models.
(4) This paper leverages the tools of numerical linear algebra, particularly stability analysis, to establish general bounds that can be imposed on the precision. Such an approach is not without precedent, and investigations of the properties imposed on a network by the measure of the weight matrix can be at least traced back to [18], who showed that the generalization performance of a well-trained neural network with small training error depends on the magnitude of the weights, not the number. Reasoning about the generalization ability of a neural network in terms of the size, or norm, of its weight vector is called norm-based capacity control and [19] evaluated this for feed-forward neural networks. Along with capacity, they also investigated the convexity and characterization of the neural networks. [20] evaluated the spectral complexity of the neural networks using the Lipschitz constant (i.e., the product of the spectral norms of their weight matrices). [21] showed that the Fisher-Rao norm provides an estimate of the size of the network weights and associates this with the trained network's generalization capacity, which may be related to a network's susceptibility to perturbation or adversarial attack. Another recent work, [22], shows that error amplifications are a mode by which quantized models are prone to adversarial attacks. This empirical study also proposes a Defensive Quantization (DQ) method which controls the Lipschitz constant of the network during quantization. In contrast, this paper uses the measure of the weight matrix to study the stability of the neural networks to reduced precision and derive the precision requirements based on the estimator presented in Section II.
## II Stability Bounds on Forward Propagation
In numerical analysis, the condition number of a system is a measure of the change in the output value of a function or a network for a small change in the input argument. If the condition number of a system is \(\kappa(A)=10^{k}\), then up to k digits of accuracy may be lost on top of the loss of precision from arithmetic methods. This work derives an analogue of the condition number for neural networks as,
\[\kappa=\left(\frac{\left\|\delta_{y}\right\|/\left\|y\right\|}{\left\|\delta_ {x}\right\|/\left\|x\right\|}\right). \tag{1}\]
Where \(x\) and \(y\) are the input and output to the network, respectively, and \(\delta_{x}\) and \(\delta_{y}\) are small perturbations to these quantities. The resulting quantity \(\kappa\) estimates the susceptibility of a network to perturbations. We now derive \(\kappa\) for a single neuron to demonstrate the methodology and generate intuition, and then generalize to an n-layer network.
### _Single Neuron Estimator Derivation_
Consider first a single neuron, neglecting bias. Each input (\(x_{1},x_{2},\ldots,x_{n}\)) is multiplied by an associated weight (\(\theta_{1},\theta_{2},\ldots,\theta_{n}\)). The results are then passed through an activation function, \(f\) to produce the output, \(y\). This is expressed as,
\[y=f(\sum_{i}\theta_{i}x_{i})=f(\theta_{1}x_{1}+\theta_{2}x_{2}+\cdots+\theta_ {n}x_{n}).\]
Consider a common and representative activation function, the rectified linear unit (ReLU),1
Footnote 1: The results shown subsequently apply to other common activation functions, and are detailed in Appendix A.
\[f(z)=\begin{cases}z,&\text{for }z>0\\ 0,&\text{for }z\leq 0\end{cases}\]
which, due to its simplicity, has the added appeal of making subsequent computations more straightforward. Notice that ReLU is simply \(\text{max}(0,z)\).
In the rest of this paper, we will use the shorthand \(\theta\,x\) to indicate multiplication of a matrix of weights \(\theta\) by a matrix \(x\). Additionally, if \(f\) is a scalar defined function and \(x\) is a matrix, we will use the shorthand \(f(x)\) to indicate \(f\) applied to each entry of \(x\). Then, the output of a single layer neural net satisfies,
\[\|y\|=\|f(\theta x)\|=\|\text{max}(0,\theta x)\| \tag{2}\]
where \(\|\cdot\|\) denotes the generalized norm. To analyze the stability of the single layer neural network, we introduce a small perturbation, \(\delta_{x}\), to the inputs. The resulting perturbation to the outputs is defined as \(\delta_{y}\). Then,
\[\delta_{y}:=f(\theta(x+\delta_{x}))-f(\theta x),\]
\[\|\delta_{y}\|=\|f(\theta(x+\delta_{x}))-f(\theta x)\|\\ =\|f(\theta x+\theta(\delta_{x}))-f(\theta x)\|\,. \tag{3}\]
By the triangle inequality,
\[\|f(\theta x+\theta(\delta_{x}))-f(\theta x)\|\leq\\ \|f(\theta x+\theta(\delta_{x})-\theta x)\|=\|f(\theta(\delta_{x }))\|\,. \tag{4}\]
Using Equations (2), (3) and (4),
\[\|\delta_{y}\|\leq\|f(\theta(\delta_{x}))\|\leq\|\theta\delta_{x}\|\leq\|\theta \|\,\|\delta_{x}\|\,, \tag{5}\]
\[\frac{\|\delta_{y}\|}{\|\delta_{x}\|}\leq\|\theta\|\,.\]
This result indicates that the amplification of a perturbation to the input of a single layer neural network is bounded by the norm of the weight matrix. However, this quantity is only significant relative to the overall magnitude of the data. For example, what if the network \(f(\theta x)\) shrinks the magnitude of every input? Then, even though the quantity \(\frac{\|\delta_{y}\|}{\|\delta_{x}\|}\) is small, it may be a large perturbation relative to the magnitude of the initial quantity, \(\frac{\|y\|}{\|x\|}\). This intuition suggests that the pertinent quantity to study is therefore,
\[\frac{\|\delta_{y}\|}{\|\delta_{x}\|}\bigg{/}\bigg{(}\frac{\|y\|}{\|x\|}\bigg{)}= \bigg{(}\frac{\|\delta_{y}\|/\left\|y\right\|}{\|\delta_{x}\|/\left\|x\right\|}\bigg{)}.\]
This is the quantity that was introduced in Equation (1). Our estimator is the maximal value of this quantity,
\[\tilde{\kappa}=\max_{x\neq 0}\kappa=\max_{x\neq 0}\bigg{(}\frac{\|\delta_{y}\| \,/\,\|y\|}{\|\delta_{x}\|\,/\,\|x\|}\bigg{)}. \tag{6}\]
Unfortunately, this estimator cannot be computed exactly unless it is known that \(\|y\|\geq C>0,\forall\,x\neq 0\) for some positive constant \(C\). Therefore, the estimator \(\tilde{\kappa}\) is approximated by calculating a set of \(\kappa\)'s for many perturbations \(\delta_{x}\) and inputs \(x\).
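As a concrete illustration of this sampling procedure, the following minimal sketch approximates \(\tilde{\kappa}\) for a single ReLU layer; the random weights, input dimensions, and perturbation scale are illustrative assumptions, not the networks studied later.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal((64, 128))        # illustrative single-layer weights
relu = lambda z: np.maximum(0.0, z)

def kappa(x, delta_x):
    y = relu(theta @ x)
    delta_y = relu(theta @ (x + delta_x)) - y
    return (np.linalg.norm(delta_y) / np.linalg.norm(y)) / (
        np.linalg.norm(delta_x) / np.linalg.norm(x))

# Approximate the estimator by sampling many inputs and small perturbations.
samples = [kappa(rng.standard_normal(128), 1e-6 * rng.standard_normal(128))
           for _ in range(10_000)]
print(max(samples))   # empirical estimate of kappa-tilde for this layer
```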
### _Multi-layer Network Estimator Derivation_
The analysis of the previous section is now generalized to many-layer networks. Stability and rounding error concerns become more complicated when considering additional layers because each layer could introduce an error from rounding, which is amplified in the subsequent layers. The multi-layer case has the form,
\[\frac{\|\delta_{y}\|}{\|\delta_{x}\|}\leq\prod_{j=0}^{n-1}\|\theta_{n-j}\|\,. \tag{7}\]
Where \(\|\theta_{i}\|\) is the norm of the weight matrix of layer \(i\) from the input side. Considering perturbations at every single layer due to rounding error for a simple feedforward neural network with n-layers results in,
\[\frac{\|\delta_{y}\|}{\|\delta_{x}\|}\leq\sum_{i=1}^{n}\left(\prod_{j=0}^{i-1} \|\theta_{n-j}\|\right). \tag{8}\]
This is the product of all current and previous weight matrices, summed over each layer. The above equation indicates that the deeper layers cause the perturbations to grow because they amplify the rounding errors from previous layers. To aid in intuition, Figure 1 shows the computation of \(\frac{\|\delta_{y}\|}{\|\delta_{x}\|}\) for a simple three layer neural network.
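The bound in Equation (8) can be evaluated directly from the per-layer weight matrices. The sketch below uses spectral norms and randomly generated layers purely for illustration; any consistent matrix norm could be substituted.

```python
import numpy as np

def perturbation_bound(weights):
    """Right-hand side of Eq. (8); `weights` is ordered input layer first."""
    norms = [np.linalg.norm(W, ord=2) for W in weights]  # spectral norm per layer
    n = len(norms)
    bound = 0.0
    for i in range(1, n + 1):          # rounding error entering at layer i ...
        prod = 1.0
        for j in range(i):             # ... amplified by layers n-j, j = 0..i-1
            prod *= norms[n - 1 - j]
        bound += prod
    return bound

rng = np.random.default_rng(0)
layers = [rng.standard_normal((32, 64)),
          rng.standard_normal((32, 32)),
          rng.standard_normal((10, 32))]
print(perturbation_bound(layers))
```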
It is interesting to note that many special layers, such as dropout, batchnorm, skip connections, etc., have the effect of reducing the magnitude of the weights. These layers may act as a mechanism to reduce the condition number of the overall network, and in doing so, render the overall network more stable and less susceptible to perturbations.
## III Testing the Estimator with Adversarial Perturbations
The estimator presented in Equation (8) provides an upper bound on the susceptibility of a network to a perturbation in input. A natural question is: how tight is this bound to perturbations in the network? If the estimator is not tight, it will overestimate the impact of perturbations. To explore the tightness of the estimator presented previously, we seek to minimize the quantity \(\|\delta_{x}\|\); in particular, we seek the smallest perturbation \(\|\delta_{x}\|\) that in turn maximizes the quantity \(\|\delta_{y}\|\). To
Fig. 1: Example calculation of estimator in Equation (8) for a 3-layer neural network. The first layer contributes to only one term, while the second layer contributes to two terms, and the last layer is a product of all the previous terms of the estimator.
efficiently generate such perturbations, we leverage the existing body of work on adversarial perturbations. We next discuss the techniques we use to generate these perturbations in Subsection III-A, and then present the results of these perturbations in Subsection III-B.
### _Techniques to Generate Adversarial Perturbations_
Adversarial attacks are methods designed to alter the solution or classifier output of a learning system. For CNNs, an adversarial perturbation, \(\Delta(\mathbf{x};\hat{k})\), refers to a small perturbation \(\mathbf{r}\) added to an input image \(\mathbf{x}\) such that the network classifier \(\hat{k}(\mathbf{x})\) changes, leading to misprediction. This is formally presented in Equation (9) as,
\[\Delta(\mathbf{x};\hat{k})=\underset{\mathbf{r}}{\text{min}}\,\|\mathbf{r}\|\,\text{ subject to }\hat{k}(\mathbf{x}+\mathbf{r})\neq\hat{k}(\mathbf{x}). \tag{9}\]
Existing adversarial attacks are used as they represent the best-known methods to reliably produce misclassification from small perturbations in input. Existing techniques also have the advantage of prior peer-review, making the results more accepted and broadly accessible to the community. Based on these criteria, this work focused on the common adversarial attack, DeepFool, which we discuss below.
[23] proposed the untargeted attack technique known as DeepFool. This method is based on an iterative linearization of the classifier to generate minimal perturbations sufficient to change the classification label. Initially, it is assumed that the neural networks are completely linear, with classes separated by hyperplanes. Since neural networks are non-linear, the linearization process is iterated until the classification index changes. In this work, the iteration was observed to converge in less than four iterations for most images. The process of generating the perturbations is computationally inexpensive, and this is therefore an effective technique to generate small adversarial perturbations.
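For intuition, the following minimal PyTorch sketch of DeepFool's iterative linearization follows the description in [23]; the overshoot constant, iteration cap, and the exhaustive loop over all classes (the original restricts itself to the top-ranked classes for efficiency) are simplifications, not the exact implementation used in our experiments.

```python
import torch

def deepfool(model, x, num_classes, max_iter=50, overshoot=0.02):
    """Return a perturbation r such that model(x + r) changes label (sketch)."""
    x0 = x.clone().detach()                         # shape (1, C, H, W)
    label = model(x0).argmax(dim=1).item()
    r_total = torch.zeros_like(x0)

    for _ in range(max_iter):
        x_adv = (x0 + (1 + overshoot) * r_total).detach().requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != label:
            break                                   # classification has flipped

        grad_orig = torch.autograd.grad(logits[label], x_adv, retain_graph=True)[0]
        best = (float("inf"), None, None)           # (distance, w_j, f_j)
        for j in range(num_classes):
            if j == label:
                continue
            grad_j = torch.autograd.grad(logits[j], x_adv, retain_graph=True)[0]
            w_j = grad_j - grad_orig                # normal of linearized boundary
            f_j = (logits[j] - logits[label]).item()
            dist = abs(f_j) / (w_j.norm() + 1e-12)
            if dist < best[0]:
                best = (dist, w_j, f_j)

        # Minimal step onto the closest linearized decision boundary.
        _, w, f = best
        r_total = r_total + (abs(f) / (w.norm() ** 2 + 1e-12)) * w

    return (1 + overshoot) * r_total
```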
[24] presents an extension of DeepFool which generates a small, image-agnostic perturbation vector which causes misclassification on a large set of images across a wide variety of classifiers. This means that an image-specific perturbation vector need not be generated, and a universal perturbation when added to different input images can cause misclassification with probability of about 70% across different networks. This is particularly interesting from the perspective of this paper, because our method directly provides a bound on the magnitude of the change in the output of the network caused by the universal perturbation. In this way, we expect our estimator to apply to a wide range of classifiers impacted by the universal perturbation generated by DeepFool.
### _Generated DeepFool Perturbations_
This analysis used pre-trained models of two standard CNNs designed for object recognition tasks on the ILSVRC2012 ImageNet dataset [25]: AlexNet and VGG-19. These networks and the associated weights were taken from the BVLC Caffe2 models publicly available via ONNX [26]. All the analysis was performed on an AMD Radeon Pro Vega Frontier Edition.
Figure 2 shows the magnitudes of DeepFool perturbations generated for AlexNet and VGG-19, measured relative to the original image as the norm of the perturbation, \(\|\delta_{x}\|\), over the norm of the input image, \(\|x\|\). Since this quantity is much less than 1, the results are plotted as the reciprocal on a logarithmic scale, or \(\log(\frac{\|x\|}{\|\delta_{x}\|})\). The x-axis spans different images, and the y-axis details the resulting perturbation size. For this analysis, 16,500 test images were used, sampled from ImageNet by taking 20 randomly chosen images from each of 825 randomly chosen classes out of 1,000. The ordering on these plots is such that the 20 images from one class appear adjacent to each other. Notice that the data points corresponding to the generated perturbations for images of a given class are closely located, showing that correlations exist in the data. This is not unexpected, as images from a common class share characteristics that also impact \(\|x\|\), such as common pixel color. In turn, this implies that it is easier to generate minimal adversarial perturbations for some classes than others, and that images of the same class have similar sensitivities to perturbations.
The smallest perturbations relative to the image are of the order of \(10^{-8}\) for AlexNet and \(10^{-7}\) for VGG-19. On average, the perturbations are of the order of \(10^{-3}\) for the two networks. Note that all these generated perturbations, when added to the original image, lead to misclassification by the network. Anecdotally, a few of the resulting perturbed images were visually inspected by humans, and did not have any noticeable artifacts or distortions.
## IV Results
This section compares the condition number (calculated with Equation (6)) to the generated small perturbation vectors from the previous section. Note that the quantities \(\|\delta_{y}\|\,/\,\|\delta_{x}\|\) and \(\|\delta_{y}\|\) are computed from the element-wise difference of the pre-final output layer (i.e., the layer before the softmax layer) between the original and perturbed scenarios. We then compare the estimator to empirical data generated via adversarial attacks and random noise.
Fig. 2: Relative magnitudes of generated DeepFool perturbations that resulted in misclassification. These images indicate that the generated DeepFool perturbations \(\delta_{x}\) are indeed significantly smaller than the magnitude of the original image, \(x\). Notice that the perturbation sizes range several orders of magnitude.
### _DeepFool perturbations_
The quantity \(\kappa\) presented in Equation (6) was computed for AlexNet and VGG-19 using the generated DeepFool perturbations applied to each of the 16,500 images discussed in Section III-B. The \(\kappa\) values are shown in Figure 3. The maximum \(\kappa\) value created by applying the DeepFool perturbations to inputs to AlexNet is about 651, the average image has a maximum \(\kappa\) value of about 100, and all images have a maximum \(\kappa\) value of at least 12. For VGG-19, the maximum \(\kappa\) value is about 825, the average image has a maximum \(\kappa\) value of about 117, and all images have a maximum \(\kappa\) value of at least 16.
Table I shows the precision requirements for input representations of AlexNet and VGG-19 based on the mean, maximum, and minimum values of \(\kappa\). The maximum value of \(\kappa\) gives the maximum amount by which a small perturbation to the input of the neural network is amplified by the network.
In practice, the error in the output of the network can be expected to be \(\epsilon\tilde{\kappa}\), where \(\epsilon\) is machine epsilon and \(\tilde{\kappa}\) is defined in Equation (6). Thus, \(\tilde{\kappa}\) gives an estimate of the minimum precision that should be used with a particular neural network. The minimum digits required can be calculated as \(log_{10}(\kappa)\). The minimum number of bits is calculated as, \(\lceil log_{2}(\kappa)\rceil+1\).
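The following short sketch reproduces the AlexNet column of Table I from these two formulas (the \(\kappa\) values are taken from the table itself):

```python
import math

def precision_requirements(kappa):
    digits = math.log10(kappa)              # decimal digits that may be lost
    bits = math.ceil(math.log2(kappa)) + 1  # minimum bits for input representation
    return digits, bits

for label, k in [("mean", 101.78), ("maximum", 651.06), ("minimum", 12.00)]:
    digits, bits = precision_requirements(k)
    print(f"AlexNet {label} kappa={k}: digits={digits:.2f}, bits={bits}")
# Prints 2.01/8, 2.81/11, and 1.08/5, matching Table I.
```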
For the worst-case generated perturbations, ten bits of precision could be lost for both AlexNet and VGG-19 and hence, 11 bits are required for input representations at a minimum. For the mean scenario, eight bits of precision are required for both networks, whereas the minimum \(\kappa\) says that AlexNet requires five bits of precision while VGG-19 requires six bits. This shows that moving to _"INT8"_ (eight-bit integer precision), a commonly supported precision in most processors, for representing inputs to these networks may make them more likely to misclassify inputs.
### _Random perturbations_
The DeepFool perturbations are carefully constructed to cause misclassification by the network. It is expected that more indiscriminate sources of perturbations, such as rounding error or noisy data, will have a lesser impact on the output of neural networks. To test this, 16,500 perturbations of the same magnitude as the DeepFool perturbations, but with random directions, were generated. These perturbations were added to the same set of images used in Section IV-A, and again the maximum value of \(\kappa\) was estimated for each image. In this case, the \(\kappa\) values observed for these random perturbations are much smaller than from adversarial perturbations. For example, the mean \(\kappa\) is reduced from approximately 101 to less than 2 in AlexNet, and from approximately 116 to 3 in VGG-19. This suggests that for applications where the consequences of infrequent misclassification are not severe, lower precision than \(11\) bits may suffice. However, networks more aggressively quantized in this manner are likely more susceptible to adversarial attacks [22].
## V Conclusions and Future Work
In this paper, an estimator was derived which can predict the sensitivity of a neural network to perturbations. These estimators can help in estimating the minimum precision requirements for input representations of various neural networks. We show the results for two widely studied CNN architectures, AlexNet and VGG-19, across both adversarial attacks and random perturbations. The estimator can be used to guide decisions of precision support required for hardware when designing deep learning accelerators, thus enabling energy-efficient edge computing where power consumption is a major bottleneck. At the software level, the estimator can be used to estimate the maximum evaluation error required for certification, validation, or quantification of uncertainty. At the algorithmic level, the estimator enables the design of efficient DNN architectures which can be resistant to noise or adversarial attacks.
Future work should consider introducing perturbations at each layer to mimic perturbing the weights. Further investigations should also be performed to extend the estimator to the precision requirements for activations, weights, and biases for each layer. Finally, the estimator should be extended to a wide variety of neural networks such as Recurrent Neural
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Network** & **AlexNet** & **VGG-19** \\ \hline
**Mean \(\kappa\)** & 101.78 & 116.84 \\
**Minimum Digits** & 2.01 & 2.07 \\
**Minimum Bits** & 8 & 8 \\ \hline
**Maximum \(\kappa\)** & 651.06 & 824.53 \\
**Minimum Digits** & 2.81 & 2.92 \\
**Minimum Bits** & 11 & 11 \\ \hline
**Minimum \(\kappa\)** & 12.00 & 16.37 \\
**Minimum Digits** & 1.08 & 1.21 \\
**Minimum Bits** & 5 & 6 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Precision requirements for input representations based on DeepFool perturbations.
Fig. 3: \(\kappa\) estimator values. The X-axis is the image number and the Y-axis shows the maximum value of \(\kappa\) created using the adversarial perturbations discussed in Section III-B for each image. All 16,500 data points are rendered as blue circles. The red triangle indicates the perturbation that resulted in the smallest change in output to the network. The yellow square indicates the perturbation that caused the largest change in the network output. These plots are similar, indicating that the networks have similar susceptibility to perturbations in input. For either network, the range of results spans many orders of magnitude, indicating a wide range of possible impact from a perturbation.
Networks (RNNs), Reinforcement Learning (RL), Generative Adversarial Networks (GANs), etc. to consider and characterize the stability of these architectures.
_Acknowledgments:_ AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
|
2310.11967 | Take the aTrain. Introducing an Interface for the Accessible
Transcription of Interviews | aTrain is an open-source and offline tool for transcribing audio data in
multiple languages with CPU and NVIDIA GPU support. It is specifically designed
for researchers using qualitative data generated from various forms of speech
interactions with research participants. aTrain requires no programming skills,
runs on most computers, does not require an internet connection, and was
verified not to upload data to any server. aTrain combines OpenAI's Whisper
model with speaker recognition to provide output that integrates with the
popular qualitative data analysis software tools MAXQDA and ATLAS.ti. It has an
easy-to-use graphical interface and is provided as a Windows-App through the
Microsoft Store allowing for simple installation by researchers. The source
code is freely available on GitHub. Having developed aTrain with a focus on
speed on local computers, we show that the transcription time on current mobile
CPUs is around 2 to 3 times the duration of the audio file using the
highest-accuracy transcription models. If an entry-level graphics card is
available, the transcription speed increases to 20% of the audio duration. | Armin Haberl, Jürgen Fleiß, Dominik Kowald, Stefan Thalmann | 2023-10-18T13:45:47Z | http://arxiv.org/abs/2310.11967v1 | # Take the aTrain. Introducing an Interface for the
###### Abstract
aTrain is an open-source and offline tool with an easy-to-use graphical interface for transcribing audio data in multiple languages with CPU and NVIDIA GPUs. It is designed for researchers using qualitative data generated from various forms of speech interactions. aTrain requires no programming skills, runs on most computers, does not require an internet connection, and was verified not to upload data to any server. aTrain combines OpenAI's Whisper model with speaker recognition to provide output that integrates with MAXQDA and ATLAS.ti. It is provided through the Microsoft Store for simple installation. The source code is freely available on GitHub. Having developed aTrain with a focus on speed on local computers, we show that the transcription time on current mobile CPUs is around 2 to 3 times the duration of the audio file using highest-accuracy transcription models. With an entry-level graphics card, transcription speed increases to 20% of the audio duration.
keywords: transcription, local, Whisper, AI, machine learning, qualitative research, interview transcription, qualitative data analysis
_JEL:_ C65, C88, Z19
## 1 Introduction
"Transcribing interviews is time-consuming and potentially costly work. It can be facilitated by using a transcribing machine that has a foot pedal and earphones".
(Seidman, 2013, p. 118)
In the ten years after the foot pedal was hailed as the state of the art for effectively transcribing interviews, the tools available have developed rapidly with the rise of artificial intelligence (AI). While the transcription of an interview of one hour requires up to six hours of manual work (Bell et al., 2018), advances in AI-based tools have sped up this process, significantly reducing the necessary transcription work to a fraction compared to manual transcription. An example of such an AI-based transcription model is Whisper, developed by OpenAI in 2023 (Radford et al., 2023). Whisper uses a transformer model (Vaswani et al., 2017) to facilitate fast automated transcriptions with accuracy and robustness comparable to human transcribers (Radford et al., 2023). However, speed and accuracy of the transcription are not the only criteria relevant to qualitative researchers. When comparing the available automated transcription tools, Wollin-Giering et al. (2023) recently outlined several criteria for the comparison of automatic transcription tools: data protection, accuracy, time spent, and costs.
Although open-source models such as Whisper perform generally well in all these criteria (Wollin-Giering et al., 2023), they still lack ease of use and rely on command-line interfaces. This can be a barrier for researchers without programming knowledge, especially when installed locally and operated on their own devices. They also lack output formats that integrate into common QDA software such as MAXQDA or ATLAS.ti. In addition to open-source solutions such as Whisper, there are several paid subscription tools from commercial providers such as Trint, Descript, or Sonix (see e.g., Liyanagunawardena, 2019; Wollin-Giering et al., 2023) that provide sufficient ease of use for researchers. The problem with some of these existing tools is their cloud-native nature, which requires users to upload their interviews to external servers for transcription. Especially in the context of the EU's data protection legislation (GDPR), this practice of uploading potentially sensitive
personal information to cloud services requires explicit consent from interviewees (European Parliament and Council of the European Union, 2016), which they may not be willing to give. Privacy-preserving technologies are also demanded by ethics committees approving qualitative studies.
To help researchers use high-quality open source tools for local interview transcription, we developed a tool providing accessible transcription of interviews: aTrain.
We developed aTrain as a free, open-source, encapsulated, self-installing, and completely offline alternative to existing solutions. The following article presents the main features and details of the technical implementation and benchmarks of the transcription time on different computer configurations. We alleviate three main issues in using free open-source automated transcription on local computers: long runtime, difficulty in setup and use, and integration in the qualitative data analysis software workflow.
To reduce runtime issues and improve on other open source solutions, we make use of the faster-whisper framework, which reduces runtime on CPUs by a factor of 4 to 5 (Klein, 2023), thus making local transcription on typical business notebooks feasible. In our benchmarks, we found that mid-range business notebooks, as are common in university settings, allow one to transcribe
Figure 1: aTrain user interface
one hour of audio with the highest-accuracy Whisper models in only about two and a half to four hours (depending on the hardware). Additionally, aTrain also supports CUDA-enabled NVIDIA graphics cards, drastically reducing the time required to transcribe interviews.
Regarding ease of use, we offer a Microsoft Store deployment for Windows users and a simple graphical user interface (Figure 1), thus eliminating the barriers of command-line setup and use of Whisper. Furthermore, we also offer an export format that allows integration with MAXQDA and ATLAS.ti, so that researchers can continue working directly in established software tools after transcription with aTrain (see Figure 2.5).1
Footnote 1: To the best of our knowledge, one other open source tool has been developed to alleviate difficulty in setup and use: noScribe. However, the issues of long runtime on CPUs and lack of GPU support have been pointed out as a trade-off that must be made when using this solution (Wollin-Giering et al., 2023), and that we hope to alleviate.
Finally, addressing integration in QDA software workflow, we provide an output format for syncing audio with transcript passages in the widely used software tools MAXQDA and ATLAS.ti.
## 2 Main Features
This section provides an overview of the features that the current version of aTrain offers its users. For an in-depth description of the technical implementation of these features, refer to later chapters.
### Local installation
To simplify the installation process of aTrain as much as possible, we developed a MSIX package of the application and provide it through the Microsoft Store (see e.g., Microsoft, 2021). The installation does not require administrator rights on the local machine and should even work on systems that are centrally administrated, for example, by university IT departments. aTrain is available through the following link to the Microsoft Store:
apps.microsoft.com/store/detail/atrain/9N15Q44SZNS2.
The source code can be found in the GitHub repository under
github.com/BANDAS-Center/aTrain.
### Automated transcription
The central feature of aTrain is its ability to automatically transcribe audio and video files (e.g., mp3, mp4, wav and most others) containing speech.2 After selecting the input file from the local hard drive, aTrain will use its internal ML models to transcribe the contained speech and output the text, including time stamps. There is also an option to specify the ML models which should be used for transcription, which vary in accuracy and speed of transcription.
Footnote 2: For a full list of supported formats, see ffmpeg.org/ffmpeg-formats.html
It should be noted that in qualitative research, several distinct forms of transcription are differentiated, with varying levels of detail and the inclusion or exclusion of phonetic components. A systematic discussion of the different systems and how AI-based transcription relates to them is presented by Wollin-Giering et al. (2023), who conclude that currently all automated systems omit phonetic information. This excludes certain types of qualitative analysis methods that focus on or include phonetic information. Instead, they provide a reproduction of the text approaching a verbatim transcription of what was said, but omitting information like pauses or filler words. However, while the system on which we built our tool, Whisper, was found to be the most accurate and detailed (Wollin-Giering et al., 2023), it is essential that the researcher reviews the transcript and compares it with the original recording. We highly recommend that researchers using aTrain consult Wollin-Giering et al. (2023) for a systematic review and discussion of current limitations, capabilities, and advantages of automated transcription tools.
### Speaker detection
aTrain additionally offers speaker detection based on Pyannote.Audio (see Bredin et al., 2020), assigning text passages to the corresponding speakers. Users can either set the number of speakers in a recording, or let aTrain automatically detect the number of speakers. We recommend specifying the speaker count, as we found that it improves the accuracy of the clustering algorithm used for speaker detection.
### Multi-language Support
The inputted recordings can contain speech in any of the 57 languages available in the underlying ML models used by aTrain (Radford et al., 2023).
While aTrain can generally transcribe speech in any of these languages, the quality of the transcriptions is generally better for the languages that were predominant in the underlying training data sets (Radford et al., 2023). Additionally, only a single language can currently be processed for the complete interview.3
Footnote 3: One workaround would be to split your audio file into segments based on language manually, using an audio or video editing tool. This would only be feasible, of course, for larger segments with the same language.
In addition to transcription, translation of audio material into English is also possible by setting the language option to English for non-English source material. Translation to other languages is not supported.
### Output Formats
The transcribed text will be exported from aTrain as a text file that contains timestamps for each text segment and optionally information about the corresponding speakers. The following files are provided:
* A plain text file containing the transcripts including timestamps and, if selected, speaker information.
* A plain text file containing the transcripts without timestamps and, if selected, speaker information.
* A version of the plain text file formatted for QDA software import.
* A json file containing the complete raw-transcript information, allowing users to construct their own output formats.
To ensure easy integration into existing research workflows, we also provide an output format that was designed for seamless import into the commonly used QDA software tools ATLAS.ti and MAXQDA. This makes it possible to sync the audio or video file to the corresponding points in the transcript, so that clicking a text passage in the transcript plays the matching portion of the recording.
### Offline Execution and Data Privacy
aTrain is designed to run completely offline without the need for an internet connection. Thus, both the transcription model and the data set operate
in memory and do not leave the local machine. The same is true for the transcribed text. This means that no data are sent to any server, which ensures data privacy and supports GDPR compliance.
The version of aTrain referenced in this article and its components have been reviewed by one of the authors with a programming background to verify that they do not share data online. To ensure that updates to the components used do not introduce the transfer of GDPR-relevant data, we include all components in the provided installer. This results in a large installer size of around 12.8 GB.
## 3 Technology and Programming
### Hard- and Software Requirements
We tested aTrain on various computer configurations to ensure that it runs on different hardware as well as on more powerful systems with a CUDA-enabled Nvidia GPU.4 While we expect aTrain to also work on similar and less powerful hardware, we do not have data to definitively support this claim.
Figure 2: Example of a transcript with synced audio in MAXQDA 2022.
aTrain can run on a CPU or on a CUDA-enabled NVIDIA GPU, the latter requiring the installation of the CUDA toolkit (NVIDIA, 2023). While a CUDA-enabled NVIDIA GPU is not required to run aTrain, it significantly improves the speed of transcriptions and speaker detection, as reported in Section 4. Currently, aTrain is only available for computers running Windows 11. There are no additional software dependencies or requirements.
### Software Packages and Architecture
The general architecture of the aTrain desktop application is based on web technologies using the Python programming language and the flask web framework (Pallets Projects, 2021). The user interface was built with HTML, CSS and JavaScript utilizing libraries such as tailwindCSS (Tailwind Labs, 2023), daisyUI (Saadeghi, 2023), alpine (Porzio, 2023) and htmx (Big Sky Software, 2023).
The main pipeline used for transcription and speaker detection includes several ML models and data processing steps. An important goal in composing this pipeline was to reduce the time needed for transcriptions and speaker detection and to limit the number of ML models included in the final MSIX installer. The final pipeline is depicted in Figure 3.
The pipeline is initiated once the user provides a speech recording to the system, which is first converted to the WAV format using a Python binding for the ffmpeg library (Kroenig, 2019). Subsequently, the resulting WAV file is used as input for the Whisper transcription model (Radford et al., 2023) specified by the user. aTrain uses the faster-whisper implementation of Whisper, which is optimized for speed and memory efficiency and reduces the transcription time on CPUs by a factor of 4 to 5 (Klein, 2023). The output of the Whisper model contains the transcribed text segments of the speech recording, including time stamps. To enable additional speaker detection, aTrain utilizes a modified version of the Pyannote.Audio speaker detection
Figure 3: Transcription pipeline with corresponding inputs, tools and outputs
pipeline as described in Bredin et al. (2020). An alignment function developed by Bain et al. (2023) is ultimately used to combine the outputs of the transcription and speaker detection models into the final output.
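To illustrate how these components fit together, the following minimal sketch uses the same open-source libraries (ffmpeg-python, faster-whisper, Pyannote.Audio); the file names, chosen model size, and diarization checkpoint are illustrative, and the sketch omits aTrain's alignment step and error handling:

```python
import ffmpeg                       # ffmpeg-python binding
from faster_whisper import WhisperModel

# 1) Convert the input recording to 16 kHz mono WAV.
ffmpeg.input("interview.mp3").output("interview.wav", ar=16000, ac=1).run()

# 2) Transcribe with faster-whisper; segments carry start/end timestamps.
model = WhisperModel("medium", device="cpu", compute_type="int8")
segments, info = model.transcribe("interview.wav")
for seg in segments:
    print(f"[{seg.start:7.2f}s - {seg.end:7.2f}s] {seg.text.strip()}")

# 3) Speaker detection (requires a Hugging Face access token for the model):
# from pyannote.audio import Pipeline
# diarization = Pipeline.from_pretrained("pyannote/speaker-diarization")("interview.wav")
# for turn, _, speaker in diarization.itertracks(yield_label=True):
#     print(turn.start, turn.end, speaker)
```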
### Source code, collaboration and license
We developed aTrain using Git as the version control system and share the source code through GitHub under github.com/BANDAS-Center/aTrain. With this, we also want to encourage collaboration with other developers. For future versions of aTrain, we would greatly appreciate the input of other developers and qualitative researchers to expand its features.
aTrain is published under an adaptation of the MIT license, where we ask users to cite this paper when using aTrain for academic or other publications.
## 4 Transcription Duration on Different Whisper Models
To enable a comparison of aTrain against existing implementations, we want to provide crucial performance metrics. While we did not develop any ML models ourselves in this project, we did use and combine them in a novel way. Important performance indicators for automatic transcription tools include factors such as transcription accuracy and processing time (see e.g., Wollin-Giering et al., 2023). The accuracy of aTrain's transcriptions and speaker detection depends entirely on the underlying models developed by Radford et al. (2023) and Bredin et al. (2020). Accuracy benchmarks and error rates for those models were already documented in the respective papers and are therefore not duplicated in this publication. In general, however, a recent comparison between various commercial and open source solutions found Whisper to be the most accurate tool available, for both a German and an English sample interview (Wollin-Giering et al., 2023).
A main performance indicator is the total processing time that results from aTrain's transcription and speaker detection pipeline. To test this processing time, we used aTrain to transcribe an audio book which was chosen from the freely available Librispeech corpus (Panayotov et al., 2015). To emulate the typical length of interviews in qualitative research, we opted to use a recording of Hans Christian Andersen's fairy tale 'The Snow Queen', which was narrated as an audiobook with a duration of 1 hour, 13 minutes, and 38 seconds (see LibriVox, 2006; Panayotov et al., 2015).
This audiobook was transcribed with all available Whisper models from the faster-whisper implementation (Klein, 2023) and with speaker detection
activated (Bredin et al., 2020). Additionally, we ran the transcriptions on three different computers to test the processing time on different CPUs and also on a CUDA-enabled NVIDIA GPU. The exact specifications of the hardware used during testing and benchmarking are listed in Table 1.
The processing time is recorded by aTrain using timestamps generated at the beginning and end of every transcription and displayed as the total duration after the transcription is complete.
Figure 4 shows the processing time of each transcription relative to the length of the speech recording. In this relative processing time (RPT), a transcription is considered 'real time' when the recording length and the processing time are equal. Accordingly, faster transcriptions lead to an RPT below 1 and slower transcriptions to an RPT above 1.
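As a worked example (the processing time used below is illustrative, not a measured value), the RPT of our 1 hour, 13 minute, and 38 second benchmark recording would be computed as:

```python
audio_seconds = 1 * 3600 + 13 * 60 + 38   # 4,418 s benchmark recording
processing_seconds = 2.5 * 3600           # hypothetical 2.5 h transcription run
rpt = processing_seconds / audio_seconds
print(f"RPT = {rpt:.2f}")                 # ~2.04, i.e., slower than real time
```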
In alignment with previous expectations, the RPT increases significantly with the size of the Whisper model used. Up to the small Whisper model, the RPT was consistently below 1 for every computer tested. Larger models resulted in RPTs ranging between 1 and 3.5 when running on CPU, while the test machine using a CUDA-enabled NVIDIA GPU (comparable to a 2023 entry-level GPU) could run transcriptions with an RPT around 0.2 even on the largest models. Using standard hardware without GPU (12th gen Intel, Ryzen 7) is nevertheless feasible when running on the medium
\begin{table}
\begin{tabular}{c c c c c}
**Machine** & **Year** & **CPU** & **RAM** & **GPU** \\ \hline Dell Latitude & 2023 & **Intel i5-1245U** & 16GB & CPU integrated \\ Lenovo Thinkpad & 2022 & **AMD Ryzen 7 PRO 6850U** & 32GB & CPU Integrated \\ Lenovo Legion & 2019 & Intel i7-8750H & 16GB & **Nvidia RTX 2070 MaxQ 8GB** \\ \hline \end{tabular}
\end{table}
Table 1: Hardware specifications of our experiments. Please note that the computing device used in the benchmarks is highlighted in bold.
Whisper model, which we found to offer a good balance between processing time and transcription quality. However, the use of large models without a GPU is also possible, especially on potent multicore CPUs, where we observed an RPT of around 2 for the large Whisper models. Note that buying an entry-level gaming notebook with a dedicated GPU to transcribe interviews at five times real-time speed or faster would cost around the same as a paid automated transcription single-user license, which starts at around 600\(\in\) (e.g., [https://app.trint.com/plans](https://app.trint.com/plans), accessed on 3 October 2023).
## 5 Conclusion
In this paper, we introduce the open-source tool aTrain, which enables the transcription (including speaker recognition) of audio recordings. The software components in aTrain were reviewed by one of the authors with a Computer Science background to ensure that it runs completely on the local computer on which it is installed and does not send any data to online servers, thus helping researchers meet data privacy requirements
Figure 4: Duration of transcription on different hardware
arising from ethical guidelines or comply with legal requirements such as the GDPR.
It comes with an easy-to-use graphical interface, runs on both CPUs and CUDA-enabled NVIDIA GPUs, and is available through the Microsoft Store on Windows computers. aTrain accepts most common audio and video formats as input and produces a transcript for analysis using the Whisper model introduced by OpenAI, providing best-in-class accuracy among current commercial and open-source automated audio transcription solutions. It also provides an output format for import into the QDA software solutions MAXQDA and ATLAS.ti, allowing researchers to play the audio recording corresponding to text passages of the transcript with a single click.
We hope that this tool enables researchers who use audio recordings of various forms of interviews, either as their main data source or as a supplement to more quantitative oriented studies like laboratory experiments, to save both time and monetary resources in transcribing their audio recordings.
|
2306.12550 | On Evaluation of Document Classification using RVL-CDIP | The RVL-CDIP benchmark is widely used for measuring performance on the task
of document classification. Despite its widespread use, we reveal several
undesirable characteristics of the RVL-CDIP benchmark. These include (1)
substantial amounts of label noise, which we estimate to be 8.1% (ranging
between 1.6% to 16.9% per document category); (2) presence of many ambiguous or
multi-label documents; (3) a large overlap between test and train splits, which
can inflate model performance metrics; and (4) presence of sensitive
personally-identifiable information like US Social Security numbers (SSNs). We
argue that there is a risk in using RVL-CDIP for benchmarking document
classifiers, as its limited scope, presence of errors (state-of-the-art models
now achieve accuracy error rates that are within our estimated label error
rate), and lack of diversity make it less than ideal for benchmarking. We
further advocate for the creation of a new document classification benchmark,
and provide recommendations for what characteristics such a resource should
include. | Stefan Larson, Gordon Lim, Kevin Leach | 2023-06-21T20:32:22Z | http://arxiv.org/abs/2306.12550v1 | # On Evaluation of Document Classification using RVL-CDIP
###### Abstract
The RVL-CDIP benchmark is widely used for measuring performance on the task of document classification. Despite its widespread use, we reveal several undesirable characteristics of the RVL-CDIP benchmark. These include (1) substantial amounts of label noise, which we estimate to be 8.1% (ranging between 1.6% to 16.9% per document category); (2) presence of many ambiguous or multi-label documents; (3) a large overlap between test and train splits, which can inflate model performance metrics; and (4) presence of sensitive personally-identifiable information like US Social Security numbers (SSNs). We argue that there is a risk in using RVL-CDIP for benchmarking document classifiers, as its limited scope, presence of errors (state-of-the-art models now achieve accuracy error rates that are within our estimated label error rate), and lack of diversity make it less than ideal for benchmarking. We further advocate for the creation of a new document classification benchmark, and provide recommendations for what characteristics such a resource should include.
## 1 Introduction
Within the document understanding research area, the RVL-CDIP dataset Harley et al. (2015) has emerged as the primary benchmark for evaluating and comparing document classifiers. RVL-CDIP is composed of 16 document type categories, including resume, letter, invoice, etc. Its large volume of training data--320,000 samples--facilitates benchmarking state-of-the-art deep learning and transformer-based architectures. While initially released as a computer vision benchmark in 2015, more recent state-of-the-art models now incorporate image, text, and page layout modalities. For instance, recent tri-modal models like DocFormer Appalaraju et al. (2021), ERNIE-Layout Peng et al. (2022), LayoutLMv3 Huang et al. (2022), and Bi-VLDoc Luo et al. (2022) now achieve classification accuracies ranging in the mid- to high-90s, with Bi-VLDoc reporting a state-of-the-art of 97.12% on the RVL-CDIP test set. This is a large improvement over earlier image-centric work, and we chart this improvement in Figure 1.
As model performance on RVL-CDIP improves, it becomes increasingly important to ensure that further gains are meaningful with respect to the classification task. This concern has been raised by prior work that has found that benchmark evaluation datasets often contain substantial amounts of label errors or noise (e.g., Northcutt et al., 2021), substantial overlap between test and train data (e.g., Elangovan et al., 2021; Sogaard et al., 2021), and data collection artifacts that cause models to overfit to spurious cues (e.g., Gururangan et al., 2018; McCoy et al., 2019). Therefore, we cast a critical eye to the RVL-CDIP benchmark to answer: _Is RVL-CDIP still suitable for effectively measuring the performance of document classifiers?_
In doing so, we first observe a lack of clear label or annotation guidelines provided with the original introduction of RVL-CDIP. Therefore, we create verifiable label guidelines for the 16 RVL-CDIP categories. With these guidelines, we are then able to conduct a review of the data, and we find that
Figure 1: Model accuracy on RVL-CDIP by year and modality. The horizontal dashed line represents our estimated label error rate for RVL-CDIP’s test set.
label errors account for an estimated 8.1% of data in RVL-CDIP's test split, a rate greater than the current state-of-the-art model accuracy error rate, indicating that contemporary high-performing models are overfitting to noise. We also observe relatively high rates of documents that have ambiguous or multiple valid labels, which is problematic given RVL-CDIP is a single-label classification benchmark. Additionally, we also observe a large overlap between test and train data splits, where there are (near-) duplicate documents seen in both train and test splits, as well as documents that share common templates. Lastly, our review of RVL-CDIP data uncovered a surprisingly large amount of sensitive personally-identifiable information, particularly in the resume category, where we found 7.7% of documents contained US Social Security numbers.
We argue that the characteristics that we observe make RVL-CDIP an unattractive benchmark for training and evaluating document classifiers. We end with recommendations for what qualities a new document classification benchmark should have.
## 2 Related Work
This section discusses related work in two areas: (1) prior work in document classification on RVL-CDIP, and (2) prior work on analyzing datasets.
### RVL-CDIP and Document Classification
The RVL-CDIP corpus has been used as a benchmark for document classification since its introduction by Harley et al. (2015), who used it to evaluate convolutional neural network (CNN) image classifiers on the dataset's document images. Most immediate follow-up work followed Harley et al. (2015) and explored different image-based CNN models, as done in Csurka et al. (2016); Afzal et al. (2017); Tensmeyer and Martinez (2017); Das et al. (2018); Ferrando et al. (2020). Just relying on image features is limited, as much of a document's "essence" is informed by its textual content. Therefore, more recent work has incorporated the textual modality, including Audebert et al. (2019) and Dauphinee et al. (2019).
Even more recent work has capitalized on the transformer model architecture, often combining vision transformers with large transformer-based language models (and often combining these with a third modality based on page layout derived from detected optical character recognition (OCR) regions) as in LayoutLMv1 Xu et al. (2020), LayoutLMv2 Xu et al. (2021), LayoutLMv3 Huang et al. (2022), DocFormer Appalaraju et al. (2021), TILT Powalski et al. (2021), and ERNIE-layout Peng et al. (2022). These more recent transformer-based models have achieved state-of-the-art accuracy scores on RVL-CDIP, the most recent being Luo et al. (2022)'s Bi-VLDoc, which achieves a reported accuracy of 97.12% on RVL-CDIP. (For a listing of models benchmarked on RVL-CDIP since Harley et al. (2015), see Table 1.)
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Model (Reported by)** & **Modality** & **Accuracy** \\ \hline Bi-VLDoc Luo et al. (2022) & I, T, L & 97.12 \\ ERNIE-Layout-Large Peng et al. (2022) & I, T, L & 96.27 \\ UDP-Dual Tang et al. (2022) & I, T, L & 96.22 \\ DocFormer-base Apulrajraj et al. (2014) & I, T, L & 96.17 \\ Structural ML-Merge Li et al. (2021) & T, L & 96.08 \\ UDOP Tang et al. (2022) & I, T, L & 96.00 \\ LayoutLMv3-Merge Huang et al. (2022) & I, T, L & 95.93 \\ LLIT-Mue Wang et al. (2022) & T, L & 95.68 \\ LayoutLMv4-Merge Xu et al. (2021) & I, T, L & 95.65 \\ BROS-base Wang et al. (2022) & T, L & 95.8 \\ TILT-large Powalski et al. (2021) & I, T, L & 95.52 \\ DocFormer-large Apulrajraj et al. (2021) & I, T, L & 95.50 \\ LayoutLMv4-Mase Huang et al. (2022) & I, T, L & 95.44 \\ Donnt Kim et al. (2021) & I, T & 95.30 \\ Wide-Reader Large Bai et al. (2022) & I, T, L & 95.26 \\ Punn et al. (2022) & T, L & 95.25 \\ TILT-base Porushida et al. (2021) & I, T, L & 95.25 \\ LayoutLMv4-M2-None Xu et al. (2021) & I, T, L & 95.25 \\ UDC-carfa Gu et al. (2021) & I, T, L & 95.05 \\ Wide-Reader-base Dai et al. (2021) & I, T, L & 94.91 \\ StructText-Y-Large Yu et al. (2021) & I, T, L & 94.62 \\ LayoutLMv1-base Xu et al. (2020) & I, T, L & 94.43 \\ LayoutLMv1-Merge Xu et al. (2020) & I, T, L & 94.42 \\ MATKLN (Décheli et al. 2022) & I, T, L & 94.20 \\ DocCLAssitter-xl Satisflhah et al. (2022) & I & 94.17 \\ DocCLAssitter-large Satisflhah et al. (2022) & I & 94.15 \\ DocCLAssitter-base Cai et al. (2022) & I & 94.00 \\ UDCG (et al.201) & I, T, L & 93.96 \\ Legerformer-base Phan et al. (2022) & T & 93.85 \\ Longformer-large Pham et al. (2022) & T & 93.73 \\ MGDox Wang et al. (2022) & I, T, L & 93.64 \\ Desmaur Dhri et al. (2022) & I & 93.60 \\ Bigbird-base Pham et al. (2022) & T & 93.48 \\ Pannarik et al. (2022) & I, T, L & 93.36 \\ BiBi-large Pham et al. (2022) & T & 93.34 \\ Multimodal Ensemble Quaphine et al. (2019) & I, T & 93.07 \\ StellDoc Li et al. (2021) & T, L & 92.81 \\ LadderNet Satisflhah and Nandi (2019) & I & 92.77 \\ Zangor et al. (2021) & I & 92.70 \\ DT-Large Li et al. (2022) & I & 92.69 \\ VLCDoC (Bakkali et al., 2022) & I & 92.64 \\ InceptionResNeV2 Xu et al. (2021) & I & 92.63 \\ EfficientWert (Fernando et al., 2020) & I & 92.31 \\ Region Ensemble Qua et al. (2019) & I & 92.21 \\ DT-Thase Li et al. (2022) & I & 92.11 \\ MAE-base Li et al. (2022) & I & 91.42 \\ Stacked CNN Single Des et al. (2018) & I & 91.11 \\ BERT-Base Li et al. (2022) & I & 91.09 \\ VGG-16 Afzal et al. (2017) & I & 90.97 \\ Cunkar et al. (2016) & I & 90.70 \\ ResNet-101 Li et al. (2022) & I & 90.65 \\ Aalbert et al. (2019) & I & 90.60 \\ ResNet-50 (Arlan et al., 2017) & I & 90.40 \\ RoBERTa-large Li et al. (2021a) & T & 90.11 \\ RoBERTa-base Li et al. (2021a) & T & 90.06 \\ BERT-large Li et al. (2021a) & T & 89.92 \\ BERT-base Li et al. (2021a) & T & 89.81 \\ Tensmeyer and Martinez (2017) & I & 89.31 \\ GoogLeNet (Arlan et al., 2017) & I & 89.02 \\ AlexNet (Arlan et al., 2017) & I & 88.60 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model accuracy on RVL-CDIP for various image (I), text (T), and layout-based (L) document classification models, ordered by reported score. Models incorporating multiple modalities typically outperform uni-modal models.
Despite these high scores, recent work has exposed gaps in models trained on RVL-CDIP. In particular, recent work has found that models trained on RVL-CDIP perform poorly on out-of-distribution data (Larson et al., 2022) and perturbed in-distribution data (Saifullah et al., 2022). We also mention that RVL-CDIP is often used as a pre-training dataset, where models are first pre-trained on RVL-CDIP (and perhaps others) and then evaluated on other downstream tasks or datasets (e.g., Nguyen et al. (2021); Kanchi et al. (2022)). Other datasets like FUNSD are subsets of RVL-CDIP (Jaume et al., 2019).
### Analysis of Datasets
Prior work has investigated the presence of label and annotation errors and corpus quality in NLP and image datasets. This work includes Abedjan et al. (2016); Radenovic et al. (2018); Muller and Markert (2019); Pleiss et al. (2020); Northcutt et al. (2021); Kreutzer et al. (2022); Ying and Thomas (2022); Chong et al. (2022). One common conclusion is that the utility of a benchmark evaluation dataset is lessened if the label error and/or ambiguity rate is close to or exceeds the model prediction error rate. This has been observed for various datasets, such as ATIS (Bechet and Raymond, 2018; Niu and Penn, 2019), and the CNN/Daily Mail reading comprehension task (Chen et al., 2016).
Orthogonal to label errors, prior work has also observed non-trivial overlap between test and train splits in datasets on which natural language processing and computer vision models are evaluated (e.g., Finegan-Dollak et al., 2018; Allamanis, 2019; Barz and Denzler, 2020; Lewis et al., 2021; Wen et al., 2022; Croft et al., 2023). Such work often argues that non-trivial amounts of overlap between test and train data can lead to "inflated" performance scores, as overlapping data can reward a model's ability to memorize training data (Elangovan et al., 2021), and to under-estimate out-of-sample error (Sogaard et al., 2021). Evidence of this can also be found in the multitude of studies that report lower model performance scores on newly-collected evaluation sets versus reported scores on benchmarks (e.g., Augenstein et al., 2017; Recht et al., 2019; Harrigian et al., 2020; Kim and Kang, 2022; Larson et al., 2022). In this paper, we investigate the presence of errors, ambiguous data, and overlapping test-train data for the RVL-CDIP benchmark dataset.
## 3 The RVL-CDIP Dataset
The RVL-CDIP dataset was introduced in Harley et al. (2015) as a benchmark for evaluating image-based classification and retrieval tasks.1 Since then, RVL-CDIP has primarily been used as a document type classification benchmark. RVL-CDIP consists of 400,000 document images distributed across 16 document type categories, listed in Table 2. Example documents from RVL-CDIP are shown in Figure 2. Documents in RVL-CDIP were sampled from the larger IIT-CDIP Test Collection, which itself is a snapshot of the voluminous Legacy Tobacco Documents Library (LTDL) collection -- at that time, LTDL contained approximately 7 million documents (Lewis et al., 2006).2 These documents were made publicly available as part of legal pro
Figure 2: Samples from the RVL-CDIP dataset.
\begin{table}
\begin{tabular}{l c} \hline \hline advertisement & memo \\ budget & news\_article \\ email & presentation \\ file\_folder & questionnaire \\ form & resume \\ handwritten & scientific\_publication \\ invoice & scientific\_report \\ letter & specification \\ \hline \hline \end{tabular}
\end{table}
Table 2: RVL-CDIP document type categories.
ceedings and settlements against several American tobacco and cigarette companies and organizations, and as such, the documents in RVL-CDIP are almost exclusively related to the tobacco industry.3
Footnote 3: For more background on the history of the litigation and documents, see Glantz et al. (1996); Ciresi et al. (1999); Tasker et al. (2022).
Most document images in RVL-CDIP capture the initial page of a document; some common exceptions appear to be charts and tables (these are typically labeled as scientific_report) as well as presentation slides (labeled as presentation). Additionally, almost all of the documents (that contain readable text) are in English, although we did find small amounts of documents in other languages (including German, Dutch, French, Spanish, Portuguese, Italian, Japanese, Chinese, Arabic, and Hebrew) as part of our review. Examples of non-English RVL-CDIP samples are displayed in Figure 11 in the Appendix.
There are 320,000 training, 40,000 validation, and 40,000 test samples, but Harley et al. (2015) provides no information on how the data was partitioned into these splits, so we assume it was done randomly for each of the 16 document categories. Harley et al. (2015) report that the 16 categories were chosen, in part, because these categories had ample representation (i.e., at least 25,000 samples) in IIT-CDIP. Unfortunately, we are unaware of any published guidelines, criteria, rules, or documentation defining or describing each of the 16 RVL-CDIP categories, nor is it clear who or what provided the initial category labels in IIT-CDIP (nor in LTDL).4 Thus, we describe how we developed label guidelines for each RVL-CDIP document type category in Section 3.1 below.
Footnote 4: Schmidt et al. (2002) and Tasker et al. (2022) indicate that type labels may have been ascribed to the documents by human workers employed at UCSF’s LTDL.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Category** & **Description** \\ \hline advertisement & Advertisements from print-form media like newspapers and magazines. Also a small amount of scripts for television or radio advertisements. A small amount of “ad order instructions” and “ad insertion” documents. \\ \hline budget & Includes various budget documents such as expense, spending, sales, cash, and accounting reports and forecasts; budgets; quotes and estimates; and income and bank statements. Also includes receipt-like documents such as political campaign contribution requests and other receipts, as well as checks and check stubs. \\ \hline email & Scanned images of printed emails. \\ \hline file\_folder & Scanned images of folders and binders. Folder scans are often characterized by vertically oriented text (indicating a folder label). A moderate amount of file folders in RVL-CDIP contain handwritten text or notes. Some scanned folders may be indistinguishable from blank pages. \\ \hline form & Form documents with form-like elements (e.g., lines or spaces for user-provided data entry). The form-like elements can appear empty or filled. \\ \hline handwritten & Includes handwritten documents like handwritten letters and scientific notes. \\ \hline invoice & Includes invoices, bills, and account statements. \\ \hline letter & Letters, often with letterhead and commonly with “Dear...” salutations. The distinction between letters and memos is often unclear in RVL-CDIP. \\ \hline memo & Memoranda or inter-office correspondence documents, often with clear “TO”, “FROM”, “SUBJECT” headings. \\ \hline news\_article & Includes news articles in the form of clippings from newspapers and other print-form news media, as well as a small amount of news articles from the web. \\ \hline presentation & Includes scanned images of presentation and overhead slides, transcripts of speeches and statements. Also includes a large amount of press releases. \\ \hline questionnaire & Includes customer surveys and questionnaires, as well as survey and questionnaire prompts for surveyors. Also includes questionnaires appearing to be part of legal proceedings and investigations. In RVL-CDIP, many questionnaires have a substantial amount of form-like elements. \\ \hline resume & Includes resumes, curricula vitae (CVs), biographical sketches, executive biographies (e.g., those written in third-person), a small amount of business cards. \\ \hline scientific\_pub & Mainly papers and articles from scientific journals and book chapters, but also includes book title pages. Also includes news articles from science newsletters. News articles from science newsletters are very similar to the news_article category. \\ \hline scientific\_rep & Includes bioassay, pathology, and test reports; charts, graphs, and tables; research reports (including progress reports), research proposals, abstracts, paper drafts. Many reports and abstracts bear similarities to scientific publications. Many test result documents are similar to documents in the specification category. \\ \hline specification & Data sheets (including safety data sheets); product, material, and test specifications. Also includes specification change reports. \\ \hline \hline \end{tabular}
\end{table}
Table 3: RVL-CDIP categories alongside our descriptions and notes.
### Establishing Label Guidelines
The RVL-CDIP dataset does not have a published list of descriptions, rules, or guidelines describing each of the 16 document type categories. Below, we describe the extensive analysis from which we develop such guidelines.
We established our list of guidelines by first sampling 1,000 documents from each of the 16 categories in the training set (for a total of 16,000 documents). We then reviewed these samples category by category. This review process helped us identify commonalities within each category, and helped us discover that many of the categories seem to have distinct groups of sub-types within them. For instance, we found that the resume category is largely composed of (1) resumes and curricula vitae, (2) "Biographical Sketch" documents (i.e., those required for grant applications to the National Institutes of Health; an example is shown in Figure 2(a)), (3) executive biographies, and (4) scanned business cards. Such cases reveal opportunities for refining and diversifying the category definitions.
In another category, advertisement, we found samples mostly consisted of advertisements from print-form media like newspapers and magazines, as well as smaller amounts of scripts for television or radio advertisements. The advertisement category also included a small amount of document images identical to the one shown in Figure 2(b). We found that this "IMAGE NOT AVAILABLE" document appears mostly in the advertisement category, yet it is an example of a document that we do not include in our label guidelines for this category, as it is not at all faithful to the semantic nature of the advertisement category.
Our annotation guidelines are listed in Table 3, along with our notes and observations. It was occasionally necessary to review multiple document categories prior to establishing rules. This was the case with the budget and invoice categories, each of which included non-trivial amounts of scanned check images and contribution requests. (Examples of cases like these are displayed in Figures 21-23 in the Appendix.) For cases like these, we annotated these sub-types in the relevant categories in order to estimate their relative frequencies, and then amended our annotation guidelines accordingly; for instance, 8.8% of budget documents and 3.8% of invoice documents that we reviewed were check images, so our guidelines specify that the budget category includes check images, while invoice does not. Ultimately, our goal in establishing such guidelines is to provide repeatable, verifiable criteria that faithfully reflect the semantic nature of each category.
## 4 Label Errors and Ambiguities in RVL-CDIP
Armed with better knowledge of what constitutes each of the 16 RVL-CDIP categories, we analyze the contents of the RVL-CDIP test set to estimate the amount of label errors and ambiguities found in this set.
We manually checked for errors in the RVL-CDIP test set by sampling 1,000 documents from each of the 16 categories (for a total of 16,000 documents). We used our label guidelines established in Section 3.1 to help us determine the validity of each of these 16,000 samples. We tracked several types of errors and ambiguities: (1) documents found in a category that clearly are mis-labeled and instead belong in a different RVL-CDIP category -- we refer to this error type as _mis-labeled_; (2) documents that do not appear to have a single clear RVL-CDIP label -- we refer to this label type as _unknown_; (3) documents that have mixed or multiple features that belong to at least two RVL-CDIP categories -- we refer to this type as _mixed_. Examples of documents exhibiting these error types can be seen in Figure 4. We point out a particularly interesting _mixed_ case: the first two _mixed_ examples are nearly identical, but the original label is news_article in one case but letter in the second. More examples are shown in the Appendix in Figures 12-14.
Findings. Our estimated error rates in the RVL-CDIP test set are shown in Table 4. We estimate that error rates (i.e., combined rates for _mis-labeled_ and _unknown_) range between 1.6% (in the case
Figure 3: Example “Biographical Sketch” resume document (a) and “IMAGE NOT AVAILABLE” document found mostly in advertisement.
of resume) and 16.9% (in the case of letter). The average of each category's error rates is 8.1%, which is higher than the classification accuracy error rates reported by many state-of-the-art models listed in Table 1. In some cases, the majority of a category's errors were mis-labels of a particular type. For instance, about 59% of the erroneous letter documents we reviewed were actually memo documents. Similarly, 74% of the erroneous invoice documents were actually budget documents. Lastly, roughly 1.7% of RVL-CDIP's test set is data that have multiple valid labels.
## 5 Overlap Between Test and Train Splits
Our analysis also reveals a substantial degree of undesirable overlap between train and test samples within RVL-CDIP. To measure this overlap, we use an approach similar to Larson et al. (2019) and Elangovan et al. (2021), which, for each test sample in each document type category, finds the maximally similar sample in the same document
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Category** & **Mis-labeled** & **Unknown** & **Total error** & **Multi-label** \\ \hline advertisement & 1.9\% & 4.5\% & 6.4\% & 3.5\% \\ budget & 9.7\% & 4.1\% & 13.8\% & 1.5\% \\ email & 1.7\% & 8.3\% & 10.0\% & 0.4\% \\ form & 4.4\% & 6.4\% & 10.8\% & 0.5\% \\ file\_folder & 0.4\% & 3.1\% & 3.5\% & 1.9\% \\ handwritten & 2.5\% & 5.2\% & 7.7\% & 2.4\% \\ invoice & 9.7\% & 1.3\% & 11.0\% & 0.2\% \\ letter & 13.5\% & 3.4\% & 16.9\% & 0.5\% \\ memo & 2.0\% & 2.4\% & 4.4\% & 2.1\% \\ news\_article & 4.6\% & 2.5\% & 7.1\% & 0.4\% \\ presentation & 1.8\% & 4.9\% & 6.7\% & 1.0\% \\ questionnaire & 5.6\% & 7.3\% & 12.9\% & 6.9\% \\ resume & 0.2\% & 1.4\% & 1.6\% & 0.4\% \\ scientific\_pub. & 2.5\% & 1.8\% & 4.3\% & 0.0\% \\ scientific\_rep. & 4.6\% & 3.9\% & 8.5\% & 5.6\% \\ specification & 1.5\% & 1.9\% & 3.4\% & 0.4\% \\ \hline Average & 4.2\% & 3.9\% & 8.1\% & 1.7\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Estimated label error and multi-label rates in the RVL-CDIP test set.
Figure 4: Example errors and ambiguities. Top row: _unknown_, middle row: _mis-label_, bottom row: _mixed_.
type category's training split. We then average these maximum similarity scores together for each document category. That is, for each document category \(C\) in RVL-CDIP, we compute
\[\frac{1}{|test_{C}|}\sum_{b\in test_{C}}\max_{a\in train_{C}}sim(a,b)\]
where \(a\) and \(b\) are samples from category \(C\)'s train and test splits, respectively. We use CLIP (Radford et al., 2021) to extract a 512-dimensional feature embedding from each sample, and use cosine similarity for \(sim(\cdot,\cdot)\). We note that this vector-based similarity technique is common practice in image and information retrieval (e.g., Babenko et al. (2014)).
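The per-category overlap statistic described above can be computed directly from the extracted embeddings. The following is a minimal sketch (not our exact code), assuming the CLIP features have already been extracted and L2-normalized so that cosine similarity reduces to a dot product:

```python
import numpy as np

def category_overlap(train_embs: np.ndarray, test_embs: np.ndarray):
    """For one document category, compute the mean and median over test
    samples of the maximum cosine similarity to any training sample.

    Both arrays are assumed to hold L2-normalized embeddings
    (e.g., 512-d CLIP features), one row per document.
    """
    sims = test_embs @ train_embs.T        # |test| x |train| cosine similarities
    max_per_test = sims.max(axis=1)        # maximally similar train sample per test doc
    return max_per_test.mean(), np.median(max_per_test)
```

The returned mean and median correspond to the per-category values reported in Table 5.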
Findings.Average and median of the maximum similarity scores for test-train pairs are shown in Table 5 for each RVL-CDIP category. Overall, we see a high degree of similarity across test and train data: mean scores range between 0.893 (advertisement) and 0.976 (email), with an average of 0.950. Ten of the 16 document categories have average scores at- or above 0.95. The median score for each category is larger than the mean in all cases, indicating a long tail in the distribution of scores. Indeed, we see this in Figure 10 (in Appendix), which charts the distribution of similarity scores for all test data in RVL-CDIP. Figure 5 shows three examples of test-train pairs with similarity scores ranging between 0.937 and 0.981. Two of the three pairs in Figure 5 seem to be near-duplicates, where there appear to be minor differences in scanning or noise artifacts between each document. In the third (invoice) example, we see that the two samples are distinct, yet both share a large degree of similarity because both use the same document template (e.g., invoices from the same company that are structurally and visually similar but that contain different "data"). We show more example pairs in Figures 15-18 in the Appendix.
To help better understand the similarity scores, we conduct an experiment where we categorize each similarity pair into one of the following: _duplicate_, if the test-train pair represents the same document; _template_, if both documents in a pair use the same document template; and _different_, for all other pairs. We annotated a sample of 1,086 similarity pairs with maximum similarity scores ranging between 0.93 and 1.0. A visualization of the relationship between maximal similarity score and match type is shown in Figure 6, where we
Figure 5: Example test-train pairs with corresponding maximum cosine similarity scores. These three example pairs show instances of near-duplicates (left and center) and documents that have highly similar structure (right).
Figure 6: Sampled subset of maximal similarity scores for test-train pairs with scores between 0.93 and 1.0.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Category** & **mean** & **median** \\ \hline advertisement & 0.893 & 0.903 \\ budget & 0.963 & 0.968 \\ email & 0.976 & 0.982 \\ form & 0.948 & 0.956 \\ file\_folder & 0.967 & 0.974 \\ handwritten & 0.945 & 0.952 \\ invoice & 0.962 & 0.966 \\ letter & 0.953 & 0.960 \\ memo & 0.957 & 0.961 \\ news\_article & 0.919 & 0.936 \\ presentation & 0.929 & 0.945 \\ questionnaire & 0.961 & 0.968 \\ resume & 0.965 & 0.967 \\ scientific\_pub. & 0.936 & 0.955 \\ scientific\_rep. & 0.950 & 0.961 \\ specification & 0.972 & 0.978 \\ \hline Average & 0.950 & 0.958 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mean and median of maximum cosine similarity scores between train and test sets for each RVL-CDIP category.
observe that the likelihood of a pair being either a _duplicate_ or _template_ match increases with similarity score.
Considering that the overall median maximal similarity score is 0.958, we can estimate a lower bound for the rate of _duplicate_ and _template_ match pairs by scaling the proportion of test documents scoring above the median (by definition, 0.5) by the fraction of annotated pairs above that score that are _duplicate_ or _template_ matches (0.641). This gives us \(0.5\times 0.641\approx 0.32\), and therefore we estimate that at least 32% of samples from the RVL-CDIP test set have either a duplicate counterpart or a sample that shares a template layout in the training set. While there is generally no established acceptable number or percentage for test-train overlaps, prior work (e.g., Sogaard et al. (2021); Elangovan et al. (2021)) has argued that overlaps are undesirable, and that building generalizable, robust models entails evaluation against novel, unseen data points (e.g., Koh et al. (2021); Malinin et al. (2021); Larson et al. (2022)).
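To make the arithmetic behind this lower bound explicit, the following short sketch reproduces the calculation; the 0.641 figure comes from our annotation of the sampled pairs, and the 0.5 follows from the definition of the median:

```python
# Lower-bound estimate of duplicate/template overlap in the RVL-CDIP test set.
frac_above_median = 0.5        # by definition, half of the test docs score above the median (0.958)
frac_dup_or_template = 0.641   # share of annotated pairs above that score that are duplicate/template matches
lower_bound = frac_above_median * frac_dup_or_template
print(f"{lower_bound:.0%}")    # -> 32%
```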
## 6 Presence of Sensitive Information
While reviewing samples from RVL-CDIP, we noticed that the resume category had a non-trivial quantity of documents that contain sensitive and personally-identifiable entities. Naturally, resumes typically contain a person's name and basic contact information (e.g., phone numbers or email addresses). However, we found a plethora of sensitive entities like citizenship and marital statuses, places and dates of birth, names of children and spouses, and national ID numbers like US Social Security and Canadian National ID numbers.
Out of a sample of 1,000 documents from the resume test set, we found that 7.7% contained a US Social Security Number. While we recognize that US Social Security numbers were not considered sensitive several decades ago (when many of the resume documents in RVL-CDIP were created), their presence in so many documents in a publicly accessible dataset5 is still striking, especially considering the coexistence of this entity type with others like person names, dates and places of birth, etc. In particular, exposed Social Security numbers are frequently connected with fraud and identity theft crimes in the USA. Moreover, many of the entity types discussed in this section are considered highly sensitive under state and national laws.6 Additionally, we found that 43.6% of the test resumes contain birth dates, 19.9% contain places of birth, 11.4% contain marital (or spousal or parental) statuses, and 8.9% contain citizenship statuses. Example documents containing sensitive PII can be seen in Figure 7.
Footnote 5: On 9 Feb. 2023, RVL-CDIP tallied “1,765 downloads last month” on the Hugging Face Datasets platform.
Footnote 6: For example, the State of Michigan’s Social Security Number Privacy Act (2004).
Given the presence of sensitive PII in RVL-CDIP, it is reasonable to wonder if sensitive PII also appears in datasets derived from RVL-CDIP, like FUNSD (Jaume et al., 2019). Similarly, we also wonder if sensitive PII appears in datasets that were derived from the larger IIT-CDIP or UCSF Industry Documents Library corpora, such as Tobacco-800 (Zhu et al., 2007; Zhu and Doermann, 2007), Tobacco-3482 (Kumar et al., 2014), DocVQA (Mathew et al., 2021), OCR-IDL (Biten et al., 2022), and TABME (Mungmeeprued et al., 2022). We will investigate this in future work.
## 7 Discussion and Recommendations
Given our findings concerning labeling errors, test/train overlap, and presence of sensitive information in the RVL-CDIP document classification benchmark, we discuss several concrete recommendations to raise awareness among researchers engaged in benchmarking classifiers using this dataset:
_(0) Sub-Types in RVL-CDIP._ Our investigation into RVL-CDIP revealed that many of the RVL-CDIP categories are in fact composed of several
Figure 7: Example documents from RVL-CDIP showing sensitive personally identifiable information (PII; redacted by us).
sub-types. We encourage researchers and practitioners to be aware of this fact. For instance, curricula vitae, biographical sketches, executive biographies, and business cards are the four sub-types of the resume category. This finding has implications for modeling tasks where prior knowledge of the label set is assumed, like in zero-shot settings where each category may be specified to the model as a string, as done in Siddiqui et al. (2021). Additionally, unsupervised clustering analyses like Finegan-Dollak and Verma (2020) may exhibit low performance scores on RVL-CDIP due to many of the categories having distinct and disparate subtypes (e.g., radio scripts versus print advertisements in the advertisement category, or business cards versus biographical sketches in the resume category).
_(1) Errors._ Users of RVL-CDIP should be aware that there are many label errors and noisy samples with unknown labels in RVL-CDIP. Recall from Section 4 that an estimated 8.1% of test samples from RVL-CDIP contain label errors, with an additional 1.7% being ambiguous mixed or multi-label cases. This is problematic for benchmarking new models, since the estimated label error rate is now greater than state-of-the-art model accuracy error rates. Here, the implication is that high-capacity models like CNNs and transformers are now overfitting to noise. This is indeed the case for models like DiT (Li et al., 2022), which predict the "IMAGE NOT AVAILABLE" document to be an advertisement document due to its relative abundance in that category's training set.
_(2) Ambiguities._ Users of RVL-CDIP should be aware that there are many samples in RVL-CDIP that could have multiple valid document type labels. We estimate this number to be 1.7% of the RVL-CDIP test set. Like label errors, such mixed or multi-label cases make it challenging to evaluate a model effectively, as there are samples for which a model may make a wrong prediction according to the RVL-CDIP test label annotations, but in reality many of these wrong predictions could actually be reasonable.
_(3) Test-Train Overlap._ Practitioners and researchers should be aware that there is a high degree of overlap between the RVL-CDIP test set and the train set. Recall from Section 5 that almost a third of RVL-CDIP test samples have a near-duplicate in the training set for the same document type category, or a training sample that uses the same document template. This is undesirable, as testing models on data that is very similar to the training data can lead to "inflated" accuracy scores (Elangovan et al., 2021; Sogaard et al., 2021). Moreover, highly similar train and test splits do not facilitate the evaluation of a model's ability to generalize well to new in-domain data.
_(4) Sensitive Information._ There is an unsettling amount of sensitive information in the RVL-CDIP dataset, which naturally leads to information and data privacy concerns. We estimate that 7.7% of resume test samples contain Social Security numbers. While RVL-CDIP is already publicly available, researchers and practitioners should take care when disseminating samples or copies of RVL-CDIP. Moreover, we highlight that prior work (e.g., Carlini et al. (2021)) showed that it is possible to extract training data from machine learning models, making production deployments of models trained on RVL-CDIP an information privacy and security risk.
_Suggestions for a future dataset._ We suggest the development and adoption of a new benchmark for evaluating document classifiers. Several desirable qualities of such a benchmark include (1) minimal label errors; (2) multi-label annotations, to allow for modeling more natural occurrences of documents; (3) minimal test-train overlap; and (4) absence of sensitive information. Going beyond the points made in this paper, a new benchmark would do well to be (5) large-scale, consisting of 100+ or even 250+ document categories, to test a model's ability to handle breadth, and (6) multi-lingual, to benchmark language transfer approaches.
## 8 Conclusion
RVL-CDIP has been used as the _de facto_ benchmark for evaluating state-of-the-art document classification models, but this paper provides an in-depth analysis of the RVL-CDIP dataset and shows that there are several undesirable characteristics of this dataset. We first provide a set of label guidelines for each RVL-CDIP category, and we use this to help us quantify the presence of errors in RVL-CDIP, finding that the RVL-CDIP test set contains roughly 8.1% label errors. We then observe that roughly a third of the test data is highly similar to the training set. Lastly we observe an unsettling amount of personally sensitive information in RVL-CDIP. Given these findings, we offer suggestions for a new document classification benchmark.
### Limitations
The RVL-CDIP dataset has no official set of label guidelines, making error analyses challenging since we could not rely on pre-defined rules. For this reason we followed best practices to create annotation rules to help us in our error analysis. Detecting duplicates in RVL-CDIP is also challenging, as two documents may appear to be the same, but may have minor differences due to scanning artifacts or even different indexing labels (it appears that many of the documents have been scanned and included in IIT-CDIP more than once). Therefore we again have to rely on best judgement when labeling pairs as duplicates (or near-duplicates). Additionally, due to limitations in human resources, we were unable to exhaustively inspect all 400,000 RVL-CDIP samples for the presence of errors, ambiguities, sensitive information, etc., and thus had to rely on sampling the dataset in order to draw conclusions.
## Acknowledgements
We thank Nicole Cornehl Lima, Ramla Alakraa, Zongyi Liu, Junjie Shen, Temi Okotore for help with data review, as well as the University of Michigan's Undergraduate Research Opportunity Program (UROP) for their support of these student researchers as well as support for Gordon. We also thank the anonymous EACL reviewers for their feedback.
|
2304.12329 | Pre-trained Embeddings for Entity Resolution: An Experimental Analysis
[Experiment, Analysis & Benchmark] | Many recent works on Entity Resolution (ER) leverage Deep Learning techniques
involving language models to improve effectiveness. This is applied to both
main steps of ER, i.e., blocking and matching. Several pre-trained embeddings
have been tested, with the most popular ones being fastText and variants of the
BERT model. However, there is no detailed analysis of their pros and cons. To
cover this gap, we perform a thorough experimental analysis of 12 popular
language models over 17 established benchmark datasets. First, we assess their
vectorization overhead for converting all input entities into dense embeddings
vectors. Second, we investigate their blocking performance, performing a
detailed scalability analysis, and comparing them with the state-of-the-art
deep learning-based blocking method. Third, we conclude with their relative
performance for both supervised and unsupervised matching. Our experimental
results provide novel insights into the strengths and weaknesses of the main
language models, facilitating researchers and practitioners to select the most
suitable ones in practice. | Alexandros Zeakis, George Papadakis, Dimitrios Skoutas, Manolis Koubarakis | 2023-04-24T08:53:54Z | http://arxiv.org/abs/2304.12329v1 | # Pre-trained Embeddings for Entity Resolution:
###### Abstract.
Many recent works on Entity Resolution (ER) leverage Deep Learning techniques involving language models to improve effectiveness. This is applied to both main steps of ER, i.e., blocking and matching. Several pre-trained embeddings have been tested, with the most popular ones being fastText and variants of the BERT model. However, there is no detailed analysis of their pros and cons. To cover this gap, we perform a thorough experimental analysis of \(12\) popular language models over \(17\) established benchmark datasets. First, we assess their vectorization overhead for converting all input entities into dense embeddings vectors. Second, we investigate their blocking performance, performing a detailed scalability analysis, and comparing them with the state-of-the-art deep learning-based blocking method. Third, we conclude with their relative performance for both supervised and unsupervised matching. Our experimental results provide novel insights into the strengths and weaknesses of the main language models, facilitating researchers and practitioners to select the most suitable ones in practice.
Key words and phrases: The source code, data, and/or other artifacts have been made available at [https://github.com/alekZeakis/Embedings4ER](https://github.com/alekZeakis/Embedings4ER)
Over the years, numerous language models have been used in both ER steps. Few of them have been used for blocking: GloVe (Golovolovolov et al., 2017) and FastText (Wang et al., 2018; Wang et al., 2018). A much larger variety has been employed in matching: GloVe (Golovolovolov et al., 2017; Golovolovolov et al., 2017), FastText (Golovolovolov et al., 2017; Golovolovolov et al., 2017; Golovolov et al., 2017; Golovolov et al., 2018; Wang et al., 2018; Wang et al., 2018) as well as BERT (Wang et al., 2018) and its main variants, i.e., XLNet, DistilBERT, RoBERTa and AlBERT (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). However, there is no systematic analysis of their relative performance in these ER tasks. We cover this gap, answering the following research questions:
1. How large is their vectorization overhead?
2. What is their relative performance in blocking?
3. What is their relative performance in matching? How is it affected by fine-tuning in supervised matching?
4. Is it possible to build high performing end-to-end ER pipelines based exclusively on pre-trained language models without providing them with labelled instances?
To answer these questions, we perform a thorough experimental analysis, involving 12 established language models: Word2Vec (Wang et al., 2018; Wang et al., 2018), GloVe (Wang et al., 2018), FastText (Golovolovolovolov et al., 2017), BERT (Golovolovolovolov et al., 2017), AlBERT (Golovolovolov et al., 2017), RoBERTa (Golovolovolov et al., 2017), DistilBERT (Golovolov et al., 2017), XLNet (Wang et al., 2018) as well as S-MPNet (Wang et al., 2018), S-GTR-T5 (Wang et al., 2018), S-DistilRoBERTa and S-MiniLM (Wang et al., 2018). Some of them are applied to ER for the first time. We apply them to 10 established real-world and 7 synthetic datasets, making the following contributions:
* We organize them into a taxonomy that facilitates the understanding of their relative performance and we discuss their core aspects like their dimensionality and context awareness.
* We examine the vectorization cost per model.
* We assess their blocking effectiveness, efficiency and scalability.
* We compare their relative performance for both supervised and unsupervised matching.
* We demonstrate that high performance can be achieved by an end-to-end solution that leverages the best language model both for blocking and matching in many datasets with either long or short textual attributes.
## 2. Related Work
**Blocking.** The first approach to leverage embeddings vectors for blocking is DeepER (Golovolovolov et al., 2017), which uses GloVe for the vectorization of entities and hyperplane LSH for indexing and querying. AutoBlock (Wang et al., 2018) goes beyond DeepER by leveraging FastText embeddings in combination with cross-polytope LSH. Yet, it treats blocking as a classification task, requiring a large number of labelled instances to train its bidirectional-LSTM. DeepBlocker (Wang et al., 2018) is a generic framework for synthesizing deep learning-based blocking methods that support any language model. Most of the examined approaches, including the top-performing Auto-Encoder, leverage FastText embeddings, but support Word2Vec and GloVe too. It also considers two transformer-based models, which leverage Byte Pair Encoding, achieving slightly higher recall at the cost of much higher run-times. Given that efficiency is crucial for blocking, DeepBlocker with Auto-Encoder and FastText is the state of the art among these methods. **Matching.** Here, language models are typically used by supervised deep learning-based techniques. The first one is again DeepER (Golovolovolov et al., 2017), which leverages GloVe. Its generalization, DeepMatcher (Golovolovolov et al., 2017), is a framework for deep learning-based matching algorithms that supports the main pre-trained static models, i.e., GloVe and FastText, with the last one being the predefined option. Similarly, GraphER (Golovolov et al., 2017), Seq2SeqMatcher (Wang et al., 2018), CorDEL (Wang et al., 2018), MCAN (Wang et al., 2018), HierMatcher (HierMatcher, 2018) and HIF-KAT (Wang et al., 2018) constitute individual approaches that also leverage FastText embeddings. The reason is its character-level
Figure 1. An example illustrating the end-to-end unsupervised approach to Entity Resolution, based on language models.
operation, which addresses noise in the form of misspellings, while also supporting unseen terms.
More recent works focus on BERT-based models, due to their dynamic, context-aware nature. The first such work is EMTransformer (Beng et al., 2017), which considered BERT and its three main variants: XLNet, RoBERTa and DistilBERT. The same models are used by GNEM (Beng et al., 2017), which extends EMTransformer through a graph that captures the relations between all candidate pairs that are given as input to matching. GNEM also applies this idea to DeepMatcher, in combination with FastText embeddings. DITTO (Li et al., 2019) extends EMTransformer by combining the BERT-based language models with external, domain-specific information (e.g., POS tagging) and data augmentation, which provides more (synthetic) training instances.
JointBERT (Li et al., 2019) goes beyond the classic binary classification definition of matching, by also supporting multi-class classification. The problem of automatically tuning the configuration parameters of deep learning-based matching algorithms is examined in (Wang et al., 2019), considering all BERT-based models in Section 3.2.
**Gaps.** We observe that none of these works considers the main SentenceBERT models (cf. Section 3.3). Moreover, no work has examined the performance of BERT models (cf. Section 3.2) on blocking. To the best of our knowledge, no work investigates the relative ER performance of the main language models in a systematic way. Closest to our work is an in depth analysis of how BERT works in the context of matching (Wang et al., 2019), but it disregards all other models as well as the task of blocking. Surveys (Zhu et al., 2019), books (Zhu et al., 2019) and tutorials (Zhu et al., 2019) about language models are too generic, without any emphasis on ER, and thus, are orthogonal to our work. Our goal in this work is to cover these important gaps in the literature.
**Sentence-similarity tasks in NLP.** Semantic Textual Similarity (STS) is an important NLP task, that is mostly evaluated in Transformer Models with the STS-B task (Beng et al., 2017) via the GLUE Benchmark (Wang et al., 2019). Since in schema-agnostic ER all attributes in a record are concatenated into a "sentence", these tasks seem related. However, the characteristics of the input data differ. For instance, popular STS benchmarks contain sentences like image captions or news headlines, whereas typical ER benchmarks like the datasets considered here contain attributes such as person or product names, movie titles, addresses, etc. Concatenating such attributes does not form actual sentences, even if they are treated as such. Moreover, the task in STS is to predict a similarity score that indicates how similar the meaning of two sentences is, whereas in ER it is to decide whether two entity instances refer to the same real-world entity or not. Hence, it is not safe or straightforward to assume that language models will exhibit the same performance in ER as in STS.
## 3. Language Models Used in the Evaluation
We have used three categories of language models in our evaluation: (1) _Static models_, which associate every token with a fixed embedding vector; (2) _BERT-based models_, which vectorize every token based on its context; (3) _Sentence-BERT models_, which associate every sequence of tokens with a context-aware embedding vector.
For each category, we selected a representative set of language models based on the following criteria: (i) popularity in the ER and NLP literature, (ii) support for the English language, and (iii) availability of open-source implementation. Ideally, this implementation should include documentation that facilitates the use of each model, and all language models should be implemented in the same language so as to facilitate run-time comparisons. Thus, our experimental analysis relies on out-of-the-box, open-source implementations of the main language models that can be easily used by any practitioner who is not necessarily an expert in the field.
All static pre-trained and BERT-based models that are mentioned in Section 2 satisfy these criteria and, thus, are included in our analysis. Given that none of these ER works considers an established SentenceBERT model, we selected the top four ones from the SBERT library ([https://www.sbert.net/docs/pretrained_models.html](https://www.sbert.net/docs/pretrained_models.html)), based on their scores and overall need of resources. The technical characteristics of the selected models are summarized in Table 1. For each model, we indicate the vector dimensionality, the maximum sequence length, the number of parameters, and the ER works that have used it for blocking or matching. Below, we briefly describe the selected models per category in chronological order. Most models have several versions (e.g., base, large) that differ in the number of learned parameters. To ensure a fair comparison, we consider the base version of each model. We also conducted some tests with larger versions, without observing notable differences in the results.
### Static pre-trained models
These models were introduced to capture semantic similarity in text, encapsulating knowledge from large corpora. They replace the traditional high-dimensional sparse vectors with low-dimensional dense ones of fixed size, which define a mathematical space, where semantically similar words tend to have low distance.
**Word2Vec**(Wang et al., 2019; Wang et al., 2019) is a shallow two-layer neural network that receives a corpus as input and produces the corresponding vectors per word. It employs a local context window, as a continuous bag-of-words (order-agnostic) or a continuous skip-gram (order-aware). The latter can link words that behave similarly in a sentence, but fails to utilize the statistics of a corpus.
**GloVe**(GloVe, 2018) combines matrix factorization, i.e., the global co-occurrence counts, with a local context window, i.e., word analogy. It is trained on large corpora, such as Wikipedia, to provide pre-trained vectors for general use. Since it operates on a global
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Model & Dim. & \(Seq.\) & Param & Blocking & Matching \\ \hline \hline Word2Vec (WC) & 300 & - & - & (Li et al., 2019) & (Li et al., 2019) \\ FastText (FT) & 300 & - & & (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019) & (Li et al., 2019; Li et al., 2019; Li et al., 2019) \\ GloVe (GE) & 300 & - & - & (Li et al., 2019; Li et al., 2019) & (Li et al., 2019; Li et al., 2019) \\ \hline BERT (BT) & 768 & 100 & 110M & - & (Li et al., 2019; Li et al., 2019; Li et al., 2019) \\ AlBERT (AT) & 768 & 100 & 12M & - & (Li et al., 2019) \\ RoBERTa (RA) & 768 & 100 & 125M & - & (Li et al., 2019; Li et al., 2019; Li et al., 2019) \\ DistilBERT (DT) & 768 & 100 & 66M & - & (Li et al., 2019; Li et al., 2019; Li et al., 2019) \\ XLNet (XT) & 768 & 100 & 110M & - & (Li et al., 2019; Li et al., 2019; Li et al., 2019) \\ \hline S-MPNet (ST) & 768 & 384 & 110M & - & - \\ S-GTR-T (S5) & 768 & 512 & 110M & - & - \\ S-DistilRoBERTa (SA) & 768 & 512 & - & - & - \\ S-MiniLM (SM) & 384 & 256 & 22M & - & - \\ \hline \end{tabular}
\end{table}
Table 1. The language models used in our experiments.
dictionary, it only recognizes words with a specific spelling and fails to handle slight modifications.
**FastText**(Bordes et al., 2017) treats each word as a group of character n-grams instead of a single string. It is trained to vectorize these n-grams and then represents each word as the sum of its underlying n-gram vectors.
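Since static models produce one vector per token, an entity description must be aggregated into a single vector before indexing. The aggregation strategy is not spelled out in this excerpt, so the following is only an illustrative sketch assuming the common choice of averaging the token vectors; `word_vectors` is a hypothetical token-to-vector lookup (e.g., loaded GloVe or FastText vectors):

```python
import numpy as np

def entity_embedding(record_text: str, word_vectors: dict, dim: int = 300) -> np.ndarray:
    """Embed a schema-agnostic entity description (all attribute values
    concatenated into one string) by averaging its known token vectors."""
    tokens = record_text.lower().split()
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:                       # no known token: fall back to a zero vector
        return np.zeros(dim, dtype=np.float32)
    return np.mean(vecs, axis=0)
```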
### BERT-based models
In static models, each word has a single representation, which is restrictive, since words often have multiple meanings based on their context. Context-awareness was introduced by the transformer models (Tang et al., 2018), which are a natural evolution of Encoders-Decoders (Tang et al., 2019). The latter have many advantages, such as handling larger areas of text around a given word. Nonetheless, they cannot encapsulate any significant relationships between words. This has been fixed with the introduction of Attention (Bordes et al., 2017), which facilitates the communication between each encoder / decoder, by sharing all of the corresponding hidden states and not just the last one. An extra optimization, suggested by the Transformer model, is Multi-Head attention, which allows the attention heads to run in parallel. Another useful approach is the use of Positional Encoding for each token, which addresses polysemy, i.e., the fact that the same word has different meanings in different sentences.
**BERT**(Han et al., 2017), which stands for "Bidirectional Encoder Representations from Transformers", was the next major step in the evolution of the transformer models. Its main contribution is the use of multiple transformers - only the encoder part, since it is a language representation model - to pre-train vectors for general use. These vectors can be further fine-tuned by adding an output layer for a wide variety of tasks. BERT is trained on two tasks: masked language modeling (MLM) and next sentence prediction (NSP). The former is token-based, since it tries to predict a masked token based on the unmasked tokens of a sentence. NSP is sentence-based, since it receives a first sentence as input and tries to predict whether a second sentence can follow it.
**AlBERT**(Kal
\begin{table}
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline \multicolumn{1}{|c|}{D} & \multicolumn{1}{c|}{D\({}_{1}\)} & \multicolumn{1}{c|}{D\({}_{2}\)} & \multicolumn{1}{c|}{D\({}_{3}\)} & \multicolumn{1}{c|}{D\({}_{4}\)} & \multicolumn{1}{c|}{D\({}_{5}\)} & \multicolumn{1}{c|}{D\({}_{6}\)} & \multicolumn{1}{c|}{D\({}_{7}\)} & \multicolumn{1}{c|}{Ds} & \multicolumn{1}{c|}{D\({}_{8}\)} & \multicolumn{1}{c|}{D\({}_{9}\)} & \multicolumn{1}{c|}{D\({}_{10}\)} & \multicolumn{1}{c|}{D\({}_{1_{0}}\)} & \multicolumn{1}{c|}{D\({}_{5_{1}}\)} & \multicolumn{1}{c|}{D\({}_{2_{2}}\)} & \multicolumn{1}{c|}{D\({}_{8_{3}}\)} & \multicolumn{1}{c|}{D\({}_{8_{4}}\)} & \multicolumn{1}{c|}{D\({}_{8_{5}}\)} & \multicolumn{1}{c|}{D\({}_{9_{6}}\)} & \multicolumn{1}{c|}{D\({}_{7_{7}}\)} \\ \hline \hline \(\text{D}{\text{at}}_{1}\) & \(\text{Rest}_{1}\) & Abt & Amz & DBLP & IMDb & IMDb & IMDb & \multicolumn{1}{c|}{Wmt} & DBLP & IMDb & \multicolumn{1}{c|}{D\({}_{10K}\)} & \multicolumn{1}{c|}{\(D_{50K}\)} & \multicolumn{1}{c|}{\(D_{100K}\)} & \multicolumn{1}{c|}{\(D_{200K}\)} & \multicolumn{1}{c|}{\(D_{300K}\)} & \multicolumn{1}{c|}{\(D_{1}\)} & \multicolumn{1}{c|}{\(D_{20M}\)} \\ \(\text{Dat}_{2}\) & \(\text{Rest}_{2}\) & Buy & GPr. & ACM & TMDb & TVDB & \multicolumn{1}{c|}{Ama} & Scholar & DBP & \multicolumn{1}{c|}{D\({}_{10K}\)} & \multicolumn{1}{c|}{\(D_{50K}\)} & \multicolumn{1}{c|}{\(D_{100K}\)} & \multicolumn{1}{c|}{\(D_{200K}\)} & \multicolumn{1}{c|}{\(D_{300K}\)} & \multicolumn{1}{c|}{\(D_{1}\)} & \multicolumn{1}{c|}{\(D_{2M}\)} \\ \hline \(\text{|V}_{1}|\) & 339 & 1,076 & 1,354 & 2,616 & 5,118 & 5,118 & 6,056 & 2,554 & 2,516 & 27,615 & 10K & 50K & 100K & 200K & 300K & 1M & 2M \\ \(\text{|V}_{2}|\) & 2,256 & 1,076 & 3,039 & 2,294 & 6,056 & 7,810 & 7,810 & 22,074 & 61,353 & 23,182
perform a _nearest neighbor search_ (NNS) to find the \(k\) most similar vectors to it. For the datasets for Clean-Clean ER in Table 2(a), we perform exact NNS. For each entity in the smallest of the two datasets, we compute all similarity scores and return the \(k\) nearest neighbors. For the datasets for Dirty ER in Table 2(b), which are significantly larger, we perform approximate NNS. According to the state of the art, we follow [24], leveraging an HNSW [29] index. This is a graph-based index, serving as an approximation to a Delaunay graph, but with long-range links as well, to support the small-world navigation property. To avoid the connectivity issues raised by high-degree nodes in the original NSW, HNSW introduces a multi-layer separation of links based on their degree as well as on an advanced heuristic for better selecting the neighbors per node. The resulting index offers very good querying times with the trade-off of a large overhead in building the index. In our experiments, we used the implementation provided by FAISS.5 First, we vectorize and index all input entities. Then, we query the index with every entity \(e\) to retrieve its \(k\) approximate nearest neighbors in terms of Euclidean distance.
Footnote 5: [https://faiss.ai/cpp_api/struct/struct/faiss_1_1IndexHNSW.html](https://faiss.ai/cpp_api/struct/struct/faiss_1_1IndexHNSW.html)
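Below is a minimal sketch (not our exact code) of this vector-based blocking step, using the FAISS library mentioned above: entities are vectorized beforehand, indexed, and each query entity retrieves its \(k\) nearest neighbors under Euclidean distance. `IndexFlatL2` corresponds to the exact search used for Clean-Clean ER, while `IndexHNSWFlat` corresponds to the approximate HNSW search used for the larger Dirty ER datasets; the HNSW connectivity parameter (32 below) is an illustrative choice, not necessarily our configuration:

```python
import numpy as np
import faiss  # assumed available, e.g. via pip install faiss-cpu

def nns_blocking(index_vecs: np.ndarray, query_vecs: np.ndarray,
                 k: int = 10, exact: bool = True):
    """Return, for every query entity, its k (approximate) nearest neighbors
    among the indexed entities, in terms of Euclidean distance."""
    d = index_vecs.shape[1]
    index = faiss.IndexFlatL2(d) if exact else faiss.IndexHNSWFlat(d, 32)
    index.add(index_vecs.astype(np.float32))            # build the index once
    dists, neighbors = index.search(query_vecs.astype(np.float32), k)
    return dists, neighbors                              # both of shape (|queries|, k)
```

The candidate pairs produced by blocking are then simply the (query, neighbor) pairs returned above.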
**Unsupervised Matching.** This is considered a clustering task, where each cluster of entities corresponds to a different real-world object. For two datasets in Clean-Clean ER, we model the task as bipartite graph matching, where each entity from the one dataset is matched with at most one entity from the other. To solve this, we apply _Unique Mapping Clustering_ (UMC) [21], which achieves both high effectiveness and time efficiency [40].
The entire process is as follows. First, we calculate the similarity score between all pairs of entities using the following formula: \(sim(e_{i},e_{j})=1/(1+dist(e_{i},e_{j}))\), where \(dist\) denotes the Euclidean distance between the embedding vectors \(v_{i}\) and \(v_{j}\) of the entities \(e_{i}\) and \(e_{j}\), respectively. We do not perform blocking here to avoid its impact on the effectiveness of UMC. As execution time here we measure exclusively the run-time of UMC, i.e., assuming that all similarity scores have been computed already. UMC iterates over all pairs in descending order of similarity score, until all entities from the smallest dataset have been matched, or there are no more pairs that exceed a given similarity threshold \(\delta\). This threshold is the only configuration parameter of UMC. To fine-tune it, we consider all values in \([0.05,0.95]\) with a step of 0.05 and select as optimal the one maximizing F-measure. In this way, our experimental analysis considers the maximum effectiveness per language model, comparing their potential. In practical settings, though, specifying the optimal similarity threshold for UMC is a non-trivial task.
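As a concrete illustration of the step just described, the sketch below reproduces the greedy logic of Unique Mapping Clustering as summarized above (descending similarity order, one-to-one matching, stop at threshold \(\delta\)); it is a simplified reconstruction, not the original implementation of [21]:

```python
def unique_mapping_clustering(scored_pairs, delta=0.5):
    """scored_pairs: iterable of (similarity, i, j), where i indexes the smaller
    dataset and j the larger one; similarity = 1 / (1 + Euclidean distance).
    Returns a one-to-one matching as a list of (i, j) pairs."""
    matched_i, matched_j, matches = set(), set(), []
    for sim, i, j in sorted(scored_pairs, key=lambda p: p[0], reverse=True):
        if sim < delta:                      # no remaining pair exceeds the threshold
            break
        if i not in matched_i and j not in matched_j:
            matched_i.add(i)
            matched_j.add(j)
            matches.append((i, j))           # each entity is matched at most once
    return matches
```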
To ensure generality of our results, besides UMC we also tested two other highly-performing algorithms from [40]: _Exact Clustering_, which matches two entities if they are mutually the best matches, and _Kiraly Clustering_, which provides a linear time approximation to the maximum stable marriage problem. In both cases, the results exhibited very high (>0.9) Pearson correlation with those of UMC. This can be seen in Figure 2.
**Supervised Matching.** This is considered a binary classification task, classifying each candidate pair as match or non-match. Typically, a training, validation and testing set are used to learn the classification model, choose its optimal configuration, and assess its performance on new instances, respectively.
To examine the performance of BERT and SentenceBERT models, we combine them with _EMTransformer_[3]. We selected this approach among the open-source deep learning-based matching algorithms, because it achieves state-of-the-art performance, while relying exclusively on the embedding models. This is in contrast to DITTO [25], which leverages external information, such as POS tagging. However, EMTransformer is incompatible with the static models. To address this issue, we combine them with the state-of-the-art approach for this type of models, namely DeepMatcher [33].
Note that this analysis includes models that are supported by EMTransformer or DeepMatcher with minor adjustments to the code. S-GTR-T5 and Word2Vec are thus excluded, since the existing implementations could not support them (EMTransformer cannot handle the sequence2sequence input required by the former, while the latter is not in the format required by DeepMatcher). Note also that the original implementation of EMTransformer disregards the validation set and evaluates each model directly on the testing set. However, this results in overfitting, as noted in [26]. We modified the code so that it follows the standard approach in the literature: for each trained model, the validation set is used to check whether it maximizes F1 and this model is then applied to the testing set [25, 33]. For this analysis, we used the five datasets shown in Table 3.
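For reference, the following sketch shows the kind of pair-classification setup that EMTransformer-style matchers rely on, using the Hugging Face `transformers` library (an assumption on our part; the entity strings and the DistilBERT checkpoint are purely illustrative). In the actual experiments the classification head is fine-tuned on the training split and selected on the validation split; here we only show a single forward pass:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"                 # any BERT-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical serialized entity descriptions (all attribute values concatenated).
left = "sony cyber-shot dsc-w370 14.1 mp digital camera silver"
right = "sony dsc-w370 cybershot 14.1mp camera"

batch = tokenizer(left, right, truncation=True, max_length=100, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits                     # shape: (1, 2)
prob_match = torch.softmax(logits, dim=-1)[0, 1].item()
```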
## 5. Comparison on Effectiveness
### Blocking
We measure the recall of the resulting candidate pairs, which is also known as pairs completeness [8, 19, 39]. Recall is the most critical evaluation measure for blocking, as it typically sets the upper bound for the subsequent matching step, i.e., a low blocking recall usually yields even lower matching recall, unless complex and time-consuming iterative algorithms are employed [7, 15]. In contrast, precision is typically low after blocking, due to the large number of false positives, but significantly raises after matching. Thus, in terms of precision, all models have the same denominator (i.e. total number of candidates) and precision can be omitted, since recall and precision behave the same.
The experimental outcomes are shown in Figure 3 for each category of models, for \(k=10\). More specifically:
In static models, GloVe is the top performer in the vast majority of cases, leaving FastText and Word2Vec in the second and third place, respectively. On average, GloVe outperforms FastText by 19%,
Figure 2. Pearson Correlation between Unique Mapping Clustering (UMC), Exact Clustering (EXC) and Kiraly Clustering (KRC).
except for \(D_{1}\), \(D_{8}\) and \(D_{9}\), where FastText takes the lead. Compared to Word2Vec, GloVe's recall is higher by 12.5%, on average, except for \(D_{7}\), \(D_{8}\) and \(D_{10}\) (which abounds in noisy and missing values).
Among the BERT models, we can see that XLNet and AlBERT have very poor performance on almost all datasets. XLNet relies on permuted language modeling (PLM), which tries to model dependencies between words and phrases, instead of masked language modeling (MLM), which is adopted in BERT. This proves to be less effective in our task, where the input text is constructed by concatenating several different attributes, thus not constituting a coherent sentence. AlBERT trains only one encoder to produce a lighter version of BERT and shares its weights with the remaining encoders. This also turns out to perform poorly here. As a result, both models suffer from poor discriminativeness, i.e., they assign low similarity scores to both matching and non-matching pairs of entities. For the same reason, albeit to a lesser extent, the same applies to the remaining BERT models, which have a mediocre performance, with DistilBERT being the best one in all datasets.
Finally, all SentenceBERT models achieve very high recall across all datasets, except for the extremely noisy and sparse \(D_{10}\). The best model in this category is S-GTR-T5. This has to do with the base model that each SentenceBERT model relies on. For example, GTR-T5 is trained on a dataset with more than 2B pairs, while the three other base models are trained on a collection of datasets that amounts to 1B+ pairs.
Finally, comparing the three groups, we can see that the point made in [49] holds in our task as well: BERT models, if not fine-tuned, achieve lower recall than the static models (e.g., GloVe), while the SentenceBERT models have the best overall performance. There are two main reasons for the latter: they are designed for sentence rather than word embeddings, and they are trained on wider corpora.
**Summary.** The above patterns are summarized in Figure 4, which reports the ranking of each model per dataset with respect to recall
Figure 3: Blocking recall per model across all datasets in Table 2(a). Each line of plots corresponds to a value of \(k\in\{1,5,10\}\).
for \(k\)=10, with the rightmost column indicating the average position per model. We observe that the first four places are occupied by the SentenceBERT models, with S-GTR-T5 ranking first in most datasets. It is interesting that none of these models falls below the fourth place. There are two reasons for the superiority of SentenceBERT models: (a) They are inherently capable of transforming a sentence into an embedding vector, unlike the other two types, which are crafted for vectorizing individual tokens. (b) They encapsulate knowledge from wider corpora, while their final layer comes with reasonable weights already in its pre-trained form (unlike the BERT models).
The next three ranking positions mostly correspond to the static models, with GloVe having the highest average position. Yet, FastText is the most stable one, fluctuating between positions 5 and 7, unlike the other two models, which fall up to the \(10^{th}\) place. Finally, BERT-based models are mapped to the last five positions. DistilBERT is the best one, ranked \(8^{th}\) on average, while AlBERT and XLNet are constantly ranked \(11^{th}\) and \(12^{th}\), respectively. The BERT models underperform the static ones, because they suffer from poor discriminativeness, due to the lack of fine-tuning, which guarantees their context-aware functionality. The predetermined weights in their final layer yield very low scores to most pairs of entities, regardless of whether they are matching or not. This is not true for the static models, despite their context-agnostic functionality and their lower dimensionality.
These patterns suggest a high correlation in terms of effectiveness between the models that belong to the same category. Indeed, the Pearson correlation with respect to recall for \(k\)=10 between the models of each category is quite high (\(\geq 0.9\)), as shown in Figure 5. The correlation is equally high between the pre-trained static models and the SentenceBERT ones, due to the high performance of both categories. In contrast, the correlation between BERT-based and the other two categories is significantly lower: it fluctuates between 0.58 and 0.85 for the SentenceBERT models and between 0.69 and 0.86 for the static ones. In the latter case, DistilBERT is an exception, fluctuating between 0.84 and 0.94, since it is the best-performing BERT model and, thus, it is closer to the static ones.
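As a small illustration of how such a correlation figure can be computed (assuming SciPy is available; the recall vectors below are placeholders, one value per dataset for each of two models):

```python
from scipy.stats import pearsonr

# Hypothetical per-dataset recall values (k=10) for two language models.
recall_model_a = [0.99, 0.95, 0.97, 1.00, 0.93, 0.91, 0.96, 0.94, 0.92, 0.60]
recall_model_b = [0.98, 0.93, 0.96, 0.99, 0.90, 0.88, 0.95, 0.92, 0.90, 0.55]

r, p_value = pearsonr(recall_model_a, recall_model_b)
print(f"Pearson correlation: {r:.2f}")
```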
To a greater or lesser extent, all BERT-based models suffer from poor discriminative power in the tasks of blocking and unsupervised matching, for the reasons explained above. This is shown in Figure 6, which depicts the distribution of similarity scores for datasets D2 and D4 per language model (the positive class per dataset appears in the left column and the negative one in the right). We observe high discriminativeness for the SentenceBERT models, lower for the static ones, while the BERT ones fail to separate the two distributions.
Figure 4: Model ranking wrt blocking recall (lower is better).
Figure 5: Pearson correlation of models wrt blocking recall.
Figure 6: Discriminativeness for all models for cases D2 and D4 in the classes of Match (Positive) and Non-match (Negative).
**Comparison to SotA.** The rightmost column in Figure 3 compares the best performing language model, S-GTR-T5, with the state-of-the-art blocking approach that is based on deep learning and embedding vectors: DeepBlocker's Auto-Encoder with FastText embeddings. S-GTR-T5 consistently outperforms DeepBlocker's recall to a significant extent. The only exceptions are \(D_{1}\) and \(D_{4}\), where both methods achieve practically perfect recall. The reason is that \(D_{1}\) involves a very low number of duplicate pairs in relation to the size of each data source, while \(D_{4}\) contains relatively clean entities with long textual descriptions that are easy to match. In the remaining 8 datasets, S-GTR-T5's recall is 15% higher than DeepBlocker's.
Seemingly, this can be attributed to the FastText embeddings used by DeepBlocker. However, DeepBlocker is a comprehensive blocking method that uses FastText in a more complex way than the mere nearest neighbor search of S-GTR-T5. For example, a crucial component of DeepBlocker is self-supervision, which automatically labels a random sample of the candidate pairs in order to train its classification model. As a result, DeepBlocker is a stochastic approach, unlike S-GTR-T5, which exclusively performs nearest neighbor search. The ablation analysis in (Wang et al., 2018) indicates that all DeepBlocker's components have a significant contribution to the final outcome. For this reason, the role of the language models is restricted and, thus, the performance of DeepBlocker exhibits a low correlation with that of FastText+NNS.
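For illustration, the following sketch shows the kind of plain nearest-neighbor blocking attributed to S-GTR-T5 above, in contrast to DeepBlocker's more complex, self-supervised pipeline. It assumes the sentence-transformers and faiss packages; the checkpoint name and the toy entities are illustrative stand-ins, not the exact setup of the experiments.

```python
# Minimal sketch of embedding-based blocking via exact nearest-neighbor search.
import faiss
from sentence_transformers import SentenceTransformer

def block(source_entities, target_entities, k=10):
    """Return, for every source entity, its k candidate matches from the target collection."""
    # 'gtr-t5-base' is a stand-in checkpoint; entities are serialized attribute values.
    model = SentenceTransformer("sentence-transformers/gtr-t5-base")
    src = model.encode(source_entities, normalize_embeddings=True)
    tgt = model.encode(target_entities, normalize_embeddings=True)

    index = faiss.IndexFlatIP(tgt.shape[1])    # inner product == cosine on normalized vectors
    index.add(tgt)
    scores, neighbors = index.search(src, k)   # exact k-NN search
    return [(i, int(j), float(s))
            for i, row in enumerate(neighbors)
            for j, s in zip(row, scores[i])]

candidates = block(["apple iphone 12 64gb black"],
                   ["iphone 12 black 64 gb", "galaxy s21"], k=2)
```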
#### 5.1.1. Scalability.
Blocking must scale to large data volumes in order to restrict the input of matching to manageable levels, even in cases with millions of entities. Therefore, we need to assess how well the selected language models scale as the size of the input data increases. To this end, we conduct an analysis using the seven datasets in Table 2(b). The results appear in Figure 7.
Figure 7(a) shows that recall consistently decreases for all models as we move from \(D_{10K}\) to \(D_{2M}\). This is expected, given that the number of candidates increases quadratically with the size of the input data. The best performance clearly corresponds to S-GTR-T5: its recall over \(D_{2M}\) is quite satisfactory both in absolute terms (0.800) and in relation to the initial one over \(D_{10K}\) (0.962), as it drops by just 17%. S-GTR-T5 has been trained on a much larger and richer corpus. The advantage that this offers to S-GTR-T5 becomes much more evident when testing it on the synthetic datasets, which are more challenging due to the much larger number of candidate pairs. The second best approach in most datasets is FastText, whose recall is reduced by 54%, from 0.901 to 0.415. On the other extreme lie AlBERT and XLNet: their recall is lower than 0.18 across all datasets (even for \(D_{10K}\)) and is reduced by 2/3 over \(D_{2M}\). The rest of the models fluctuate between these two extremes and can be arranged into three groups according to their performance. The best group comprises S-DistilRoberta and S-MiniLM, which start from \(\sim\)0.750 over the smallest dataset and end up \(\sim\)40% lower at 0.450. The worst group includes Word2Vec, DistilBERT, BERT and RoBERTa, whose recall drops by 56%-66%, falling far below 0.2 over \(D_{2M}\). GloVe and S-MPNet lie in the middle of the first two extremes: their recall is reduced by less than 50% and exceeds 0.32 over \(D_{2M}\).
We observe almost the same patterns with respect to precision in Figure 7(b). This is due to the linear relation between the two measures, as explained earlier. Only minor variations occur in the context of Dirty ER, due to the different number of redundant candidate pairs, which are counted once (a candidate pair \(\langle\epsilon_{i},\epsilon_{j}\rangle\) is redundant if \(\epsilon_{j}\) is included in the nearest neighbors of \(\epsilon_{i}\) and vice versa). Hence, there is a gradual decrease in precision for all models as the input size increases. This is larger than the decrease in recall by 2-3% in most cases, because the denominator of recall (i.e., the number of existing duplicates) increases at a slower pace than the denominator of precision (i.e., number of distinct candidate pairs).
### Unsupervised Matching
Figure 8 reports the performance of all models in this task.
Among the static models, based on the average distance from the maximum f-measure (F1), GloVe is the best one (31.9%) followed by FastText (37.4%) and Word2Vec (39.4%). In absolute terms, their F1 remains rather low in all datasets except for the bibliographic ones: in \(D_{4}\), it exceeds 0.9, lying very close to the SentenceBERT models, but in \(D_{9}\), only FastText and GloVe manage to surpass 0.7. The reason is that the entities from Google Scholar are much noisier and involve far more terminology than those in \(D_{4}\). In all other cases, their F1 falls (far) below 0.57.
Among the BERT-based models, the worst performance is consistently exhibited by XLNet and AlBERT, as their F1 does not exceed 0.37 in any dataset. On average, their F1 is almost an order of magnitude (\(\sim\)87%) lower than the top one. The reason is the same as in Blocking: both were trained for a different task and cannot perform well in the task of Matching, without further fine-tuning. At the other extreme lie DistilBERT and RoBERTa, with an average distance of \(\sim\)55%. Finally, BERT fluctuates between these two extremes, with an average distance from the top equal to 62%. These three models score an acceptable F1 (\(\sim\)0.9) in the clean and easy \(D_{4}\), but remain (far) below 0.54 in all other cases.
Among the SentenceBERT models, the best one is S-GTR-T5, achieving the highest F1 in seven out of the 10 datasets. In the remaining datasets, it is ranked second or third, lying very close to the top performer. On average, its distance from the maximum F1 is just 0.7%, being the lowest among all models. The worst case corresponds to \(D_{6}\), where it lies 4.9% lower than the best method.
Figure 7. Scalability effectiveness over the synthetic datasets in Table 2(b). The horizontal axis indicates the number of input entities.
The second best model is S-MiniLM, having the highest F1 in three datasets and the second lowest average distance from the maximum F1 (6.5%). The two next best models are S-MPNet and S-DistilRoBERTa, whose average distance from the top is 10.5%.
In absolute terms, their best performance corresponds to the bibliographic datasets, \(D_{4}\) and \(D_{9}\), where their F1 remains well over 0.9. The reason is that the long titles and list of authors facilitate the distinction between matching and non-matching entities. Also high (\(\sim\)0.8) is their F1 over \(D_{2}\), which combines long textual descriptions with a 1-1 matching between the two data sources (i.e., every entity from the one source matches with one entity from the other). In all other datasets, their F1 ranges from 0.35 (\(D_{10}\)) to 0.77 (\(D_{7}\)), due to the high levels of noise they contain.
Finally, it is worth stressing that, as in Blocking, S-GTR-T5 outperforms the best models of the other two types, since all other models do not surpass 0.5 in F1, except for the relatively clean and easy \(D_{4}\). In contrast with Blocking, though, in Matching we can perform fine-tuning to check whether the BERT models can perform better than the other two types. See Section 5.3 for more details.
**Summary.** Figure 9 summarizes the ranking position of each model per dataset. The SentenceBERT models typically fluctuate between positions 1 and 4, the static ones between 5 and 7 and the BERT-based ones between 8 and 12. Moreover, the Pearson correlation of their F1 in Figure 10 shows an almost perfect dependency between the SentenceBERT models and a very high one inside the group of static and BERT-based ones. The latter are weakly correlated with the SentenceBERT models. The static models have moderate correlation (\(0.7-0.9\)) with the other two groups.
Overall, the relative performance of the three model types follows the same patterns as in Blocking, due to the same root causes. The SentenceBERT models outperform all others, as they are inherently crafted for vectorizing entire "sentences", while they have been trained on much larger corpora.
Figure 8: Precision, Recall and F1 per model across all datasets in Table 2(a).
At the other extreme lie the BERT models, which lack fine-tuning, with the predefined weights in their final layer yielding low similarities for matching and non-matching pairs alike. The static models lie in the middle of these extremes, due to the context-agnostic, word-level embeddings.
**Comparison to SotA.** The state-of-the-art in Unsupervised Matching is ZeroER (Wang et al., 2017), which converts every pair of entities into a feature vector whose dimensions correspond to similarity functions. At its core lies the assumption that the resulting feature vectors are generated by a Gaussian Mixture Model with two mixture components (one for each matching category). Adaptive feature regularization is leveraged to avoid overfitting, while transitivity improves its accuracy. ZeroER uses Magellan's overlap blocking to reduce the search space to a small set of candidate pairs.
We compare ZeroER with an end-to-end framework based on the best language model for Blocking and Matching, i.e., S-GTR-T5. We actually use the above matching algorithm with the similarity threshold set to 0.5 by default, but instead of utilizing all pairs of entities, every entity of the smallest entity collection is allowed only \(k\)=10 candidates, produced by Blocking with exact NNS.
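As a rough illustration of this end-to-end framework, the sketch below turns the blocked candidate pairs into matches by greedily keeping the most similar one-to-one pairs above the 0.5 threshold. It is a simplified stand-in for the matching algorithm used here, not a faithful reimplementation.

```python
# Illustrative sketch: thresholded, greedy one-to-one matching over blocking output.
def match(candidates, threshold=0.5):
    """candidates: iterable of (source_id, target_id, cosine_similarity) tuples."""
    matched_src, matched_tgt, matches = set(), set(), []
    # Process pairs from most to least similar; keep at most one match per entity.
    for src, tgt, sim in sorted(candidates, key=lambda c: -c[2]):
        if sim < threshold:
            break
        if src in matched_src or tgt in matched_tgt:
            continue
        matched_src.add(src)
        matched_tgt.add(tgt)
        matches.append((src, tgt, sim))
    return matches
```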
The relative performance of the two approaches appears in Figure 8(d). ZeroER lacks an estimated performance for half the datasets, because it did not terminate after 6 hours - unlike S-GTR-T5, which consistently takes less than 1 minute, as shown in Table 5(b). In \(D_{4}\), both methods have the same, almost perfect performance, due to the rather clean data and the relatively easy task. In \(D_{1}\) and \(D_{2}\), S-GTR-T5 outperforms ZeroER to a significant extent. ZeroER actually yields \(F_{1}\)=0 on \(D_{1}\), because \(D_{1}\) contains many missing and misplaced values, which cannot be supported by ZeroER's schema-based functionality (unlike the schema-agnostic settings of S-GTR-T5). \(D_{2}\) conveys large textual values, which are also barely suitable for most similarity measures employed by ZeroER. In contrast, \(D_{5}\) and \(D_{7}\) contain short attribute values that describe movies (e.g., actor names). These are ideal for the features of ZeroER, which thus achieves much higher effectiveness than S-GTR-T5, which is not crafted for rare, domain-specific (terminological) textual values.
Overall, S-GTR-T5 performs significantly better than or at least equally well as ZeroER in most datasets, despite its parameter-free functionality, while being orders of magnitude faster (cf. Section 6.3).
### Supervised Matching
Figures 11(a)-(c) show the F1 of all models in this task over the datasets in Table 3. We observe that the language models can be distinguished into two groups according to their context awareness. The dynamic models consistently exhibit the highest performance. RoBERTa is actually the most robust model in terms of effectiveness. It achieves the highest F1 in most datasets, ranking first on average. In fact, its mean distance from the top F1 across all datasets amounts to just 0.5%. It is followed in close distance by the second best model, S-MPNet, which is top performer in one dataset (\(DSM_{3}\)) and its average distance from the top F1 is 0.7%. The rest of the models are sorted in the following order according to their average distance from the maximum \(F_{1}\) per dataset: BERT, AlBERT, XLNet, S-DistilRoBERTa, S-MiniLM.
Even the least effective dynamic model, though, is just 5% worse than the best one, on average. The reason for this is the strong correlation between all considered models: high effectiveness for one model on a specific dataset implies similarly high effectiveness for the rest of the models. This pattern should be partially attributed to the \(DSM_{3}\) and \(DSM_{4}\) datasets, where the difference between the maximum and the minimum F1 is less than 1.3%. These datasets convey relatively clean values, even though in both cases artificial noise has been inserted in the form of misplaced values. This means that the duplicate entities share multiple common tokens in the schema-agnostic settings we are considering. As a result, even classic machine learning classifiers that use string similarity measures as features achieve very high F1 (\(>\)0.9) in these datasets (see Magellan in (Zhu et al., 2017; Wang et al., 2017)). The rest of the datasets are more challenging, due to the terminologies they involve (e.g., product names). As a result, the difference between the maximum and minimum F1 of dynamic models ranges from 4.9% (\(DSM_{1}\)) to 17% (\(DSM_{2}\)).
The second group of models includes the static, context-agnostic ones, which perform relatively poorly, even though they are combined with the best configuration of DeepMatcher, i.e., the state-of-the-art algorithm for this task and type of models.
Figure 10. Pearson correlation of language models with respect to Unsupervised Matching F1.
Figure 9. The ranking position of each model with respect to Unsupervised Matching F1 per dataset. Lower is better.
GloVe and FastText are actually combined with a two-layer fully connected ReLU HighwayNet classifier followed by a softmax layer in the classification module, in combination with a hybrid model for the attribute similarity vector module. On average, GloVe and FastText underperform the top model per dataset by 22% and 37%, respectively. Only in \(DSM_{3}\) and \(DSM_{4}\), where all dynamic models exceed 0.95, do the two static models exhibit high performance, with their F1 within 5% of the maximum one (this is another indication of the straightforward matching task posed by the bibliographic data of these two datasets). Note that FastText consistently outperforms GloVe across all datasets, since its character-level functionality is capable of addressing the out-of-vocabulary tokens that arise due to the domain-specific terminology of each dataset, unlike GloVe.
Overall, _the static models underperform the dynamic ones in most cases, as reported in the literature (Zhu et al., 2017), while the BERT-based models match the SentenceBERT ones, unlike for the previous ER tasks, due to the fine-tuning of their last layer. SentenceBERT models also benefit from fine-tuning, but to a lesser extent, probably because the sentence representing each entity is constructed in an ad-hoc manner, lacking any cohesiveness._
**Comparison to SotA.** Figure 11(d) depicts the performance of the state-of-the-art supervised matching algorithms that leverage static and dynamic models, DeepMatcher+ (Krizhevsky et al., 2015) and DITTO (Zhu et al., 2017). We actually consider their optimized F1 that is reported in (Zhu et al., 2017).
Comparing DITTO to the dynamic models, we observe that its F1 is directly comparable to the best performing language model in each dataset. In \(DSM_{2}\), \(DSM_{3}\) and \(DSM_{4}\), DITTO's F1 is lower by just \(\leq\)0.5%. In \(DSM_{1}\) and \(DSM_{5}\), though, DITTO outperforms all language models by 3% and 1.5%, respectively. This should be attributed to the external knowledge and the data augmentation, whose effect is more clear when comparing DITTO to the language model at its core, i.e., RoBERTa. On average, across all datasets, the latter underperforms DITTO by 1.3%.
Comparing the static models to DeepMatcher+, we observe that its performance is almost identical with FastText in most datasets, because it leverages the same language model. Only in \(DSM_{2}\) and \(DSM_{5}\), DeepMatcher+ performs substantially better, by 9% and 23%, respectively. This should be attributed to its advantage over the original DeepMatcher algorithm, which DeepMatcher+ combines with transfer and active learning. Note that DeepMatcher+ consistently outperforms GloVe by 28%, on average.
Note also that the dynamic models outperform DeepMatcher+ in practically all cases. This is particularly true in \(DSM_{1}\) and \(DSM_{5}\), where even the worst dynamic model (DistilBERT) exceeds DeepMatcher+ by \(\sim\)20%. This verifies the superiority of dynamic models over the static ones in supervised matching, due to their fine-tuning, which optimizes the weights of their last layer to the data at hand.
## 6. Comparison on Efficiency
### Vectorization
**Initialization.** The initialization time of each model is shown in the first line of Table 4. It refers to the time taken to load the necessary data structures in main memory (e.g., a dictionary for the static models and a learned neural network for the dynamic ones), and is independent of the dataset used. The static models are inefficient due to the hash table they need to load into main memory to map tokens (or character n-grams) to embedding vectors. Regarding the dynamic models, we observe that the BERT-based ones are much faster than the SentenceBERT ones, as their average run-time is 4.7\(\pm\)0.8 and 8.9\(\pm\)0.6 seconds, respectively. This is due to the larger and more complex neural models that are used by the latter.
**Transformation.** The rest of Table 4 shows the total time required by each model to convert the entities of each dataset into dense embedding vectors (after the initialization). Word2Vec and GloVe are the fastest models by far, exhibiting the lowest processing run-times in practically all cases.
\begin{table}
\begin{tabular}{|l|c c c|c c c c c|c c c c|} \cline{2-13} \multicolumn{1}{c|}{} & WC & FT & GE & BT & AT & RA & DT & XT & ST & SS & SA & SM \\ \hline \hline Init & 32.4 & 159.7 & 5.87 & 4.72 & 3.99 & 5.28 & 4.3 & 4.73 & 9.19 & 9.84 & 9.33 & 8.36 \\ \hline D1 & 0.0 & 0.2 & 1.9 & 2.6 & 2.4 & 2.3 & 1.3 & 4.0 & 1.1 & 1.1 & 0.7 & 0.5 \\ D2 & 0.1 & 1.6 & 0.2 & 3.1 & 2.4 & 2.3 & 2.2 & 3.3 & 3.4 & 3.4 & 1.8 & 0.9 \\ D3 & 0.9 & 9.6 & 0.4 & 10.1 & 6.7 & 6.3 & 8.6 & 8.3 & 10.3 & 12.4 & 5.8 & 2.3 \\ D4 & 0.2 & 2.5 & 0.3 & 5.9 & 5.2 & 5.3 & 4.2 & 7.8 & 5.1 & 5.4 & 2.8 & 1.4 \\ D5 & 0.4 & 3.8 & 0.4 & 13.6 & 13.0 & 12.9 & 8.7 & 20.3 & 10.7 & 12.1 & 6.0 & 3.2 \\ D6 & 0.6 & 5.5 & 0.5 & 15.4 & 14.3 & 13.8 & 10.4 & 21.3 & 14.9 & 17.2 & 8.2 & 3.9 \\ D7 & 0.4 & 3.6 & 0.4 & 11.9 & 11.2 & 11.4 & 8.0 & 17.7 & 9.7 & 10.4 & 5.3 & 2.8 \\ D8 & 1.0 & 10.0 & 0.8 & 28.8 & 25.0 & 24.2 & 19.5 & 38.9 & 28.5 & 27.3 & 14.9 & 6.7 \\ D9 & 2.4 & 27.7 & 1.9 & 73.4 & 66.0 & 65.5 & 49.9 & 99.9 & 58.0 & 61.5 & 31.4 & 16.0 \\ D10 & 1.1 & 10.6 & 1.2 & 51.1 & 49.1 & 47.9 & 31.6 & 78.9 & 28.9 & 30.2 & 16.5 & 10.3 \\ \hline \end{tabular}
\end{table}
Table 4. Vectorization time in seconds per model and dataset.
Figure 11. Supervised Matching F-Measure per model across all datasets in Table 3.
Word2Vec is an order of magnitude faster than the third most efficient model per dataset, which interchangeably corresponds to FastText and S-MiniLM. Except for \(D_{1}\), GloVe outperforms these two models by at least 6 times. In absolute terms, both Word2Vec and GloVe process the eight smallest datasets, \(D_{1}\)-\(D_{8}\), in much less than 1 second, while requiring less than 2.5 seconds for the two larger ones.
Among the BERT-based models, DistilBERT is significantly faster than BERT, as expected, with its processing time being lower by 33%, on average. Note, though, that it is slower than FastText by \(>\)50%, on average, except for \(D_{3}\). The next most efficient models of this category are ALBERT and RoBERTa, which outperform BERT by 11% and 13%, on average, respectively. XLNet is the most time consuming BERT-based model, being slower than BERT by \(\sim\)30%, on average across all datasets but \(D_{3}\). This can be explained by the fact that \(D_{3}\) has larger sentences, as shown in Table 2(a).
Among the SentenceBERT models, the lowest time is achieved by S-MiniLM, which, as mentioned above, is the third fastest model together with FastText. The second best model in this category is S-DistilRoBERTa, which is slower by 30%, on average. Both models are faster than BERT by 63% and 53%, respectively, on average. In contrast, the quite complex learned model of S-GTR-T5 yields the highest processing time among all models in all datasets but the smallest one. S-MPNet lies between these two extremes.
**Summary.** The static models excel in processing time, but suffer from very high initialization time. Most notably, FastText is the slowest model in all datasets but \(D_{9}\), where S-GTR-T5 exhibits a much higher processing time. This means that FastText's high initialization cost does not pay off in datasets with a few thousand entities like those in Table 2(a). On average, FastText requires 2 minutes per dataset. All other models vectorize all datasets in less than 1 minute, with very few exceptions: BERT over \(D_{9}\), XLNet over \(D_{9}\)-\(D_{10}\), and S-GTR-T5 over \(D_{8}\)-\(D_{10}\).
### Blocking
The execution time of blocking is very low for all models, not exceeding 0.5 seconds in most cases. The only exceptions are the two largest datasets, \(D_{9}\) and \(D_{10}\), which still require less than 2 seconds in all cases. The differences between the various models are rather insignificant. In most cases, the lower end in these ranges corresponds to the language models with the lowest dimensionality, namely the static ones (300) as well as S-MiniLM (385), and the higher end to the rest of the models, which involve 768-dimensional vectors (see Table 1). In more detail, FastText and S-GTR-T5 are consistently the fastest and slowest models, respectively. However, this overhead time is negligible in comparison to the vectorization time in Table 4.
**Comparison to SotA.** Table 5(a) reports the run-times corresponding to the rightmost column in Figure 3. We observe that DeepBlocker is consistently faster than S-GTR-T5 for \(k\)=1 and \(k\)=5 in all datasets but \(D_{10}\). The reason for the slow operation of S-GTR-T5 is its high vectorization cost, which accounts for \(\sim\)99% of the overall blocking time. This explains why its run-time is practically stable per dataset across all values of \(k\). This is expected, though, given that S-GTR-T5 leverages 768-dimensional embedding vectors, compared to the 300-dimensional FastText vectors of DeepBlocker. Yet, for \(k\)=10, DeepBlocker is faster than S-GTR-T5 only in \(D_{2}\) and \(D_{3}\) (by 14.4% and 26%, respectively). The situation is reversed in \(D_{1}\) and \(D_{4}\)-\(D_{8}\), where S-GTR-T5 is faster by 14.1%, on average.
Figure 12. Blocking Time (sec) per method. There are three values for \(k\in\{1,5,10\}\).
The reason is that DeepBlocker does not scale well as the number of candidates per query entity increases, due to the high complexity of the deep neural network that lies at its core. Most importantly, DeepBlocker scales poorly as the size of the input data increases: in \(D_{10}\), S-GTR-T5 is faster by 1.5 times for \(k\in\{1,5\}\) and 3.7 times for \(k\)=10.
#### 6.2.1. Scalability
Figure 13(a) shows that the blocking time of all models scales superlinearly, but subquadratically: as the number of input entities increases by 200 times from \(D_{10K}\) to \(D_{2M}\), the run-times increase by up to 1,435 times. With the exception of the two most efficient models, Word2Vec (742 times) and S-GTR-T5 (873 times), all other models fluctuate between 1,053 (S-MPNet) and 1,435 (XLNet) times. This is mainly attributed to the fact that FAISS(HNSW) trades high indexing time for significantly lower querying time, i.e., the former dominates the latter. During indexing, it requires complex graph-based operations that involve long paths. The larger the input size, the longer these paths get, increasing the cost of their traversal and processing superlinearly.
Figure 13(b) reports the evolution of vectorization time, which increases sublinearly with the size of the input data for all models. The increase fluctuates between 127 (DistilBERT) and 165 (XLNet) times for all BERT-based models, thus remaining far below the rise in the input size (i.e., 200 from \(D_{10K}\) to \(D_{2M}\)). A similar behavior is exhibited by S-GTR-T5, but the rest of the SentenceBERT models achieve even better scalability, as their increase is reduced to 59 (S-MiniLM), 94 (S-MPNet) and 62 (S-DistilRoBERTa). This should be attributed to the initialization of each model, which is independent of the input size and accounts for a large portion of the overall vectorization time of each model, as shown in Table 4. The larger the relative cost of initialization is, the lower the increase in vectorization time as the size of the input increases. As a result, Word2Vec and GloVe raise their run-times by just 2.5 and 4 times, respectively. The former actually remains practically stable up to \(D_{300K}\), because its vectorization time is dominated by its high initialization time across the five smallest datasets. On the other extreme lies FastText, which is the only model that scales linearly with the size of the input data (by 205 times), due to its character-level functionality.
In absolute terms, GloVe is consistently the fastest model in all datasets, but \(D_{10K}\), with Word2Vec ranking second from \(D_{200K}\) on. They vectorize \(D_{2M}\) within 0.8 and 1.3 minutes, respectively. FastText is the second fastest model for the three smallest datasets, but converges to the fastest dynamic model, S-MiniLM, for the larger ones. They both need \(\sim\)11 minutes to process \(D_{2M}\). On the other extreme lies S-GTR-T5, which consistently exhibits the slowest vectorization, with XLNet being the second worst model across all datasets. For \(D_{2M}\), they take 90.3 and 50.6 minutes, respectively.
### Unsupervised Matching
We define the matching time of a model as the time required for applying the UMC algorithm to all pairwise similarities using the optimal pruning threshold. For the most effective model, S-GTR-T5, the matching time is typically much lower than half a second, except for the largest dataset, \(D_{10}\), where it rises to almost two seconds. This high efficiency in the context of a large number of candidate pairs should be attributed to its rather high similarity threshold, which fluctuates between 0.55 and 0.70, with a median (mean) of 0.625 (0.615), thus pruning the vast majority of pairs. Similar behavior is exhibited by the rest of the SentenceBERT models. The static models also apply UMC within a few seconds in most datasets, depending on their similarity threshold. This is less frequently true for the BERT-based models, which suffer from low discriminativeness, thus yielding lower thresholds and significantly more pairs to be processed. As a result, one of the BERT-based models is the slowest one in most datasets - typically AlBERT or XLNet.
**Comparison to SotA.** Table 5(b) demonstrates that ZeroER's run-time is dominated by its blocking time, which is higher than the total execution time of S-GTR-T5 in most datasets. Due to its simple functionality, the end-to-end S-GTR-T5 approach is two orders of magnitude faster than ZeroER in all datasets but \(D_{1}\). Its run-time is dominated by the vectorization and the indexing of the input entities, with the matching time accounting for a few milliseconds even for the largest dataset, due to blocking.
Figure 13. Scalability over the synthetic datasets in Table 2(b). The horizontal axis indicates the number of input entities.
\begin{table}
\begin{tabular}{|l|r r r|r r r||r r r|} \hline \multicolumn{3}{|c|}{DeepBlocker} & \multicolumn{3}{c||}{S-GTR-T5} & \multicolumn{3}{c|}{ZeroER} & \multicolumn{3}{c|}{S-GTR-T5} \\ & \(k=1\) & \(k=5\) & \(k=10\) & \(k=1\) & \(k=5\) & \(k=10\) & \(t_{p}\) & \(t_{m}\) & \(t_{p}\) & \(t_{m}\) \\ \hline \hline D1 & 8 & 8 & 17 & 11 & 11 & 11 & 2 & 1 & 13 & 1 \\ D2 & 9 & 9 & 17 & 13 & 13 & 13 & 1,728 & 71 & 14 & 3 \\ D3 & 19 & 19 & 36 & 22 & 22 & 23 & - & - & 23 & 4 \\ D4 & 13 & 13 & 28 & 15 & 15 & 16 & 9,595 & 113 & 16 & 9 \\ D5 & 24 & 25 & 63 & 22 & 22 & 23 & 1,291 & 10 & 23 & 12 \\ D6 & 28 & 28 & 68 & 27 & 27 & 28 & - & - & 28 & 16 \\ D7 & 22 & 22 & 49 & 20 & 22 & 21 & 1,599 & 338 & 21 & 13 \\ D8 & 46 & 46 & 46 & 37 & 38 & 39 & - & - & 38 & 8 \\ D9 & 111 & 111 & 270 & 72 & 74 & 77 & - & - & 72 & 10 \\ D10 & 154 & 150 & 371 & 41 & 41 & 42 & - & - & 46 & 96 \\ \hline \end{tabular}
\end{table}
Table 5. Comparison of S-GTR-T5 with (a) DeepBlocker in Blocking, and (b) ZeroER in Unsupervised Matching. All columns are in seconds, but the rightmost one which is in milliseconds. \(t_{p}\) (\(t_{m}\)) stands for preprocessing (matching) time.
### Supervised Matching
Table 6 demonstrates that S-MPNet constitutes the best choice for applications that emphasize time efficiency at the cost of slightly lower effectiveness.
Figure 14. Unsupervised Matching Time (sec) per method. Blue is the time until the highest F1; orange is total time.
Figure 15. Unsupervised Matching \(\delta\) per method. Blue is the \(\delta\) where the best F1 is found and orange is the \(\delta\) where the algorithm terminates.
Its training and prediction times are consistently lower than those of the top-performing model, RoBERTa, by 9% and 7%, respectively, on average. More significant gains in efficiency are achieved by the rest of the SentenceBERT models, S-DistilRoBERTa and S-MiniLM: they reduce RoBERTa's training and testing times by more than 50% in all cases, while their F1 is lower by just 5%, on average. Similarly, DistilBERT reduces RoBERTa's run-times to a half for a 7% reduction in F1. XLNet is by far the slowest model across all datasets, thus underperforming RoBERTa in all respects. The same applies to BERT, albeit to a minor extent, i.e., \(<\)2% with respect to all measures. ALBERT achieves slightly lower training times (by 7%) than RoBERTa at the cost of higher prediction times (by 8%), while its F1 is lower by just 2%, on average. Regarding the static models, on average, they are just 10% and 17% faster than RoBERTa with respect to training and testing time, respectively, despite their very low F1. Overall, we can conclude that the _SentenceBERT models are significantly faster than the BERT-based ones, thus achieving a better trade-off between effectiveness and time efficiency._
## 7. Discussion & Conclusions
Figure 16 summarizes our experimental results on the three ER tasks. The horizontal axis in each diagram corresponds to the effectiveness measure of the respective task, while the vertical one corresponds to the normalized run-time, which is computed by dividing the overall run-time of a model by that of the fastest one (i.e., 1 is assigned to the fastest model). The space formed by these axes illustrates the trade-off between effectiveness and time efficiency, with the top performing model lying closer to (1,1), i.e., the lower right corner. Note that for each model, we have computed its average effectiveness and normalized time across all datasets in Table 2(a).
We observe that we can distinguish the ER tasks into two groups based on the relative performance of the three model types. The first group includes the unsupervised tasks, i.e., Blocking and Unsupervised Matching, where the SentenceBERT models consistently outperform the other two types to a significant extent. The reason is that they are able to distinguish between matching and non-matching entities without fine-tuning their top attention layers. Among them, the differences are minor, on average. Typically, though, S-GTR-T5 excels in effectiveness, but is slower, while S-MiniLM offers a better balance, trading slightly lower effectiveness for slightly higher time efficiency. The second best type includes the static models, where GloVe clearly dominates FastText and Word2Vec, which suffer from significantly higher initialization cost, while being slightly less effective. GloVe is actually the fastest model across all datasets in both ER tasks, thus being ideal for ER applications emphasizing time efficiency at the cost of lower effectiveness. Finally, all BERT-based models consistently underperform the other two types in all respects. Their poor effectiveness stems from the lack of fine-tuning, which causes them to assign low similarity scores to both matching and non-matching pairs alike.
Different patterns are observed in Supervised Matching, where all BERT-based models excel in effectiveness. As expected, DistilBERT sacrifices effectiveness for significantly lower run-time. Note that in this case, we exclusively consider the testing/prediction time per model, because the training time constitutes a one-off cost. S-MPNet also exhibits very high effectiveness, while S-DistilRoBERTa and S-MiniLM emphasize time efficiency. The former dominates DistilBERT, while the latter is the fastest albeit the least effective dynamic model. The static models are less accurate than all dynamic models, while offering no advantage in terms of run-time. Therefore, they should be avoided in this task. Instead, any of the dynamic models can be selected, depending on the requirements of the application at hand. The only exception is XLNet, which is much slower but not more effective than most dynamic models.
In the future, we will extend our analysis to ER datasets with numeric values.
\begin{table}
\begin{tabular}{|c|c|c c|c c|c c|c c|} \cline{2-10} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{DSM1} & \multicolumn{2}{c|}{DSM2} & \multicolumn{2}{c|}{DSM3} & \multicolumn{2}{c|}{DSM4} & \multicolumn{2}{c|}{DSM5} \\ \multicolumn{1}{c|}{} & \(t_{t}\) & \(t_{e}\) & \(t_{t}\) & \(t_{e}\) & \(t_{t}\) & \(t_{e}\) & \(t_{t}\) & \(t_{e}\) & \(t_{t}\) & \(t_{e}\) \\ \hline \hline FT & 851.9 & 4.8 & 100.9 & 0.6 & 1,155.0 & 6.9 & 2,527.3 & 14.8 & 882.1 & 4.7 \\ GE & 847.0 & 4.8 & 101.4 & 0.6 & 1,157.2 & 6.9 & 2,534.3 & 14.7 & 876.9 & 4.7 \\ \hline BT & 1,811.2 & 11.2 & 66.3 & 0.4 & 1,525.9 & 6,261.9 & 15.8 & 1,093.8 & 6.8 \\ AT & 1,700.5 & 1.02 & 62.8 & 0.5 & 1,448.4 & 0.4 & 2,422.1 & 17.0 & 1,026.6 & 7.4 \\ RA & 1,810.8 & 11.2 & 66.2 & 0.4 & 1,548.7 & 9.6 & 2,666.6 & 15.8 & 1,111.3 & 6.9 \\ DT & 915.2 & 5.5 & 34.0 & 0.2 & 779.2 & 4.8 & 1,341.9 & 7.9 & 559.1 & 3.4 \\ XT & 2,920.7 & 24.2 & 92.2 & 0.7 & 2,120.5 & 15.9 & 3,196.5 & 21.0 & 1,423.2 & 10.1 \\ \hline ST & 1,667.5 & 10.5 & 63.4 & 0.4 & 1,475.8 & 8.8 & 2,549.9 & 14.4 & 1,076.8 & 6.3 \\ SA & 828.2 & 5.0 & 31.4 & 0.2 & 716.9 & 4.3 & 1,253.7 & 7.1 & 518.1 & 3.1 \\ SM & 406.6 & 2.1 & 13.4 & 0.1 & 299.2 & 1.7 & 521.6 & 2.7 & 216.6 & 1.2 \\ \hline \end{tabular}
\end{table}
Table 6. Training (\(t_{t}\)) and testing (\(t_{e}\)) times in seconds of all models in Supervised Matching over the datasets in Table 3.
Figure 16. Tradeoff between Effectiveness and Time Efficiency on average, across all datasets in Table 2(a) for (a) Blocking with \(k\)=10 and (b) Unsupervised Matching (wrt best attainable F1) and all datasets in Table 3 for (c) Supervised Matching.
We also intend to enhance our end-to-end, parameter- and learning-free approach to ER with SentenceBERT models, whose performance in Figure 8(d) is remarkable. We will explore ways of replacing its default thresholds with data-driven ones.
|
2305.12000 | Deep Learning Approaches to Lexical Simplification: A Survey | Lexical Simplification (LS) is the task of replacing complex for simpler words in a sentence whilst preserving the sentence's original meaning. LS is the lexical component of Text Simplification (TS) with the aim of making texts more accessible to various target populations. A past survey (Paetzold and Specia, 2017) has provided a detailed overview of LS. Since this survey, however, the AI/NLP community has been taken by storm by recent advances in deep learning, particularly with the introduction of large language models (LLM) and prompt learning. The high performance of these models sparked renewed interest in LS. To reflect these recent advances, we present a comprehensive survey of papers published between 2017 and 2023 on LS and its sub-tasks with a special focus on deep learning. We also present benchmark datasets for the future development of LS systems. | Kai North, Tharindu Ranasinghe, Matthew Shardlow, Marcos Zampieri | 2023-05-19T20:56:22Z | http://arxiv.org/abs/2305.12000v1 |
# Deep Learning Approaches to Lexical Simplification: A Survey
###### Abstract
Lexical Simplification (LS) is the task of replacing complex for simpler words in a sentence whilst preserving the sentence's original meaning. LS is the lexical component of Text Simplification (TS) with the aim of making texts more accessible to various target populations. A past survey Paetzold and Specia (2017) has provided a detailed overview of LS. Since this survey, however, the AI/NLP community has been taken by storm by recent advances in deep learning, particularly with the introduction of large language models (LLM) and prompt learning. The high performance of these models sparked renewed interest in LS. To reflect these recent advances, we present a comprehensive survey of papers published between 2017 and 2023 on LS and its sub-tasks with a special focus on deep learning. We also present benchmark datasets for the future development of LS systems.
## 1 Introduction
LS improves the readability of any given text with the aim of helping vocabulary and literacy development. LS achieves this by replacing complex words in a sentence with simpler alternatives. LS returns a simplified sentence which can be passed to a TS system for further syntactic and grammatical simplification. The replaced complex words are those words which a general or targeted population found to be hard to read, interpret, or understand. Previous LS systems have been designed to simplify complex words for children, second language learners, individuals with reading disabilities or low-literacy Paetzold and Specia (2017). LS therefore provides both developers and users with a degree of personalization that is unattainable through seq2seq or generative TS systems Yeung and Lee (2018); Lee and Yeung (2018).
Deep learning, and latterly, LLM and prompt learning, have revolutionized the way we approach many NLP tasks, including LS. Previous LS systems have relied upon lexicons, rule-based, statistical, n-gram, and word embedding models to identify and then simplify complex words Paetzold and Specia (2017). These approaches would identify a complex word, for example, "_bombardment_" as being in need of simplification and would suggest "_attack_" as a suitable alternative (Figure 1), hereby referred to as a candidate substitution.
State-of-the-art deep learning models, such as BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), GPT-3 Brown et al. (2020), and others, automatically generate, select, and rank candidate substitutions with performances superior to traditional approaches, which rely on pre-existing lexicons, simplification rules, or engineered features Saggion et al. (2022). There have been no surveys published on deep learning approaches for LS. The paper by Paetzold and Specia (2017) is the most recent survey on LS, but it precedes studies that demonstrate the headway made by state-of-the-art deep learning approaches. A broad comprehensive survey on TS was published in 2021 Al-Thanyyan and Azmi (2021). However, this survey likewise does not cover recent advances in the field nor does it focus specifically on LS. This paper therefore builds on the pre-existing literature by providing an updated survey of the latest deep learning approaches for LS and its sub-tasks of substitute generation (**SG**), selection (**SS**), and ranking (**SR**).
## 2 Pipeline
We structure this survey around the main components of the LS pipeline: SG, SS, and SR (Section 3). We also provide an overview of recent datasets (Section 4), and discuss open challenges in LS (Section 5.1). Normally, an LS pipeline starts with complex word identification (**CWI**). However, since it is often considered as a standalone precursor, we refer the reader to North et al. (2022), for a detailed survey on CWI methods.
**Substitute Generation.** SG returns a number, \(k\), of candidate substitutions that are suitable replacements for a previously identified complex word. Usually, an LS system will generate candidate substitutions in the range of \(k\) = [1, 3, 5, or 10], with _top-k_ referring to the most appropriate candidates. These candidate substitutions need to be simpler, hence easier to read, interpret, or understand, than the original complex word. The candidate substitutions also need to preserve the original complex word's meaning, especially in its provided context.
**Substitute Selection.** SS filters the generated _top-k_ candidate substitutions and removes those which are not suitable. For instance, candidate substitutions which are not synonymous to the original complex word, or that are more complex, are often removed.
**Substitute Ranking.** SR orders the remaining _top-k_ candidate substitutions from the most to the least appropriate simplification. The original complex word is then replaced with the most suitable candidate substitution.
### Evaluation Metrics
All sub-tasks of the LS pipeline are evaluated using precision, accuracy, recall, and F1-score. Several additional metrics have also been used: potential, mean average precision (MAP), and accuracy at top-_k_. Potential is the ratio of predicted candidate substitutions for which at least one of the top-_k_ candidate substitutions generated was among the gold labels (Saggion et al., 2022). MAP evaluates whether the returned top-_k_ candidate substitutions match the gold labels as well as whether they have the same positional rank. Accuracy at top-_k_ = [1, 2, or 3] is the ratio of instances where at least one of the candidate substitutions at \(k\) is among the gold labels.
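The sketch below, written to follow the informal definitions above, shows how potential and MAP@k can be computed for lists of ranked candidate substitutions; exact formulations vary slightly between papers, so the code should be read as illustrative rather than as an official evaluation script.

```python
def potential(predictions, gold_labels, k=3):
    """Share of instances where at least one top-k candidate is among the gold labels.
    Accuracy at top-k, as described above, follows the same pattern."""
    hits = sum(1 for preds, gold in zip(predictions, gold_labels)
               if any(p in gold for p in preds[:k]))
    return hits / len(predictions)

def map_at_k(predictions, gold_labels, k=3):
    """Mean average precision over the top-k ranked candidates."""
    total = 0.0
    for preds, gold in zip(predictions, gold_labels):
        hits, ap = 0, 0.0
        for rank, p in enumerate(preds[:k], start=1):
            if p in gold:
                hits += 1
                ap += hits / rank
        total += ap / min(k, len(gold)) if gold else 0.0
    return total / len(predictions)

# Toy example: one instance, ranked candidates vs. gold simplifications.
print(potential([["attack", "assault"]], [{"attack", "strike"}], k=3))  # 1.0
print(map_at_k([["attack", "assault"]], [{"attack", "strike"}], k=3))   # 0.5
```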
## 3 Deep Learning Approaches
Prior to deep learning approaches, lexicon, rule-based, statistical, n-gram, and word embedding models were state-of-the-art for SG, SS, and SR. As previously mentioned, Paetzold and Specia (2017) have provided a comprehensive survey detailing these approaches, their performances, as well as their impact on LS literature. The following sections provide an extension of the work carried out by Paetzold and Specia (2017). We introduce new deep learning approaches for LS and begin our survey of the LS pipeline at the SG phase. The recent developments in the CWI step of the pipeline have been extensively surveyed by North et al. (2022).
### Substitute Generation
In 2017, word embedding models were state-of-the-art for SG. Word embedding models, such as Word2Vec (Mikolov et al., 2013), were used alongside more traditional approaches, such as querying a lexicon, or generating candidate substitutions based on certain rules (Paetzold and Specia, 2017). Word embedding models conducted SG by converting potential candidate substitutions into vectors, hence word embeddings, and then calculating which of these vectors had the highest cosine similarity, or lowest cosine distance, with the vector of the target complex word. These vectors were then converted back into their word forms and were considered the top-k candidate substitutions.
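A minimal sketch of this embedding-based SG step is shown below, assuming gensim and any pre-trained Word2Vec-style vectors; the file name is a placeholder.

```python
# Sketch of embedding-based substitute generation: candidates are simply the
# nearest neighbors of the complex word in the embedding space.
from gensim.models import KeyedVectors

# Placeholder path: any Word2Vec/FastText/GloVe vectors in word2vec format.
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def generate_substitutes(complex_word, k=10):
    # most_similar ranks the vocabulary by cosine similarity to the target word.
    return [word for word, _ in vectors.most_similar(complex_word, topn=k)]

print(generate_substitutes("bombardment", k=5))
```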
**Word Embeddings + LLMs.** Post 2017, word embedding models continued to be implemented for SG. However, they were now combined with the word embeddings produced by LLMs or with an LLM's prediction scores. Alarcon et al. (2021) experimented with various word embedding models for generating Spanish candidate substitutions. They used word embedding models, such as Word2Vec, Sense2Vec (Trask et al., 2015), and FastText (Bojanowski et al., 2016), along with the pre-trained LLM BERT, to generate these word embeddings. It was discovered that a more traditional approach that produced candidate substitutions by querying a pre-existing lexicon outperformed these word embedding models in terms of both potential and recall, yet slightly under-performed them in regards to precision. The traditional approach achieved a potential of 0.898, a recall of 0.597, and a precision of 0.043 on the EASIER dataset (Alarcon et al., 2021).
Figure 1: LS Pipeline. SG, SS, and SR are the main components of LS.
The highest performing word embedding model (Sense2Vec), on the other hand, attained a potential, recall, and precision score of 0.506, 0.282, and 0.056, respectively. Surprisingly, this went against the assumption that word embedding models would have achieved a superior performance given their state-of-the-art reputation demonstrated by Paetzold and Specia (2017). During error analysis, it was found that these word embedding models often produced antonyms of the target complex word as potential candidate substitutions. This is due to how word embedding models calculate word similarity between vectors.
Seneviratne et al. (2022) used a word embedding model and a pre-trained LLM: XLNet Yang et al. (2019), to produce an embedding similarity score and a prediction score for SG. They followed a similar approach conducted by Arefyev et al. (2020). Arefyev et al. (2020) utilized context2vec Melamud et al. (2016) and ELMo Peters et al. (2018) to encode the context of the target complex word to gain a probability distribution of each word belonging to that particular context. They then used this probability distribution to estimate the likelihood, or appropriateness, of a potential candidate substitution replacing the target complex word. This score was used alongside a LLM prediction score from either BERT, RoBERTa, or XLNet, to produce a final list of top-k candidate substitutions. Both Seneviratne et al. (2022) and Arefyev et al. (2020) discovered that their combined approach of using a word embedding model alongside a pre-trained LLM prediction score failed to surpass the performance of using a single pre-trained LLM. For instance, Seneviratne et al. (2022) was outperformed by North et al. (2022) on the TSAR-2022 dataset.
**Masked Language Modeling.** The introduction of pre-trained LLMs also saw the arrival of Masked Language Modeling (MLM) for SG. Przybyla and Shardlow (2020) used LLMs trained on a MLM objective for multi-word LS, whereas Qiang et al. (2020) were the first to use MLM for Spanish SG. MLM has subsequently become a popular approach to SG. Seven out of the 11 system reports submitted to TSAR-2022 (Saggion et al., 2022) described their approach as consisting of a MLM objective.
Known as LSBert, the model introduced by Qiang et al. (2020), used the pre-trained LLM BERT. Sentences were taken from the LS datasets LexMTurkHorn et al. (2014), BenchLS Paetzold and Specia (2016), and NNSeval Paetzold and Specia (2016). Two versions of each sentence were then concatenated, being separated by the [SEP] special token. They were then fed into the LLM. The first sentence was identical to that extracted from the datasets, whereas the second sentence had its complex word replaced with the [MASK] special token. The LLM then attempted to predict the word replaced by the [MASK] special token by taking into consideration its left and right context as well as the prior original sentence. In this way, LLMs provide candidate substitutions with the highest probability (highest prediction score) of fitting into the surrounding context and that are also similar to the target complex word in the original sentence. For the top-k=1 candidate substitution, LSBert achieved F1-scores for SG of 0.259, 0.272, and 0.218 on the three datasets LexMTurk Horn et al. (2014), BenchLS Paetzold and Specia (2016), and NNSeval Paetzold and Specia (2016) respectively. These performances surpassed that of all prior approaches Paetzold and Specia (2017). The previous highest F1-score was achieved by a word-embedding model Paetzold and Specia (2017), which produced F1-scores of 0.195, 0.236, and 0.218 for each dataset, respectively.
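A minimal sketch of this LSBert-style generation step is given below, using Hugging Face Transformers with bert-base-uncased as a stand-in checkpoint; details such as sub-word handling and the additional candidate filtering of the original system are omitted.

```python
# Sketch of LSBert-style SG: feed the original sentence and a masked copy as a
# sentence pair, then take the top-k predictions for the masked position.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def generate_substitutes(sentence, complex_word, k=10):
    masked = sentence.replace(complex_word, tokenizer.mask_token, 1)
    inputs = tokenizer(sentence, masked, return_tensors="pt")  # "sentence [SEP] masked"
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    top_ids = logits[0, mask_pos].topk(k + 5).indices.tolist()
    candidates = [tokenizer.decode([i]).strip() for i in top_ids]
    # Drop sub-word pieces and the original complex word itself.
    return [c for c in candidates
            if not c.startswith("##") and c.lower() != complex_word.lower()][:k]

print(generate_substitutes("The city survived the bombardment.", "bombardment", k=5))
```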
\begin{table}
\begin{tabular}{c c|c|c c c|c} \hline \hline \multicolumn{2}{c|}{**Deep Learning Approaches**} & **ACC** & **ACC@1** & **ACC@3** & **MAP@3** & **Potential@3** & **Paper** \\ \hline SG & SS \& SR & \multicolumn{4}{c|}{TSAR-2022 (EN)} & \multicolumn{1}{c}{} \\ \hline GPT-3-Prompts & GPT-3 & **0.8096** & **0.4299** & **0.6863** & **0.5834** & **0.9624** & (Aumiller and Gertz, 2022) \\ MLM & LLM+Embeddings+Freq & 0.6568 & 0.3190 & 5.3388 & 0.4730 & 0.8766 & (Li et al., 2022) \\ LLM+Prompt & MLM Prediction Score & 0.6353 & 0.2895 & 0.5308 & 0.4244 & 0.8739 & (Vásquez-Rodríguez et al., 2022) \\ \hline SG & SS \& SR & \multicolumn{4}{c|}{TSAR-2022 (ES)} & \multicolumn{1}{c}{} \\ \hline GPT-3-Prompts & GPT-3 & **0.6521** & **0.3505** & **0.5788** & **0.4281** & **0.8206** & (Aumiller and Gertz, 2022) \\ MLM & Embedding+POS & 0.3695 & 0.2038 & 0.3288 & 0.2145 & 0.5842 & (Whistely et al., 2022) \\ LLM+Prompt & MLM Prediction Score & 0.3668 & 0.160 & 0.2690 & 0.2128 & 0.5326 & (Vásquez-Rodríguez et al., 2022) \\ \hline SG & SS \& SR & \multicolumn{4}{c|}{TSAR-2022 (PT)} & \multicolumn{1}{c}{} \\ \hline GPT-3-Prompts & GPT-3 & **0.7700** & **0.4358** & **0.6299** & **0.5014** & **0.9171** & (Aumiller and Gertz, 2022) \\ MLM & MLM Prediction Score & 0.4812 & 0.2540 & 0.3957 & 0.2816 & 0.6871 & (North et al., 2022) \\ MLM & Freq+BinaryClassifier & 0.3689 & 0.1737 & 0.2673 & 0.1983 & 0.5240 & (Wilkens et al., 2022) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The top 3 deep learning approaches across the TSAR-2022 datasets. Best performances in bold.
Before the release of the TSAR-2022 shared-task (Saggion et al., 2022), Ferres and Saggion (2022) introduced a new dataset, ALEXSIS (TSAR-2022 ES), that would later make up (along with an additional English and Portuguese dataset) the TSAR-2022 dataset (Saggion et al., 2022). Using their Spanish dataset, they experimented with a number of monolingual LLMs pre-trained on Spanish data, as well as several multilingual LLMs, such as mBERT and RoBERTa. Ferres and Saggion (2022) adopted the MLM approach used by LSBert. They experimented with the Spanish LLMs BETO (Canete et al., 2020), BERTIN (De la Rosa and Fernandez, 2022), RoBERTa-base-BNE, and RoBERTa-large-BNE (Fandino et al., 2022) for SG. They discovered that their largest pre-trained Spanish LLM, RoBERTa-large-BNE, achieved the greatest SG performance after also removing candidate substitutions that were equal to the complex word (regardless of capitalization or accentuation) or that were fewer than 2 characters long.
North et al. (2022) was inspired by the success of the monolingual LLMs shown by Ferres and Saggion (2022). They likewise tested a range of LLMs for SG with a MLM objective, including the multilingual LLMs mBERT and XLM-R (Conneau et al., 2020), and several monolingual LLMs, including Electra for English (Clark et al., 2020), RoBERTa-large-BNE for Spanish, and BERTimbau (Souza et al., 2020) for Portuguese. Their monolingual LLMs achieved acc@1 scores of 0.517, 0.353, and 0.481 on the English, Spanish, and Portuguese TSAR-2022 datasets, respectively. Whistely et al. (2022) also experimented with similar monolingual LLMs for SG. They used BERT for English, BETO for Spanish, and BERTimbau for Portuguese. Interestingly, their models' performances were lower compared to those of North et al. (2022), despite their Portuguese LS system consisting of the same language model. Whistely et al. (2022) achieved acc@1 scores of 0.378, 0.250, and 0.3074 for English, Spanish, and Portuguese, respectively. This is likely due to the additional SS and SR steps implemented by Whistely et al. (2022) and the lack thereof shown within the LS system provided by North et al. (2022) (Section 3.2).
Wilkens et al. (2022) also used a range of monolingual LLMs for SG. However, they used an ensemble of BERT-like models with three different masking strategies: 1). copy, 2). query expansion, and 3). paraphrase. The copy strategy replicated that of LSBert (Qiang et al., 2020), whereby two sentences were inputted into a LLM concatenated with the [SEP] special token. The first sentence being an unaltered version of the original sentence, and the second sentence having its complex word masked. The query expansion strategy used FastText to generate five related words with the highest cosine similarity to the target complex word. For iteration 2a). of the query expansion strategy, the first sentence was the original unaltered sentence, the second sentence replaced the complex word with one of the suggested similar words produced by FastText, and sentence 3 was the masked sentence. Iteration 2b). of this strategy was the same as iteration 2a)., however, sentence 2 now consisted of all five suggested words. Lastly, the paraphrase strategy generated 10 new contexts for each complex word composed of paraphrases of the original sentence. These new contexts were limited to 512 tokens. The ensembles used for these three masking strategies consisted of BERT and RoBERTa LLMs for English, several BETO LLMs for Spanish, and several BERTimbau LLMs for Portuguese. The paraphrase strategy showed the worst performance with a joint MAP/Potential@1 score of 0.217, whereas the query expansion strategy obtained a MAP/Potential@1 score of 0.528, 0.477, and 0.476 for English, Spanish, and Portuguese, respectively. This surpassed the performance of the paraphrase strategy and the original copy strategy used by LSBert, regardless of the LLMs used.
**Prompt Learning.** Prompt learning has also been used for SG and is currently state-of-the-art (Table 3). Prompt learning involves feeding into an LLM input that is presented in such a way as to provide a description of the task as well as to return a desired output. PromptLS is an example of prompt learning applied to SG. Created by Vasquez-Rodriguez et al. (2022), PromptLS consisted of a variety of pre-trained LLMs fine-tuned on several LS datasets. These fine-tuned LLMs were then presented with four combinations of prompts: a). "a _easier word_ for bombardment is", b). "a _simple word_ for bombardment is", c). "a _easier synonym_ for bombardment is", and lastly, d). "a _simple synonym_ for bombardment is". These prompt combinations were supplied to a RoBERTa LLM fine-tuned on all of the English data extracted from the LexMTurk (Horn et al., 2014), BenchLS (Paetzold and Specia, 2016), NNSeval (Paetzold and Specia, 2016), and CEFR-LS (Uchida et al., 2018) LS datasets. They were also translated and fed into BERTIN fine-tuned on the
Spanish data obtained from EASIER, along with BR-BERTo fine-tuned on all of the Portuguese data taken from SIMPLEX-PB (Hartmann and Aluisio, 2020). Vasquez-Rodriguez et al. (2022) also used these prompts in a zero-shot condition. It was discovered that the fine-tuned LLMs outperformed the zero-shot models in all conditions, by an average increase in performance of between 0.3 and 0.4 across all metrics: acc@1, acc@3, MAP@3, and Precision@3. The prompt combinations that produced the best candidate substitutions were "easier word" for English, "palabra simple" and "palabra facil" for Spanish, and "palavra simples" and "sinonimo simples" for Portuguese.
Prompt learning has likewise been applied to causal language models for SG, such as GPT-3. Aumiller and Gertz (2022) experimented with a variety of different prompts, which they fed into a GPT-3. These prompts were of four types: 1). zero-shot with context, 2). single-shot with context, two-shot with context, 3). zero-shot without context, and 4). single-shot without context. The size of each shot: \(n\), refers to how many times a prompt is inputted into GPT-3. For instance, those shots with context would input a given sentence and then ask the question, "Given the above context, list ten alternative words for \(<\)complex word\(>\) that are easier to understand.", \(n\) number of times. Those without context, however, would input \(n\) times the following:"Give me ten simplified synonyms for the following word: \(<\)complex word\(>\)". Aumiller and Gertz (2022) also combined all types of prompts in an ensemble, generating candidate substitutions from each prompt type and then deciding upon final candidate substations through plurality voting and additional SS and SR steps (Section 3.2). Their ensemble approach outperformed all other prompt types and SG models submitted to TSAR-2022 (Saggion et al., 2022) (Table 3).
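The sketch below only illustrates the two prompt families described above (with and without context); the exact templates, shot construction, and API calls used by Aumiller and Gertz (2022) differ, and the call to the language model is deliberately left abstract, since client libraries change frequently.

```python
# Illustration of prompt construction for prompt-based SG; the strings follow
# the descriptions quoted above, and `call_llm` is a hypothetical placeholder
# for whichever completion API or local model is actually used.
def build_prompt(complex_word, sentence=None, n_candidates=10):
    if sentence is not None:  # zero-/few-shot *with* context
        return (f"Context: {sentence}\n"
                f"Given the above context, list {n_candidates} alternative words "
                f"for \"{complex_word}\" that are easier to understand.")
    # zero-/few-shot *without* context
    return (f"Give me {n_candidates} simplified synonyms "
            f"for the following word: {complex_word}")

prompt = build_prompt("bombardment", "The city survived the bombardment.")
# candidates = call_llm(prompt)  # e.g., an OpenAI or local LLM completion call
print(prompt)
```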
### Substitute Selection and Ranking
Traditional approaches to SS are still implemented post SG. Methods such as POS-tag and antonym filtering, or semantic and sentence-level thresholds, have been used to remove inappropriate candidate substitutions after they have been generated by the above deep learning approaches (Saggion et al., 2022). Nevertheless, the majority of modern deep learning approaches have minimal SS, with SS often being simultaneously conducted during SG or SR. For instance, the metric used to generate the top-k candidate substitutions, be it similarity between word embeddings or a pre-trained LLM's prediction score, tends not to suggest candidate substitutions that are deemed as being inappropriate by other SS methods. Likewise, SR techniques that rank candidate substitutions in order of their appropriateness will in turn move inappropriate simplifications further down the list of top-k candidate substitutions to the point that they are no longer considered.
Word Embeddings: Word embedding models continued to be used for SS without LLMs, despite the arrival of pre-trained LLMs such as BERT. For instance, Song et al. (2020) created a unique LS system that filtered candidate substitutions by applying a semantic similarity threshold, matching only those candidate substitutions with the same POS tag as the target complex word, calculating contextual relevance (a measure of how reasonable and fluent a sentence is after the complex word has been replaced), and using cosine similarity between word embeddings to rank candidate substitutions. They generated word embeddings with Word2Vec and evaluated their model's performance on the LS-2007 dataset (McCarthy and Navigli, 2007). It was found that the use of Word2Vec improved their model's performance, achieving an acc@1 of 0.269. Their second highest performing model, without the use of Word2Vec embeddings, produced an acc@1 of 0.218.
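A minimal sketch of this embedding-based filtering and ranking pattern is given below; the random vectors and the threshold value are purely illustrative stand-ins for trained Word2Vec embeddings and a tuned similarity threshold.

```python
# Cosine-similarity filtering and ranking of candidate substitutions.
# Random vectors stand in for trained Word2Vec embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["bombardment", "attack", "shelling", "bombing", "celebration"]
embeddings = {w: rng.normal(size=50) for w in vocab}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_candidates(target, candidates, threshold=0.0):
    # Keep candidates above a semantic-similarity threshold, then rank by similarity.
    scored = [(c, cosine(embeddings[target], embeddings[c])) for c in candidates]
    kept = [(c, s) for c, s in scored if s >= threshold]
    return sorted(kept, key=lambda x: x[1], reverse=True)

print(rank_candidates("bombardment", ["attack", "shelling", "bombing", "celebration"]))
```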
Neural Regression: Maddela and Xu (2018) created the neural readability ranker (NRR) for SR. Consisting of a feature-extraction layer, a Gaussian-based feature-vectorization layer, and a task-specific output node, the NRR is a deep learning algorithm capable of ranking candidate substitutions based on their perceived complexity. It performs regression: having been trained on the Word Complexity Lexicon (WCL), together with several features and character n-grams converted into Gaussian vectors, it provides a value between 0 and 1 corresponding to the complexity of any given word. It achieves this by conducting pairwise aggregation. For each pair of potential candidate substitutions, the model predicts a value that defines which candidate substitution is more or less complex than the other. A returned positive value indicates that the first candidate substitution is more complex than the second, whereas a negative value indicates that the second candidate substitution is more complex
than the first. This is applied to all combinations of candidate substitutions given a complex word. Each candidate substitution is then ranked in accordance with its comparative complexity against all other potential candidate substitutions. Maddela and Xu (2018) applied their NRR model to the LS-2012 dataset and outperformed prior word embedding techniques for SR. They achieved a Prec@1 of 0.673, whereas the previous state-of-the-art model provided by Paetzold and Specia (2017) achieved a Prec@1 of 0.656.
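The pairwise-aggregation step can be sketched as follows; the pairwise scorer here is a trivial word-length heuristic standing in for the trained regression network, so only the aggregation logic reflects the approach described above.

```python
# Pairwise-comparison ranking: a scorer returns a signed value (positive means
# the first word is more complex), and candidates are ranked by aggregating the
# scores over all ordered pairs.
from itertools import permutations

def pairwise_complexity(w1: str, w2: str) -> float:
    # Stand-in for the trained regression model: longer words count as more complex.
    return float(len(w1) - len(w2))

def rank_by_pairwise(candidates):
    totals = {c: 0.0 for c in candidates}
    for w1, w2 in permutations(candidates, 2):
        totals[w1] += pairwise_complexity(w1, w2)
    # Lower aggregate complexity -> simpler -> ranked first.
    return sorted(candidates, key=lambda c: totals[c])

print(rank_by_pairwise(["bombardment", "attack", "shelling", "raid"]))
```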
Word Embeddings + LLMs: One of the most common approaches to SS and SR involves the use of word embeddings together with LLMs. Seneviratne et al. (2022) filtered and ranked the top-k=20 candidate substitutions based on the same combined score that they used for SG, consisting of their MLM model's prediction score for the generated candidate together with the inner product of the target word's embedding and the embedding of the potential candidate substitution. These top-k=20 candidate substitutions were then subjected to one of three additional ranking metrics. The first ranking metric (CILex_1) ranked candidate substitutions by the cosine similarity between the original sentence and a copy of the original sentence with the candidate substitution in place of its complex word. The second and third ranking metrics made use of dictionary definitions of the target complex word and its candidate substitutions. They calculated the cosine similarity between the embedding of each definition and the embedding of the sentence containing the target complex word. Those with the highest cosine similarities between a). the definition of the target complex word and the definition of the candidate substitution (CILex_2), or b). the definition of the target complex word and the word embedding of the original sentence with the candidate substitution in place of its complex word (CILex_3), were used to determine the rank of each candidate substitution. They discovered that all three metrics produced similar performances on the TSAR-2022 dataset, with CILex 1, 2, and 3 achieving acc@1 scores of 0.375, 0.380, and 0.386, respectively.
Li et al. (2022) used a set of features taken from LSBert combined with what they referred to as an equivalence score. The equivalence score was created to gauge the semantic similarity between a candidate substitution and the complex word more expressively than the cosine similarity between word embeddings. To obtain this equivalence score, they used a RoBERTa LLM, pre-trained with an MLM objective and then trained for natural language inference (NLI) on a multi-genre corpus, which predicts the likelihood of one sentence entailing another. The equivalence score was then the product of the entailment likelihoods obtained in both directions: the sentence containing the candidate substitution entailing the original sentence, and vice versa. Since Li et al. (2022) used the same method of SG as LSBert, having only changed their LLM to RoBERTa, they concluded that their system's superior performance was a consequence of its unique SR. They achieved an acc@1 of 0.659, whereas LSBert attained an acc@1 of 0.598 on the English TSAR-2022 dataset (Saggion et al., 2022).
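The directional-entailment product can be sketched with an off-the-shelf NLI model; the checkpoint name below (`roberta-large-mnli`) is an illustrative choice and not necessarily the one used by Li et al. (2022).

```python
# Equivalence-score sketch: product of the entailment probabilities in both
# directions between the original sentence and the substituted sentence.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"  # illustrative NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
entail_id = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    return probs[0, entail_id].item()

def equivalence_score(original: str, substituted: str) -> float:
    return entailment_prob(original, substituted) * entailment_prob(substituted, original)

original = "The city suffered a heavy bombardment."
substituted = "The city suffered a heavy attack."
print(equivalence_score(original, substituted))
```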
Aleksandrova and Brochu Dufour (2022) ranked candidate substitutions on three metrics: a). grammaticality, b). meaning preservation, and c). simplicity. Grammaticality was calculated by first determining whether the candidate substitution matched the target's POS tag in terms of person, number, mood, tense, and so forth. Candidates that matched on all POS-tag categories were assigned the value 1, or 0 if at least one category did not match. Meaning preservation was determined by using BERTScore to compute cosine similarities between the embeddings of the original sentence and those of the sentence with the target complex word replaced by the candidate substitution. Lastly, simplicity was obtained by using a CEFR vocabulary classifier trained on data from the English Vocabulary Profile (EVP). The data used to train the CEFR classifier was first masked and fed into a pre-trained LLM: BERT. The outputted encodings were then used to train an SVM model, resulting in their CEFR classifier. Their model failed to surpass the baseline LSBert models at TSAR-2022 in terms of acc@1, having achieved a score of 0.544.
MLM Prediction Scores: LS systems have also relied entirely on MLM prediction scores for SS and SR. North et al. (2022) and Vasquez-Rodriguez et al. (2022) adopted this approach. They have no additional SR steps and rank their candidate substitutions by their generated MLM prediction scores. They do, however, apply some basic filtering, with both studies removing duplicates as well as candidate substitutions equal to the complex word. Surprisingly, such minimal SR has been shown to surpass other more technical approaches (Table 3). North et al. (2022) achieved
state-of-the-art performance on the TSAR-2022 Portuguese dataset, whereas Vasquez-Rodriguez et al. (2022) have consistently produced high performances across the English and Spanish TSAR-2022 datasets. Only GPT-3-based models have surpassed these performances (Aumiller and Gertz, 2022) (Table 3).
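As an illustration of this minimal pipeline, the sketch below masks the complex word, takes the MLM's own prediction scores as the ranking, and applies only duplicate/target filtering; `roberta-base` is an illustrative checkpoint rather than the specific models used in the cited work.

```python
# MLM-based substitute generation with minimal SS/SR: candidates are ordered by
# the fill-mask prediction scores; only duplicates and copies of the target are removed.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

def mlm_candidates(sentence: str, complex_word: str, top_k: int = 10):
    masked = sentence.replace(complex_word, fill.tokenizer.mask_token, 1)
    predictions = fill(masked, top_k=top_k + 5)  # small buffer before filtering
    candidates = []
    for p in predictions:
        token = p["token_str"].strip().lower()
        if token and token != complex_word.lower() and token not in candidates:
            candidates.append(token)
    return candidates[:top_k]

print(mlm_candidates("The city suffered a heavy bombardment.", "bombardment"))
```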
## 4 Resources
Since 2017, LS datasets have been created either for all sub-tasks within the LS pipeline or for a specific purpose (Appendix, Table 2). Recent international competitions (shared tasks) have also provided their own LS datasets (*). LS resources are available for multiple languages, predominantly English (EN), Spanish (ES), Portuguese (PT), French (FR), Japanese (JP), and Chinese (ZH).
### English
Personalized-LS: Lee and Yeung (2018) constructed a dataset of 12,000 English words for personalized LS. These words were rated on a five-point Likert scale by 15 native Japanese speakers tasked with judging the complexity of each word. These complexity ratings were then applied to BenchLS, in turn personalizing the dataset for Japanese speakers.
WCL: Maddela and Xu (2018) introduced the Word Complexity Lexicon (WCL), a dataset made up of 15,000 English words annotated with complexity ratings. Annotators were 11 non-native English speakers using a six-point Likert scale.
LCP-2021*: The dataset provided at the LCP-2021 shared task (CompLex) (Shardlow et al., 2020) was developed using crowdsourcing. 10,800 complex words in context were selected from three corpora covering the Bible, biomedical articles, and European Parliamentary proceedings. Their lexical complexities were annotated using a 5-point Likert scale.
SimpleText-2021*: The SimpleText-2021 shared task (Ermakova et al., 2021) introduced three pilot tasks: 1). selecting passages to be simplified, 2). identifying complex concepts within these passages, and 3). simplifying these complex concepts to generate an easier-to-understand passage. Participants were provided with two sources of data: the Citation Network Dataset (DBLP+Citation, ACM Citation network), together with titles extracted from The Guardian newspaper with manually annotated keywords.
TSAR-2022*: TSAR-2022 (Saggion et al., 2022) supplied datasets in English, Spanish, and Portuguese. These datasets contained target words in contexts taken from journalistic texts and Wikipedia articles, along with 10 candidate substitutions (approx. 20 in the raw data) provided by crowdsourced annotators located in the UK, Spain, and Brazil. The candidate substitutions were ranked by their suggestion frequency. The English, Spanish, and Portuguese datasets contained 386, 381, and 386 instances, respectively.
### Datasets in Other Languages
Spanish: The ALexS-2020 shared task (Zambrano and Raez, 2020) included a Spanish dataset consisting of 723 complex words from recorded transcripts. Merejildo (2021) provided the Spanish CWI corpus (ES-CWI), in which a group of 40 native Spanish-speaking annotators identified complex words within 3,887 academic texts. The EASIER corpus (Alarcon et al., 2021) contains 5,310 Spanish complex words in contexts taken from newspapers, with 7,892 candidate substitutions. A smaller version of the corpus is also provided with 500 instances (EASIER-500).
Portuguese: The PorSimples dataset (Aluisio and Gasperin, 2010) consists of extracts taken from Brazilian newspapers. The dataset is divided into nine sub-corpora separated by degree of simplification and source text. The PorSimplesSent dataset (Leal et al., 2018) was adapted from PorSimples and contains strong and natural simplifications of PorSimples's original sentences. SIMPLEX-PB (Hartmann and Aluisio, 2020) provides a selection of features for each of its candidate substitutions.
French: ReSyf contains French synonyms that have been ranked with regard to their reading difficulty using an SVM (Billami et al., 2018). It consists of 57,589 instances with a total of 148,648 candidate substitutions. FrenchLys is an LS tool designed by Rolin et al. (2021). It provides its own dataset containing sentences sampled from a French TS dataset, ALECTOR, and from French schoolbooks. Substitute candidates were provided by 20 French-speaking annotators.
Japanese: The Japanese Lexical Substitution (JLS) dataset (Kajiwara and Yamamoto, 2015) contains 243 target words, each with 10 contexts (2,430 instances in total). Crowd-sourced annotators provided and ranked candidate substitutions. The JLS Balanced Dataset (Kodaira et al., 2016) expanded the previous JLS dataset to make it more representative of different genres and contains 2,010 generalized instances. Nishihara and Kajiwara (2020) created a new dataset (JWCL & JSSL) that expanded the Japanese Education Vocabulary List (JEV). It houses 18,000 Japanese words divided into three levels of difficulty: easy, medium, or difficult.
Chinese: Personalized-ZH (Lee and Yeung, 2018) consists of 600 Chinese words. Each word's complexity was rated by eight learners of Chinese on a 5-point Likert scale. HanLS was constructed by Qiang et al. (2021). It contains 534 Chinese complex words. 5 native-speaking annotators provided and ranked candidate substitutions, with each complex word having on average 8 candidate substitutions.
## 5 Discussion and Conclusion
Since the 2017 survey on LS (Paetzold and Specia, 2017), deep learning approaches have provided new headway within the field. MLM is now the go-to method for SG, with the majority of recent LS studies having employed an MLM objective. The causal language model GPT-3 surpasses the performance of all other approaches when subjected to prompt learning, especially when an ensemble of prompts is taken into consideration (Table 3). The prediction scores of MLM or causal language modeling have replaced various SS and SR techniques. LS systems that employ minimal SS, and no SR apart from ranking their LLM's prediction scores, have outperformed more technical, feature-oriented, and unsupervised ranking methods (Table 3). An exception is the equivalence score (Li et al., 2022), which has been shown to be effective at SR.
Future LS systems will make use of new advances in deep learning. We believe prompt learning and models such as GPT-3 will become increasingly popular, given their state-of-the-art performance at SG. Using an ensemble of various prompts for SS and SR may further advance LS performance. In addition, the creation of new metrics similar to the equivalence score will likewise be beneficial.
### Open Challenges in LS
LS has a number of open research areas that are either unaddressed or for which the current body of work is inconclusive. In this brief section, we conclude this survey by outlining a few key areas for future development of LS research.
Evaluation: The metrics we use to evaluate LS are not perfect (Section 2.1). Automated metrics that condense a wide problem into a single numerical score can harm outcomes with human participants. Development of more faithful resources, as well as direct evaluation with the intended user groups of simplification systems, is a fruitful avenue for future work. This can be done by taking into consideration variation in data annotation, instead of labels produced by aggregating unique annotations as in most currently available datasets.
Explainability: Lexical simplifications are inherently more explainable than sentence simplification, as the operations are applied directly at the lexeme level. However, the decision process on whether to simplify and which word to choose is increasingly hidden behind the black box of a model. Work to explain and interpret these decisions will allow researchers to better understand the opportunities and threats of applying modern NLP techniques to LS research.
Personalization: One model does not fit all. The simplification needs of a language learner, a stroke victim, and a child are each very different. Modeling these needs and using them to personalize LS systems will allow for simplification output better suited to the needs of particular user groups.
Perspectivism: Even within a population with common characteristics, each individual will bring a unique perspective on what and how to simplify. Systems that can adapt their outputs to each user's needs will provide adaptive simplifications that go beyond our current technology. This will, in turn, improve the evaluation of LS models, as previously discussed in this section.
Integration: LS is only one part of the wider simplification puzzle. Integrating LS systems with explanation generation, redundancy removal, and sentence splitting will further accelerate the adoption of automated simplification practices beyond the halls of research, allowing such technology to reach a wider audience. |
2307.03774 | Ricci Reheating Reloaded | A Hubble-induced phase transition is a natural spontaneous symmetry breaking
mechanism allowing for explosive particle production in non-oscillatory models
of inflation involving non-minimally coupled spectator fields. In this work, we
perform a comprehensive characterisation of this type of transitions as a
tachyonic Ricci-heating mechanism, significantly extending previous results in
the literature. By performing $\mathcal{O}(100)$ 3+1-dimensional classical
lattice simulations, we explore the parameter space of two exemplary scenarios,
numerically determining the main timescales in the process. Based on these
results, we formulate a set of parametric equations that offer a practical
approach for determining the efficiency of the heating process, the temperature
at the onset of radiation domination, and the minimum number of e-folds of
inflation needed to resolve the flatness and horizon problems in specific
quintessential inflation scenarios. These parametric equations eliminate the
need for additional lattice simulations, providing a convenient and efficient
method for evaluating these key quantities. | Giorgio Laverda, Javier Rubio | 2023-07-07T18:00:06Z | http://arxiv.org/abs/2307.03774v2 | # Ricci Reheating Reloaded
###### Abstract
A Hubble-induced phase transition is a natural spontaneous symmetry breaking mechanism allowing for explosive particle production in non-oscillatory models of inflation involving non-minimally coupled spectator fields. In this work, we perform a comprehensive characterisation of this type of transitions as a tachyonic Ricci-heating mechanism, significantly extending previous results in the literature. By performing \(\mathcal{O}(100)\) 3+1-dimensional classical lattice simulations, we explore the parameter space of two exemplary scenarios, numerically determining the main timescales in the process. Based on these results, we formulate a set of parametric equations that offer a practical approach for determining the efficiency of the heating process, the temperature at the onset of radiation domination, and the minimum number of e-folds of inflation needed to resolve the flatness and horizon problems in specific quintessential inflation scenarios. These parametric equations eliminate the need for additional lattice simulations, providing a convenient and efficient method for evaluating these key quantities.
## 1 Introduction
The success of inflation is based on its ability to solve the main shortcomings of the hot Big Bang model while providing a mechanism able to generate the primordial density perturbations seeding structure formation. Although numerous theoretical models of inflation have already been excluded by observations, current data sets are still insufficient to identify a particular scenario within the whole inflationary paradigm. Furthermore, there is still a rather limited understanding of how a specific inflationary model should be embedded into a more fundamental particle physics theory. The situation is expected to improve, however, in the next decade, when missions such as the upgraded Simons Observatory [1], the LiteBird satellite [2; 3] or the ground-based CMB Stage 4 program [4] will significantly reduce the uncertainties in the tensor-to-scalar ratio [5]. In this context, understanding the details of the heating stage following the end of inflation [6; 7; 8] becomes of the utmost importance for providing accurate theoretical predictions to be compared with observations.
A minimal Hubble-induced heating mechanism based on the time-dependence of the Ricci scalar was recently put forward in the context of non-oscillatory quintessential inflation scenarios, where the absence of a potential minimum gives rise, rather generically, to a kinetic-dominated post-inflationary era [9; 10] (cf. also Refs. [11; 12; 13] and Ref. [14] for a comprehensive review of quintessential inflation). In this setting, any self-interacting spectator field non-minimally coupled to gravity may potentially undergo a second-order phase transition when the Universe evolves from inflation to kination and the Ricci scalar turns negative. Generally speaking, the transition to the new minima of the potential appearing at large field values involves a tachyonic instability that comes to an end only when the effective mass of the spectator field becomes again globally positive. This symmetry restoration can either take place at the eventual onset of radiation domination or be effectively induced by the non-linear dynamics following particle production. In particular, if the energy lost by the spectator in each oscillation is smaller than the one associated with the motion of the new Hubble-induced minima as the Universe expands, the field distribution will be able to cross back the origin of the potential [9], giving rise to a bimodal distribution that disappears after a few oscillations [10].
The main purpose of this work is to perform a full characterisation of Hubble-induced phase transitions as a tachyonic-heating mechanism, significantly expanding the results in Refs. [10; 15]. By performing a large number \(\simeq\mathcal{O}(100)\) of 3+1-dimensional classical lattice simulations we will explore a wide portion of the parameter space of the theory. In particular, the outputs of our simulations will allow us to numerically characterise the main timescales in the process, namely the initial tachyonic instability, the onset of backreaction effects and the virialisation phase leading to a radiation-like equation-of-state for the spectator field. These numerical results constitute the starting point for obtaining a set of ready-to-use parametric formulas characterising the total energy-density at each timescale, the efficiency of the heating process and the radiation temperature at the onset of radiation domination for all the model parameters in the scan, in the spirit of Ref. [16]. This line of work comes with several benefits. First, the non-linear phenomenology of Hubble-induced phase transitions is investigated comprehensively, without resorting to _ad hoc_ homogeneity assumptions and with a strong emphasis on the interplay between the different model parameters. Second, the simple relations given by the fitting formulas allow us to accurately determine the minimal number of \(e\)-folds of inflation needed to solve the flatness and horizon problems in specific quintessential inflation scenarios, without resorting to additional time-consuming lattice simulations. Third, the high efficiency of the tachyonic process enables us to directly apply our results to a plethora of variations of the model including, for instance, quartic couplings among species potentially leading to energy transfer effects.
This work is organised as follows. In Section 2, we review the basic working assumptions of Hubble-induced phase transitions, focusing for concreteness on a \(\mathds{Z}_{2}\)-symmetric model. After deriving analytical solutions for the tachyonic amplification of fluctuations in the linear regime, we proceed to study the non-linear dynamics of the system using \(3+1\)-dimensional classical lattice simulations. The parametric formulas for the heating efficiency and the radiation temperature following from this procedure are presented in Section 3, where we also describe their impact on the main inflationary observables and the potential effects of non-gravitational interactions with other beyond the Standard Model sectors. Finally, our conclusions are presented in Section 4.
## 2 Spectator field dynamics
The simplest realisation of a Hubble-induced phase transition is described by an action containing an energetically-dominant quintessential scalar field \(\phi\)[14; 17; 18] and a subdominant \(\mathds{Z}_{2}\)-symmetric spectator field \(\chi\) non-minimally coupled to gravity,
\[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{P}^{2}}{2}R+\mathcal{L}_{\phi}-\frac{1}{ 2}\partial_{\mu}\chi\partial^{\mu}\chi-\frac{1}{2}\xi R\chi^{2}-\frac{\lambda }{4}\chi^{4}\right]\,, \tag{1}\]
with \(R\) the Ricci scalar and \(M_{P}=(8\pi G)^{-1/2}=2.4\times 10^{18}\,\mathrm{GeV}\) the reduced Planck mass. The precise form of the inflaton Lagrangian density \(\mathcal{L}_{\phi}\) in this expression will not play a relevant role in the following discussion, since the associated scalar field is only responsible for driving the background evolution of the Universe at early times and, most importantly, for inducing a transition from inflation to a kinetic-dominated epoch. Regarding the spectator field counterpart, it includes a dimensionless coupling constant \(\xi\), only restricted _a priori_ by the self-consistency of the procedure. In particular, the field-dependent contribution to the overall Ricci scalar prefactor is required to be subdominant as compared to the usual Planck mass counterpart, namely \(\xi\chi^{2}\ll M_{P}^{2}\). In this limit, the graviton propagator is well-defined
for all field values of interest, while coinciding approximately with the graviton propagator in General Relativity. Finally, note that, although the phenomenology of Hubble-induced phase transitions can be easily extended to arbitrary monomial potentials [14, 15], we have intentionally restricted ourselves to a marginal quartic self-interaction with dimensionless coupling constant \(\lambda\). This avoids the introduction of additional energy scales beyond the constant Planck mass and the dynamical scalar curvature entering the Einstein-Hilbert term.
In a Friedmann-Lemaitre-Robertson-Walker background \(g_{\mu\nu}=\text{diag}(-1,\,a^{2}(t)\delta_{ij})\) with scale factor \(a(t)\), the Klein-Gordon evolution equation following from the variation of Eq. (1) with respect to \(\chi\) takes the form
\[\ddot{\chi}+3H\dot{\chi}-\frac{1}{a^{2}}\nabla^{2}\chi+\xi R\chi+\lambda\chi^{ 3}=0\,, \tag{2}\]
where dots denote derivatives with respect to the coordinate \(t\). The energetic subdominance of the spectator field allows us to interpret the Ricci scalar in this expression as an effective time-dependent mass for \(\chi\), whose dynamics in a Friedmann-Lemaitre-Robertson-Walker metric is completely determined by the evolution of the inflaton field, namely
\[R=6\left(\dot{H}+2H^{2}\right)=3(1-3w_{\phi})H^{2}\,, \tag{3}\]
with \(H=\dot{a}/a\) the Hubble rate and \(w_{\phi}\) the effective inflaton equation-of-state parameter. In these expressions, the Ricci scalar constitutes a cosmic timekeeping device, changing sign precisely at the transition between inflation (\(w_{\phi}=-1\)) and kination (\(w_{\phi}=1\)) and triggering with it the spontaneous breaking of the \(\mathds{Z}_{2}\) symmetry in the \(\chi\) sector of the theory. Given the time-dependence of the resulting symmetry-breaking potential, the true vacua appearing at large field values migrate monotonically towards zero as the Hubble function decreases with time.
In a generic quintessential-inflation scenario, the transition from inflation to kination takes place on a timescale dictated by the detailed shape of the inflationary potential at the end of inflation. Since we are mainly interested in the production of particles upon spontaneous symmetry breaking, we will assume the crossover stage between these two cosmological epochs to be essentially instantaneous, neglecting with it any particle creation effects prior to the onset of kinetic domination. This assumption allows us to parametrise the evolution of the scale factor and the Hubble rate as
\[a(t)=a_{\text{kin}}[1+3H_{\text{kin}}(t-t_{\text{kin}})]^{1/3}\,,\hskip 28.452756ptH (t)=\frac{H_{\text{kin}}}{1+3H_{\text{kin}}(t-t_{\text{kin}})}\,, \tag{4}\]
with \(a_{\text{kin}}=a(t_{\text{kin}})\) and \(H_{\text{kin}}=H(t_{\text{kin}})\) the associated values at the beginning of kination. In order to remove the Hubble-friction term in Eq. (2), it is also convenient to transform the field and spacetime coordinates to their conformal version
\[Y=\frac{a}{a_{\text{kin}}}\frac{\chi}{\chi_{\text{kin}}}\,,\hskip 28.452756pt \mathbf{y}=a_{\text{kin}}\chi_{\text{kin}}\mathbf{x}\,,\hskip 28.452756ptz=a_{ \text{kin}}\chi_{\text{kin}}\tau\,, \tag{5}\]
where \(\tau\) is the usual conformal time variable (\(d\tau=dt/a\)) and \(\chi_{\text{kin}}=\sqrt{6\xi}H_{\text{kin}}\) is a typical field value associated to the initial curvature of the effective potential around the origin. Taking into account all these changes, the final version of the Klein-Gordon equation for the scalar field \(Y\) takes the form
\[Y^{\prime\prime}-\nabla^{2}Y-M^{2}(z)Y+\lambda Y^{3}=0\,, \tag{6}\]
with
\[M^{2}(z)=\frac{\nu^{2}-1/4}{(z+\nu)^{2}}=(4\nu^{2}-1)\mathcal{H}^{2} \tag{7}\]
an effective time-dependent mass term and
\[\nu=\sqrt{\frac{3\xi}{2}}\,,\hskip 28.452756pt\mathcal{H}(z)=\frac{a^{\prime}}{a}= \frac{1}{2(z+\nu)}\,,\hskip 28.452756pta(z)=a_{\rm kin}\sqrt{1+\frac{z}{\nu}}\,. \tag{8}\]
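For later reference, the background functions above are simple enough to be evaluated directly. The following minimal sketch (with the illustrative normalisation \(a_{\rm kin}=1\)) returns \(\nu\), \(\mathcal{H}(z)\), \(a(z)\) and \(M^{2}(z)\) for a given non-minimal coupling \(\xi\); it is a plain transcription of Eqs. (7)-(8), not part of any simulation code.

```python
# Minimal transcription of the background quantities of Eqs. (7)-(8),
# in program units with the illustrative choice a_kin = 1.
import numpy as np

def background(z, xi, a_kin=1.0):
    nu = np.sqrt(1.5 * xi)                 # nu = sqrt(3 xi / 2)
    H_conf = 1.0 / (2.0 * (z + nu))        # conformal Hubble rate
    a = a_kin * np.sqrt(1.0 + z / nu)      # scale factor during kination
    M2 = (nu**2 - 0.25) / (z + nu)**2      # time-dependent tachyonic mass term, Eq. (7)
    return nu, H_conf, a, M2

# Modes with kappa^2 < M^2(z) are tachyonically enhanced, i.e. up to
# kappa = sqrt(4 nu^2 - 1) * H_conf.
print(background(z=0.0, xi=200.0 / 3.0))   # xi = 200/3 corresponds to nu = 10
```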
From Eq. (1), we can additionally compute the spectator energy-momentum tensor and, in particular, its energy density and pressure [10]
\[\rho_{\chi}(\tau)=\frac{1}{2}\chi^{\prime 2}+\frac{1}{2a^{2}}(\nabla\chi)^{2}+\frac{\lambda}{4}\chi^{4}+3\xi\mathcal{H}^{2}\chi^{2}+6\xi\mathcal{H}\chi\chi^{\prime}-\frac{\xi}{a^{2}}\nabla^{2}\chi^{2}\,, \tag{9}\]
\[p_{\chi}(\tau)=\frac{1}{2}(1-4\xi)\chi^{\prime 2}-\frac{1-12\xi}{6a^{2}}(\nabla\chi)^{2}-\frac{\lambda}{4}\chi^{4}+2\xi\mathcal{H}\chi\chi^{\prime}-\frac{\xi}{3a^{2}}\nabla^{2}\chi^{2}+2\xi\lambda\chi^{4}+\xi\left[\mathcal{H}^{2}+12\left(\xi-\frac{1}{6}\right)\left(\frac{a^{\prime\prime}}{a}+\mathcal{H}^{2}\right)\right]\chi^{2}\,. \tag{10}\]
Note that, due to the non-minimal coupling to gravity, these quantities turn out to depend non-quadratically on the spectator field value and its velocity, being therefore not guaranteed to be positive definite at all times [19; 20; 21; 22]. Although this well-known fact leads to the violation of the weak energy condition within the spectator field sector, the total energy density of the Universe, i.e. that including the dominant inflaton contribution, is still positive definite at all times.
### Linear regime
At this stage, one could be tempted to treat the spectator field \(\chi\) as an homogeneous component alike the runaway inflaton field \(\phi\), reducing the analysis of Hubble-induced phase transitions to the mere integration of the homogeneous equations of motion, as done for instance in Refs. [9; 11; 13]. Despite its simplicity, this naive approach is based on the false premise that the only relevant mode for the spectator field's evolution is the zero mode. On the contrary, as argued in Refs. [23; 15; 24] and shown explicitly in Ref. [10], the existence of spatial gradients plays a crucial role in the formation of topological defects, in the overall energy budget and in determining the effective equation of state of the spectator field (cf. also Refs. [25; 26] and [27; 28] for previous analyses supporting this view). Moreover, the homogeneous-field approximation does not respect the fundamental \(\mathds{Z}_{2}\) symmetry of the model under consideration, requiring _ad hoc_ initial conditions that cannot be sourced by stochastic quantum fluctuations after inflation comes to an end.
Having in mind the above arguments, let us proceed now to the analysis of the spectator field fluctuations. To this end, we notice that immediately after symmetry breaking the non-linear term in the evolution equation (6) plays a rather subdominant role, allowing us to approximate this expression by that for a free scalar field with time dependent mass \(M(z)\), making therefore the problem exactly solvable. As described in detail in Refs. [10; 14; 15], the associated spectator field quantisation proceeds according to the standard procedure in non-trivial backgrounds [29; 30]. In particular, each Fourier mode \(\mathbf{\kappa}=\mathbf{k}/(a_{\rm kin}\chi_{\rm kin})\) of the scalar field \(Y\) is promoted to a quantum operator
\[\hat{Y}(\mathbf{\kappa},z)=f_{\kappa}(z)\hat{a}(\mathbf{\kappa},z_{0})+f_{\kappa}(z)^ {*}\hat{a}^{\dagger}(-\mathbf{\kappa},z_{0}) \tag{11}\]
that satisfies the equal-time commutation relations \([\hat{Y}_{\mathbf{\kappa}}(z),\hat{\Pi}_{\mathbf{\kappa}^{\prime}}(z)]=i\delta(\mathbf{ \kappa}+\mathbf{\kappa}^{\prime})\), where \(\hat{\Pi}_{\mathbf{\kappa}}=d\hat{Y}_{\mathbf{\kappa}}/dz\), \(\hat{a}_{\mathbf{\kappa}}\) and \(\hat{a}_{\mathbf{\kappa}}^{\dagger}\) are the standard annihilation and creation operators and \(f_{\kappa}\) is a set of isotropic mode functions fulfilling the Klein-Gordon equation
\[f_{\kappa}^{\prime\prime}(z)+(\kappa^{2}-M^{2}(z))f_{\kappa}(z)=0\,. \tag{12}\]
To solve this initial-value problem, we will assume the \(\nu\) parameter within the effective square mass \(M^{2}(z)\) to be large enough as to sufficiently separate the scales corresponding to the Hubble horizon and the typical momenta amplified by the tachyonic instability,
\[\mathcal{H}\leq\kappa\leq\sqrt{4\nu^{2}-1}\mathcal{H}\,, \tag{13}\]
thus reducing potential horizon-reentry effects [15]. Given this requirement and assuming vacuum initial conditions \(f_{\kappa}(z_{0})=1/\sqrt{2\kappa}\) and \(f_{\kappa}^{\prime}(z_{0})=-i\sqrt{\kappa/2}\), the Klein-Gordon equation (12) admits a solution [15]
\[f_{\kappa}(z)=\sqrt{z+\nu}\left[\alpha_{\kappa}\mathcal{J}_{\nu}(\kappa(z+\nu))-\beta_{\kappa}\mathcal{Y}_{\nu}(\kappa(z+\nu))\right]\,, \tag{14}\]
with \(\mathcal{J}_{\nu}\) and \(\mathcal{Y}_{\nu}\) the Bessel functions of the first and second kind, and the quantities \(\alpha_{\kappa}\), \(\beta_{\kappa}\), \(\delta\) defined as

\[\alpha_{\kappa} =\mathcal{Y}_{\nu}(\kappa\nu)\delta\,,\hskip 28.452756pt\beta_{\kappa}=\mathcal{J}_{\nu}(\kappa\nu)\delta-\frac{f(0,\kappa)}{\sqrt{\nu}\mathcal{Y}_{\nu}(\kappa\nu)}\,, \tag{15}\]
\[\alpha_{\kappa} =\mathcal{Y}_{\nu}(\kappa\nu)\delta\,,\hskip 28.452756pt\beta_{ \kappa}=\mathcal{J}_{\nu}(\kappa\nu)\delta-\frac{f(0,\kappa)}{\sqrt{\nu} \mathcal{Y}_{\nu}(\kappa\nu)}\,, \tag{15}\] \[\delta =\frac{\pi f(0,\kappa)}{4\sqrt{\nu}}\left[1-2\nu+2\nu\left(\kappa \frac{\mathcal{Y}_{\nu-1}(\kappa\nu)}{\mathcal{Y}_{\nu}(\kappa\nu)}-\frac{f ^{\prime}(0,\kappa)}{f(0,\kappa)}\right)\right]\,. \tag{16}\]
This solution consists of a rapidly growing mode (\(\mathcal{J}\)) and a decaying mode (\(\mathcal{Y}\)). By considering only the former, one can easily obtain the following approximated expression for the amplitude of the spectator field mode functions in the large-\(\nu\) limit [15],
\[|f_{\kappa}(z)|^{2}\approx\frac{1}{\kappa}\left(\frac{2\nu-1}{4\sqrt{2\nu}} \right)^{2}\left(\frac{z}{\nu}+1\right)^{2\nu+1}\exp\left[-\frac{\kappa^{2}( \nu+z)^{2}}{2(\nu+1)}\right]\, \tag{17}\]
which, for highly-symmetric spherical configurations, leads to a two-point correlation function
\[\zeta(r,z)=\int\frac{d^{3}\kappa}{(2\pi)^{3}}\,e^{i\mathbf{\kappa}\cdot\mathbf{y}}\,| f_{\kappa}(z)|^{2}\simeq\zeta(0,z)\,G_{1}(\kappa_{*}r)\,, \tag{18}\]
with root mean-square perturbation (rms)
\[\zeta(0,z)\equiv Y_{\text{rms}}^{2}(z)=\left(\frac{2\nu-1}{8\pi\nu}\right)^{2 }\left(1+\frac{z}{\nu}\right)^{2\nu+1}\kappa_{*}^{2}(z)\,, \tag{19}\]
and shape function
\[G_{1}(\kappa_{*}r)\equiv\sqrt{\frac{\pi}{2}}\frac{1}{\kappa_{*}r}\exp\left(- \frac{1}{2}\kappa_{*}^{2}r^{2}\right)\text{erfi}\left(\frac{\kappa_{*}r}{\sqrt {2}}\right)\,. \tag{20}\]
Here erfi stands for the imaginary error function and \(\kappa_{*}(z)\equiv 2\sqrt{\nu+1}\,\kappa_{\text{min}}(z)\) is a typical momentum scale.
### Non-linear regime
As quantum fluctuations grow during the tachyonic phase, the previously-neglected quartic self-interaction term in Eq. (6) becomes increasingly important, up to the point of completely halting the tachyonic growth of perturbations and marking the beginning of the oscillations of the spectator field. In order to study this highly non-linear regime, we will resort in what follows to fully-fledged classical lattice simulations in \(3+1\) dimensions. To this end, we will make use of a modified version of the publicly-available \(\mathcal{C}osmo\mathcal{L}attice\) code [16, 31], designed to evolve interacting scalar fields in an expanding background. In particular, we modified the default lattice implementation of the Klein-Gordon equation in this numerical tool to account for the Hubble-dependent mass term in Eq. (2), introducing also related definitions of the energy density and pressure in Eqs. (9) and (10).
In all our simulations, the lattice parameters are chosen to ensure the stability and reliability of the output. In particular:
1. The comoving grid size \(L\) and the number of lattice points per dimension \(N\) are selected in such a way that all relevant modes are always well within the associated infrared (IR) and ultraviolet (UV) resolution in momentum space, \[\kappa_{\text{IR}}=\frac{2\pi}{L}\,,\qquad\qquad\kappa_{\text{UV}}=\frac{\sqrt {3}}{2}N\kappa_{\text{IR}}\,.\] (21) In particular, these quantities are set to properly cover the tachyonic band in Eq. (13) as well as other modes subsequently enhanced by rescattering effects (\(\kappa_{\text{IR}}=\mathcal{H}\), \(\kappa_{\text{UV}}\gg\sqrt{4\nu^{2}-1}\mathcal{H}\)). Since the mode excitation depends mainly on the non-minimal coupling parameter \(\nu\), we set \(N=128\) for \(\nu<10\), \(N=256\) for \(10\leq\nu<25\) and \(N=512\) for \(\nu\geq 25\). In order to verify that our results do not depend on the lattice size, we perform when possible a few test simulations with increased lattice size.
2. The time-step variable is chosen according to the stability criterion \(\delta t/\delta x\ll 1/\sqrt{d}\)[31], with \(d=3\) the number of spatial dimensions and \[\delta x=\frac{2\pi}{N\,\kappa_{\text{IR}}}\Bigg{|}_{z=0}=\frac{4\pi\,\nu}{N}\] (22) the length of the side of a lattice cell. More specifically, we set \(\delta t=0.1\) for \(\nu\geq 10\) and \(\delta t=0.01\) for \(\nu<10\).
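The two criteria above translate into a handful of arithmetic relations. The snippet below is a standalone bookkeeping sketch of this setup (not a \(\mathcal{C}osmo\mathcal{L}attice\) configuration), fixing \(\kappa_{\rm IR}\) to the initial conformal Hubble rate \(\mathcal{H}(0)=1/(2\nu)\) as described above.

```python
# Sketch of the lattice-resolution criteria of Eqs. (21)-(22), with N and the
# time step chosen as quoted in the text and kappa_IR = H_conf(0) = 1/(2*nu).
import numpy as np

def lattice_setup(nu):
    N = 128 if nu < 10 else (256 if nu < 25 else 512)
    kappa_IR = 1.0 / (2.0 * nu)                   # IR resolution = H_conf at z = 0
    kappa_UV = 0.5 * np.sqrt(3.0) * N * kappa_IR  # UV resolution, Eq. (21)
    L = 2.0 * np.pi / kappa_IR                    # comoving box size
    dx = 4.0 * np.pi * nu / N                     # lattice spacing, Eq. (22)
    dt = 0.1 if nu >= 10 else 0.01
    assert dt / dx < 1.0 / np.sqrt(3.0)           # stability criterion in d = 3
    return dict(N=N, kappa_IR=kappa_IR, kappa_UV=kappa_UV, L=L, dx=dx, dt=dt)

print(lattice_setup(nu=10))
```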
Given the fixed power-law background expansion in (4), a symplectic 4th order Velocity-Verlet evolver guarantees stability and precision of the numerical solutions when the conservation of energy cannot be explicitly checked. In agreement with the overall picture developed in the previous section, the initial conditions for our lattice simulations are set as \(\chi(0)=\chi^{\prime}(0)=0\), with fluctuations over this homogeneous background included as Gaussian random fields, as done customarily for systems with short classicalisation times like the one under consideration [10]. The resulting evolution is therefore deterministic up to a randomly-generated initial seed. A robust (and time-consuming) approach would require to perform several realisations of the same simulation, averaging subsequently over the final output to make sure that the random initial conditions do not influence the dynamics. However, because of the computational resources at our disposal, we choose to keep the base seed constant in all our simulations, making them exactly comparable. Since the system loses memory of
the initial conditions soon after the development of the tachyonic instability, this choice does not influence the overall macroscopic evolution, as we have explicitly checked by performing a few simulations with different base seeds.
### The timescales of heating
In this section, we present the main results of our extensive parameter scanning in the \((\nu,\lambda)\) plane, obtained by performing repeated 3+1-dimensional classical lattice simulations with \(\lambda\in[10^{-6},\,10^{-1}]\) and \(\nu\in[5,\,35]\). The domain of \(\nu\) is set to avoid sizeable horizon-reentry effects (\(\nu\geq 5\)) while ensuring that all relevant modes are covered throughout the evolution (\(\nu\leq 35\)). As will become clear in what follows, the main timescales in the evolution of the system for \(\nu>35\) can be inferred from simulations with smaller non-minimal couplings. The range of \(\lambda\) is chosen to ensure perturbativity (\(\lambda<1\)) while partially covering the typical values appearing in Standard Model settings (\(10^{-6}\lesssim\lambda\lesssim 10^{-1}\)), namely those of the Higgs self-coupling at energies around \(10^{10}-10^{16}\) GeV [32; 33; 34]. In principle, the requirements of energetic subdominance of the spectator field and of sub-Planckian contribution to the reduced Planck mass set boundaries on the parameter space as well. However, since these boundaries depend on the scale of kination \(H_{\rm kin}\), we can proceed independently of its exact value and work with dimensionless quantities such as \(\chi(z)/H_{\rm kin}\) and \(\rho(z)/H_{\rm kin}^{4}\), as we did in the previous section through \(\chi_{\rm kin}\). The corrections to the Planck mass and the energetic subdominance of the spectator field will be discussed _a posteriori_.
A typical output of our simulations is presented in Fig. 1, where we display the evolution of the spectator-field energy density, separated into its kinetic (\(K\)), gradient (\(G\)), potential (\(V\)) and interaction (\(I\)) contributions,
\[K=\frac{1}{2}\chi^{\prime 2}\,,\ \ \ \ G=\frac{1}{2a^{2}}(\nabla\chi)^{2}\,,\ \ \ \ V=\frac{\lambda}{4}\chi^{4}\,,\ \ \ \ I=3\xi\mathcal{H}^{2}\chi^{2}+6\xi\mathcal{H}\chi\chi^{\prime}-\frac{\xi}{a^{2}}\nabla^{2}\chi^{2}\,. \tag{23}\]
Note that the interaction term \(I\) contains a total-divergence piece that averages to zero once integrated over the full lattice volume with periodic boundary conditions [35; 36]. As observed in the figure, the non-minimal coupling to gravity leads to a rapid growth of fluctuations that comes to an end when the non-linearities associated to the quartic self-interaction term become relevant. The _backreaction time_\(z_{\rm br}\) can be identified with the first time at which the second time-derivative of the spectator field vanishes, \(Y^{\prime\prime}(z)=0\), which is itself related to the time at which the effective frequency of the field's oscillations vanishes,
\[\left(-\nabla^{2}Y+\lambda Y^{3}\right)\Big{|}_{z_{\rm br}}=M^{2}(z)Y\Big{|} _{z_{\rm br}}\,. \tag{24}\]
Contrary to other choices in the literature, this definition can be conveniently applied to both analytical and numerical results, enabling their direct comparison. 1 In particular, due to the stochastic nature of the lattice approach, we choose to consider the lattice-averaged amplitude of the fluctuations as a good indicator of the overall dynamical evolution, comparing the time at which the root-mean-squared \(\langle Y(z)^{\prime}\rangle_{\rm rms}\) reaches its maximum value to the analytical definition of \(z_{\rm br}\) following from
Footnote 1: One could consider other definitions of the backreaction timescale based on the amplitude of the scalar field’s oscillations, on the evolution of the effective mass or on the different contributions to the total energy density (see for example Refs. [37; 38]). We have verified that, due to the fast growth of tachyonic perturbations, such alternative definitions translate into negligible differences within our numerical results.
\[\kappa_{*}^{2}+\lambda Y^{2}(z_{\rm br})=M^{2}(z_{\rm br})\,, \tag{25}\]
with \(\kappa_{*}=2\sqrt{2\nu+1}{\cal H}\) the typical momentum scale entering the analytical two-point correlation function in Eq. (18).
The behaviour of the backreaction time \(z_{\rm br}\) following from the analytical expression (25) is displayed in the upper panel of Fig. 2 as a function of the model parameters in the scan. We observe that, due to the fast-growth of tachyonic modes upon symmetry breaking, this quantity is roughly comparable with the duration of the first semi-oscillation, \(z_{\rm br}\approx{\cal O}(10)\). The explosive enhancement of fluctuations reaches, however, a saturation point at large \(\nu\), with different lines approaching an almost constant value in one-to-one correspondence with the choice of \(\lambda\). Consequently, for non-minimal couplings \(\nu\gtrsim 20\), the difference in backreaction times at constant \(\lambda\) becomes completely negligible. We also recall here that our analytical estimate of the backreaction scale relies on the large-\(\nu\) limit, a factor that can contribute to the small discrepancies in the interval \(5\leq\nu\leq 10\).
From the data points in Fig. 2 we can additionally derive fitting formulas for the amplitude of the spectator field at backreaction time
\[\langle\chi^{2}_{\rm br}(\lambda,\nu)\rangle_{\rm rms}=4H_{\rm kin}^{2}\,\exp \left(\alpha_{1}+\alpha_{2}\nu+\alpha_{3}\ln\nu\right)\, \tag{26}\]
and the total energy of the spectator field at that moment,
\[\rho^{\chi}_{\rm br}(\lambda,\nu)=16H_{\rm kin}^{4}\,\exp\left(\beta_{1}+ \beta_{2}\,\nu+\beta_{3}\ln\nu\right)\,. \tag{27}\]
Figure 1: Evolution of the volume-averaged kinetic (\(K\)), gradient (\(G\)), potential (\(V\)) and interaction (\(I\)) contributions in (23) to the total energy density \(\rho^{\chi}\) for exemplary values \(\nu=10\), \(\lambda=10^{-4}\). Each energy component is multiplied by \(a^{4}\) to highlight asymptotic radiation-like behaviours. In order to facilitate the comparison, we display the absolute value of the interaction term, since this is not positive definite at all times. The timescales identifying the backreaction time \(z_{\rm br}\) and the virialisation time \(z_{\rm rad}\) for this specific case are indicated at the top of the image.
The dependence on the quartic self-coupling parameter \(\lambda\) is encoded in the coefficients
\[\alpha_{1}(\lambda)=-4.92+0.74\,n\,,\qquad\alpha_{2}(\lambda)=-0.04-0.02\,n\,, \qquad\alpha_{3}(\lambda)=2.54+0.61\,n\,, \tag{28}\]
and
\[\beta_{1}(\lambda)=-7.03-0.56\,n\,,\qquad\beta_{2}(\lambda)=-0.06-0.04\,n\,, \qquad\beta_{3}(\lambda)=5.15+1.10\,n\,, \tag{29}\]
which are all simple first-order polynomials in \(n=-\log(\lambda)\). These formulas are displayed as black dashed lines in the middle and lower panels of Fig. 2. Note that the mild dependence on the quartic self-coupling allows us to obtain a reliable fit on a logarithmic interval for \(\lambda\) that spans six orders of magnitude.
Figure 2: Parametric dependence of the backreaction timescale \(z_{\rm br}\) (upper panel), of the amplitude of the scalar field fluctuations \(\langle\chi^{2}_{\rm br}\rangle_{\rm rms}\) (middle panel) and of the total energy at backreaction \(\rho^{\chi}_{\rm br}\) (lower panel) on the model parameters \(\nu\) and \(\lambda\), as indicated in the figure. In the upper panel, dashed coloured lines correspond to the analytical curve obtained from Eq. (25). In the middle and lower panels, solid lines are derived from the numerical output of lattice simulations while the black dashed lines correspond to the fitting formulas (26) and (27).
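For convenience, the two fits can be transcribed directly into code. In the sketch below the logarithm in \(n=-\log\lambda\) is taken to be base-10 (the natural reading for a coupling scanned over decades, although the base is an assumption of this transcription), and the results are expressed in units of \(H_{\rm kin}^{2}\) and \(H_{\rm kin}^{4}\).

```python
# Ready-to-use transcription of the backreaction-time fits, Eqs. (26)-(29).
# n = -log10(lambda) is assumed; outputs are in units of H_kin^2 and H_kin^4.
import numpy as np

def backreaction_fits(lam, nu):
    n = -np.log10(lam)
    a1, a2, a3 = -4.92 + 0.74 * n, -0.04 - 0.02 * n, 2.54 + 0.61 * n
    b1, b2, b3 = -7.03 - 0.56 * n, -0.06 - 0.04 * n, 5.15 + 1.10 * n
    chi2_br = 4.0 * np.exp(a1 + a2 * nu + a3 * np.log(nu))   # <chi_br^2>_rms / H_kin^2
    rho_br = 16.0 * np.exp(b1 + b2 * nu + b3 * np.log(nu))   # rho_br^chi / H_kin^4
    return chi2_br, rho_br

print(backreaction_fits(lam=1e-4, nu=10))
```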
As mentioned previously, the self-consistency of our procedure requires the explosive tachyonic enhancement of spectator field fluctuations to fulfil the condition on negligible contributions to the reduced Planck mass. The parametric formula (26) allows us to estimate the peak amplification of the scalar field and with it the effective change in \(M_{P}^{2}\) at the level of the model's Lagrangian. Figure 3 shows with solid lines the 10% threshold for different choices of the energy scale at the beginning of kination. Given the self-similar evolution of the lines representing \(\langle\chi_{\rm br}^{2}\rangle_{\rm rms}\) in Fig. 2, we can extrapolate the maximal amplitude even for \(\lambda<10^{-6}\), thanks to the parametric dependence in (26). The red rectangle indicates the actual region explored in our scanning, which confirms that our setup is fully consistent with a subdominant scalar field for \(H_{\rm kin}\leq 10^{12}\,\)GeV.
For \(z>z_{\rm br}\), the system enters a highly-oscillatory regime driven initially by the Hubble-induced contributions (cf. Fig. 1). The impact of these time-dependent terms becomes, however, completely negligible at later times, where the steady flow of energy from the IR to the UV part of the spectrum effectively virializes the system [10], thus approaching an asymptotic
Figure 3: Correction to the squared Planck mass given by the peak mode enhancement at backreaction \(\langle\chi_{\rm br}^{2}\rangle_{\rm rms}\) as a function of the model parameters. The coloured lines set the contours for a 10% correction at different values of the Hubble scale at the onset of kination. The grey shading indicates the duration of the tachyonic phase in \(e\)-folds \(N_{\rm br}=1/2\ln(1+z_{\rm br}/\nu)\). The dashed red rectangle delimits the range explored in our extensive scanning of the parameter space.
energy configuration with \(\langle K\rangle\approx\langle G\rangle+2\langle V\rangle\). This condition provides a natural lattice definition for the _radiation time_\(z_{\rm rad}\) at which the spectator field starts behaving as a radiation fluid, namely
\[\left\langle\frac{\langle K(z_{\rm rad})\rangle}{\langle G(z_{\rm rad})\rangle+2 \langle V(z_{\rm rad})\rangle}\right\rangle_{t}=1\pm 0.01\,, \tag{30}\]
with the subscript \(t\) denoting a time-average over several oscillations of the scalar field. Note that this indirect definition of \(z_{\rm rad}\) holds as long as the contribution of the non-minimal coupling terms is negligible, coinciding in that limit with the usual condition \(w_{\chi}(z_{\rm rad})=1/3\), as can be easily inferred from Eqs. (9) and (10). Fig. 1 shows that the condition (30) is first fulfilled at \(z_{\rm rad}\approx{\cal O}(100)\), when the energy stored in the non-minimal interaction is at least one order of magnitude smaller than the kinetic, gradient and potential counterparts.
The specific dependence of the radiation time \(z_{\rm rad}\) on the model parameters in the scan is displayed in the top panel of Fig. 4. In deriving these results we have checked that the contribution of the non-minimal coupling becomes subleading as the original symmetry of the theory is restored and the Hubble function decreases with time. This translates into the condition \(I<0.1\times K\), \(G\), \(V\) which is satisfied by all the data points in the figure. For small values of \(\nu\), the virialisation process is completed soon after the non-linear regime sets in. At that stage, the spectator field is still undergoing large oscillations, making the precise determination of \(z_{\rm rad}\) more difficult and explaining the irregular features at \(\nu<15\). We note also that, due to a stronger rescattering of modes, \(z_{\rm rad}\) is generally shorter for higher values
Figure 4: Parametric dependence of the radiation timescale \(z_{\rm rad}\) (upper panel) and of the total energy at completion of virialisation \(\rho_{\rm rad}^{\chi}\) (lower panel) on the model parameters \(\nu\) and \(\lambda\), as indicated in the figure. Solid coloured lines correspond to the numerical results obtained from the full set of lattice simulations. In the lower panel, black dashed lines correspond to the fitting formula (33).
of \(\lambda\).
The above results allow us to derive fitting formulas for the timescale \(z_{\rm rad}\) as a function of the model parameters. For all values of \(\lambda\), the behaviour of \(z_{\rm rad}\) deduced from Fig. 4 is linear in \(\nu\), namely
\[z_{\rm rad}(\lambda,\nu)=\gamma_{1}+\gamma_{2}\,\nu\,, \tag{31}\]
with
\[\gamma_{1}(\lambda)=33.63+15.02\,n+0.22\,n^{2}\,,\hskip 28.452756pt\gamma_{2}( \lambda)=7.91-0.01\,n+0.02\,n^{2}\,, \tag{32}\]
second-order polynomials in \(n=-\log(\lambda)\). We can additionally derive a fitting formula for the total energy of the spectator field at \(z_{\rm rad}\), namely
\[\rho^{\chi}_{\rm rad}(\lambda,\nu)=16H_{\rm kin}^{4}\,\exp\left(\delta_{1}+ \delta_{2}\,\nu+\delta_{3}\ln\nu\right)\,, \tag{33}\]
with
\[\delta_{1}(\lambda)=-11.10-0.06\,n\,,\hskip 14.226378pt\delta_{2}(\lambda)=-0.0 4-0.03\,n\,,\hskip 14.226378pt\delta_{3}(\lambda)=3.62+0.87\,n\,. \tag{34}\]
This is displayed in the bottom panel of Fig. 4 as black dashed lines. By comparing the bottom panels of Fig. 2 and Fig. 4, we notice that the total energy available at the end of the virialisation process is roughly a constant fraction of the energy produced during the tachyonic phase. In particular, the fitting formulas (27) and (33) tell us that \(\rho^{\chi}_{\rm rad}(\lambda,\nu)\sim 10^{-4}\times\rho^{\chi}_{\rm br}( \lambda,\nu)\).
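As with the backreaction quantities, these fits translate directly into a short function (again assuming base-10 logarithms in \(n=-\log\lambda\)), which can be combined with the backreaction sketch above.

```python
# Transcription of the radiation-time fits, Eqs. (31)-(34), with n = -log10(lambda).
import numpy as np

def radiation_fits(lam, nu):
    n = -np.log10(lam)
    g1 = 33.63 + 15.02 * n + 0.22 * n**2
    g2 = 7.91 - 0.01 * n + 0.02 * n**2
    d1, d2, d3 = -11.10 - 0.06 * n, -0.04 - 0.03 * n, 3.62 + 0.87 * n
    z_rad = g1 + g2 * nu                                     # Eq. (31)
    rho_rad = 16.0 * np.exp(d1 + d2 * nu + d3 * np.log(nu))  # rho_rad^chi / H_kin^4
    return z_rad, rho_rad

print(radiation_fits(lam=1e-4, nu=10))
```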
At this stage, it is instructive to compare our findings with the results of a homogeneous treatment, where the system dynamics gives rise to an effective equation-of-state parameter \(w_{\chi}=1/3\) soon after the onset of the oscillatory regime [9, 10]. As apparent in Fig. 1, the presence of spatial inhomogeneities is certainly not a negligible effect, with the energy density in gradients accounting for a sizeable fraction of the total energy density and generically exceeding the potential energy contribution. This is confirmed in Fig. 5, where we display the gradient-to-total energy ratio \(G/\rho^{\chi}\) computed at the radiation time \(z_{\rm rad}\) for all the model parameters in the scan. Since the initial tachyonic production is more effective at sourcing spatial inhomogeneities, this ratio is generically larger for larger values of \(\nu\), leading to gradient contributions of up to 40% of the total energy density. The kink feature visible at \(\nu=25\) is a mild numerical effect due to the change of lattice size, which slightly modifies the relative importance of gradients.
Figure 5: Ratio of the energy density stored in gradients and the total energy density of the spectator field at \(z_{\rm rad}\), as a function of the model parameters \(\nu\) and \(\lambda\).
## 3 Heating efficiency and temperature
After reaching a relativistic equation of state at \(z_{\rm rad}\), the spectator field's energy density evolves proportionally to \(a^{-4}\), with the associated power spectrum slowly approaching a thermal distribution [39; 40; 41]. From there on, the evolution of the Universe's energy budget can be studied analytically via the so-called _heating efficiency_ introduced in Ref. [42]. The most straightforward definition of this quantity is based on the ratio of the energy density of the spectator field _at the onset of kination_ to that of the inflaton component, \(\Theta_{\rm ht}=\rho^{\psi}(a_{\rm kin})/\rho^{\phi}(a_{\rm kin})\), which is valid in the case of instantaneous production of relativistic daughter fields \(\psi\) at the end of inflation [42; 14]. However, in the present model the tachyonic production of \(\chi\) particles is completed in about one \(e\)-fold after the onset of kination, taking another \(e\)-fold for the spectator field to reach a radiation-like equation of state. Therefore, it is more convenient to define the heating efficiency _at the onset of the radiation-like behaviour_, namely
\[\Theta_{\rm ht}\equiv\frac{\rho^{\chi}(a_{\rm rad})}{\rho^{\phi}(a_{\rm rad})}\,, \tag{3.1}\]
with \(\rho^{\phi}(a)=3M_{P}^{2}H_{\rm kin}^{2}(a/a_{\rm kin})^{-6}\) the energy density of the inflaton field in the kinetic dominated regime. Note that, since \(\Theta_{\rm ht}|_{\rm rad}=\Theta_{\rm ht}|_{\rm kin}\,(a_{\rm rad}/a_{\rm kin})^{2}\), it is always possible to switch between the two alternative definitions. By inserting the parametric formulas (2.31) and (2.33) in Eq. (3.1) we also obtain a parametric expression for \(\Theta_{\rm ht}\),
\[\Theta_{\rm ht}(\lambda,\nu)=\frac{16}{3}\left(\frac{H_{\rm kin}}{M_{P}}\right)^{2}\left(\frac{\nu+\gamma_{1}+\gamma_{2}\,\nu}{\nu\,a_{\rm kin}^{2}}\right)^{3}\exp\left(\delta_{1}+\delta_{2}\,\nu+\delta_{3}\ln\nu\right)\,, \tag{3.2}\]
Figure 6: Parametric dependence of the heating efficiency \(\Theta_{\rm ht}\) on the model parameters \(\lambda\) and \(\nu\) for a fiducial energy scale of kination \(H_{\rm kin}=10^{11}\) GeV. Solid lines correspond to the values computed from the simulations scanning of the parameter space, while the dashed lines correspond to the fitting formula (3.2).
showing explicitly its dependence on the model parameters. As displayed in Fig. 6 for the specific case of \(H_{\rm kin}=10^{11}\) GeV, the computed values fulfil the big bang nucleosynthesis bound \(\Theta_{\rm ht}\gtrsim 10^{-16}\) derived in Ref. [14]. Higher values of this quantity correspond to longer and stronger tachyonic phases following from large-\(\nu\) and small-\(\lambda\) regimes, as demonstrated by the tight correlation between \(\Theta_{\rm ht}(\lambda,\nu)\), \(\rho_{\rm rad}(\lambda,\nu)\) and \(\rho_{\rm br}(\lambda,\nu)\) in Figs. 2, 4 and 6.
The expression for the heating efficiency in Eq. (3.2), together with Eqs. (2.26), (2.27), (2.31) and (2.33), constitutes the central result of this work. The backreaction and radiation timescales mark the interval where the complicated non-linear dynamics has to be solved with numerical simulations. The initial tachyonic phase is well captured by the analytical solution to the mode equation in (2.14). The parametric formulas for \(\rho_{\rm rad}^{\chi}\) and \(z_{\rm rad}\) summarise the macroscopic characteristics of the complicated self-interacting system that we have numerically simulated in more than one hundred runs. The quality of the fitting procedure has been checked for all of our fitting formulas via the \(R^{2}\) method, always yielding \(R^{2}>0.99\). Given the simplicity and reliability of our results, the parametric formulas deliver a ready-to-use description of the main phases of heating, eliminating the need for further simulations over (and beyond) the parameter space we have explored.
Although the spectator scalar field achieves a radiation-like equation of state at \(z_{\rm rad}\), a fully thermalised state is not reached until much later, once the non-linear rescattering processes have redistributed the available energy-density within a large range of modes [39; 40; 41; 10]. Nonetheless, it is convenient to define an _instantaneous temperature_ scale,
\[T_{\rm ht}=\left(\frac{30\,\rho_{\rm ht}^{\chi}}{\pi^{2}g_{*}^{\rm ht}} \right)^{1/4}\,, \tag{3.3}\]
with \(g_{*}^{\rm ht}=106.75\) the Standard Model number of relativistic degrees of freedom at energies above \(\mathcal{O}(100)\) GeV and \(\rho_{\rm ht}^{\chi}=\rho^{\chi}(z_{\rm ht})=\rho^{\phi}(z_{\rm ht})\) the total energy-density of the scalar field at the end of the heating phase. This temperature should be understood as a mere estimate of the typical energy of a particle at the onset of radiation domination, coinciding with the usual notion of _(re)heating temperature_ in the fast thermalisation limit. Independently of its thermal character, this scale plays an important role in the production of dark matter relics via UV-freeze-in [43; 44] and many mechanisms aiming to explain the observed matter-antimatter asymmetry of the Universe [45]. Assuming that the overall equation of state of the Universe is \(w\approx 1\) until the end of heating, we obtain
\[T_{\rm ht}\simeq 2.7\times 10^{8}\,{\rm GeV}\left(1+\frac{z_{\rm rad}}{\nu} \right)^{-3/4}\,\left(\frac{\Theta_{\rm ht}\,a_{\rm kin}^{2}}{10^{-8}}\right)^ {3/4}\left(\frac{H_{\rm kin}}{10^{11}\,{\rm GeV}}\right)^{1/2}\,, \tag{3.4}\]
with the heating efficiency \(\Theta_{\rm ht}\) containing an implicit dependence on the scale of kination, cf. Eq. (3.2). As expected, due to their long-lasting tachyonic instabilities, simulations with smaller values of \(\lambda\) are characterised by a significantly higher heating temperature, up to \(T_{\rm ht}\sim 10^{7}\,{\rm GeV}\) for \(H_{\rm kin}=10^{11}\,{\rm GeV}\). However, even the lowest temperature in our simulations, \(T_{\rm ht}\sim 10^{3}\,{\rm GeV}\), lies well above the Planck and big bang nucleosynthesis bounds on the heating temperature, \(T_{\rm ht}\geq 5\,{\rm MeV}\)[46; 47].
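As a quick numerical illustration, Eq. (3.4) can be evaluated for representative inputs; the values of \(\Theta_{\rm ht}\) and \(z_{\rm rad}\) used below are illustrative stand-ins for the fits (2.31) and (3.2), with \(a_{\rm kin}=1\):

```python
def heating_temperature(Theta_ht, z_rad, nu, H_kin_GeV, a_kin=1.0):
    """Instantaneous temperature estimate of Eq. (3.4), returned in GeV."""
    return (2.7e8
            * (1.0 + z_rad / nu) ** (-0.75)
            * (Theta_ht * a_kin ** 2 / 1e-8) ** 0.75
            * (H_kin_GeV / 1e11) ** 0.5)

# Illustrative inputs (not the fitted values of this work):
print(heating_temperature(Theta_ht=1e-8, z_rad=50.0, nu=100.0, H_kin_GeV=1e11))
```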
The parametric formulas for \(\rho_{\rm rad}^{\chi}\) and \(z_{\rm rad}\) also allow for the computation of the minimal number of \(e\)-folds of inflation \(N\) needed to solve the flatness and horizon problems in scenarios where the Hubble-induced phase transition is the only particle production channel. As computed in Ref. [42], this is given by
\[N=-\ln\left(\frac{k_{\rm hc}}{a_{0}H_{0}}\right)+\ln\left(\frac{H_{\rm hc}}{H_{0}}\right)+\frac{1}{4}\ln\left(\frac{\rho_{\rm mat}}{\rho_{\rm hc}}\right)+\ln\left(\frac{a_{\rm mat}}{a_{0}}\right)-\frac{1}{2}\ln\left(\frac{a_{\rm kin}}{a_{\rm rad}}\right)+\frac{1}{4}\ln\left(\frac{\rho_{\rm hc}}{\rho_{\rm kin}}\right)\,, \tag{3.5}\]
with the subscript \(hc\) referring to quantities evaluated at the horizon crossing of the pivot scale \(k_{\rm hc}=0.002\,{\rm Mpc}^{-1}\), the subscripts _kin_, _rad_, _mat_ denoting quantities evaluated at the beginning of kination, radiation and matter domination, respectively, and 0 indicating their present-day values. Numerically this corresponds to [42]
\[N\simeq 63.3+\frac{1}{4}\ln\left(\frac{r}{10^{-3}}\right)-\frac{1}{4}\ln \left(\frac{\Theta_{\rm ht}a_{\rm kin}^{2}}{10^{-8}}\right)\,+\frac{1}{4}\ln \left(1+\frac{z_{\rm rad}}{\nu}\right) \tag{3.6}\]
where the parametric formulas (2.31) and (3.2) can be directly included.
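A minimal numerical sketch of Eq. (3.6), with \(a_{\rm kin}=1\) and illustrative values of \(\Theta_{\rm ht}\) and \(z_{\rm rad}\) standing in for the parametric fits, reads:

```python
import numpy as np

def efolds(r, Theta_ht, z_rad, nu, a_kin=1.0):
    """Minimal number of e-folds of inflation, cf. Eq. (3.6)."""
    return (63.3
            + 0.25 * np.log(r / 1e-3)
            - 0.25 * np.log(Theta_ht * a_kin ** 2 / 1e-8)
            + 0.25 * np.log(1.0 + z_rad / nu))

# Illustrative inputs only:
print(efolds(r=1e-3, Theta_ht=1e-8, z_rad=50.0, nu=100.0))
```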
### Impact on inflationary observables
The heating process summarised in the parametric formulas (3.2) and (3.4) plays an important role when computing the total duration of inflation, with the contribution of the heating efficiency in Eq. (3.6) accounting for up to \(\sim 4\)\(e\)-folds of expansion history for \(H_{\rm kin}=10^{11}\,\,{\rm GeV}\). Given the capability of future Cosmic Microwave Background experiments to detect a tensor-to-scalar ratio at the level of \(r=\mathcal{O}(10^{-3}-10^{-4})\) while marginally improving the current constraints on the spectral tilt \(n_{s}\)[1; 3; 4], it might eventually become possible to set bounds on the physics of heating [48; 49]. Having this in mind, let us make use of the lattice-based relations (3.6) and (3.2) to determine the precise value of the inflationary observables in a specific quintessential-inflation scenario involving only gravitational particle production. Among the many inflationary models in the literature, one can consider, for instance, the simple \(\alpha\)-attractor family,
\[\mathcal{L}_{\phi}=-\frac{\partial_{\mu}\phi\partial^{\mu}\phi}{2\left(1- \frac{\phi^{2}}{6\alpha M_{P}^{2}}\right)^{2}}-V(\phi)\;, \tag{3.7}\]
encoding, among others, several incarnations of the Higgs inflation paradigm [50; 51] and covering a large portion of the \((n_{s},\,r)\) space [52] currently favoured by the Planck/BICEP2/Keck collaborations [53] using a single parameter \(\alpha\). The presence of the pole restricts the field's movement in field space, imposing a bound on its value. By transforming to a canonical field, the pole can be effectively shifted to infinity while stretching the potential for the canonical variable near the pole, resulting in the emergence of plateaus [54; 55; 56].
Although initially formulated for oscillatory models, the \(\alpha\)-attractor idea can be easily adapted to the quintessential inflation paradigm by considering non-symmetric potentials [57; 58; 59]. The simplest realisation involves a linear term, adjusted to satisfy the Planck/COBE normalisation constraint, and a small cosmological constant \(\Lambda\) responsible for the late-time acceleration of the Universe, namely \(V(\phi)=\gamma\phi+\Lambda\) with \(\gamma/(\sqrt{\alpha}M_{P}^{3})\sim 2\times 10^{-11}\) in order to satisfy the normalisation of the primordial power spectrum [57]. When written in terms of a canonically-normalised field \(\varphi=\sqrt{6\alpha}M_{P}\,{\rm arctanh}\,\phi/(\sqrt{6\alpha}M_{P})\), this potential becomes effectively stretched,
\[V(\varphi)=\sqrt{6\alpha}\gamma M_{P}\left[\tanh\left(\frac{\varphi}{\sqrt{6 \alpha}M_{P}}\right)+1\right]+\Lambda\,, \tag{3.8}\]
coinciding in the large field limit \(\varphi\gg\sqrt{6\alpha}M_{P}\) with the T-model potential and sharing therefore the very same predictions for the spectral tilt \(n_{s}\), the tensor-to-scalar ratio \(r\) and spectral amplitude \(\mathcal{A}_{s}\),
\[n_{s}=1-\frac{2}{N}\,,\hskip 28.452756ptr=\frac{12\alpha}{N^{2}}\,, \hskip 28.452756pt\mathcal{A}_{s}=\frac{V_{0}N^{2}}{18\pi^{2}\alpha M_{P}^{2}}. \tag{3.9}\]
Combining these expressions with Eqs. (3.2) and (3.6), and neglecting the variation of the Hubble rate during inflation, \(V_{0}\simeq 3M_{P}^{2}H_{\rm kin}^{2}\), we can link the inflationary observables to the _heating efficiency_ following from our detailed lattice simulations. The result is displayed in Figure 7, where we have computed \(r\) and \(n_{s}\) assuming specific values for the Hubble scale \(H_{\rm kin}\). The vertical grey lines correspond to the predictions for the maximum and minimum number of inflationary \(e\)-folds (\(N_{\rm max}=65.0\), \(N_{\rm min}=61.9\)) obtained from the minimum and maximum heating efficiency respectively (cf. Figure 6). The region between the black lines is therefore the full area spanned by the parameter space \((\lambda,\nu)\) of our simulations.
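Since \(r\) itself enters the e-fold count (3.6), the predictions can be obtained by a simple fixed-point iteration. The sketch below uses illustrative values of \(\Theta_{\rm ht}\), \(z_{\rm rad}\) and \(\nu\) in place of the lattice fits:

```python
import numpy as np

def alpha_attractor_observables(alpha, Theta_ht, z_rad, nu, n_iter=20):
    """Iterate Eqs. (3.6) and (3.9) to a consistent set (N, n_s, r)."""
    N = 60.0                                    # initial guess
    for _ in range(n_iter):
        r = 12.0 * alpha / N ** 2
        N = (63.3 + 0.25 * np.log(r / 1e-3)
             - 0.25 * np.log(Theta_ht / 1e-8)   # a_kin = 1 assumed
             + 0.25 * np.log(1.0 + z_rad / nu))
    n_s = 1.0 - 2.0 / N
    return N, n_s, r

print(alpha_attractor_observables(alpha=1.0, Theta_ht=1e-8, z_rad=50.0, nu=100.0))
```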
### Beyond the single field approximation
The prototypical case of a \(\mathds{Z}_{2}\)-symmetric scalar field considered in the previous section provides us with an extensive understanding of the tachyonic phase (from both the analytical and the numerical point of view) and of the timescales of the heating process. This is to be understood as a basic scenario whose phenomenology can be extended to a more elaborate field content.
Let us consider, for instance, a scale-free interaction \(\frac{1}{2}g^{2}\chi^{2}\varphi^{2}\) that respects the global \(\mathds{Z}_{2}\) symmetry of the spectator field, with \(\varphi\) some scalar field in a beyond the Standard Model sector and \(g^{2}\) a dimensionless coupling constant. This term, customary in the literature of heating [38, 60], opens up the possibility of depleting the spectator field energy density via perturbative and non-perturbative effects. Given the non-linear nature of the self-scattering interaction and the coupling between fields, let us turn once more to lattice simulations. Due
Figure 7: Predicted values of tensor-to-scalar ratio and spectral tilt in an \(\alpha\)-attractor quintessential-inflation scenario involving only gravitational particle production. Lines of different colours correspond to different values of the \(H_{\rm kin}\) parameter, where the end points correspond to the largest (left) and smallest (right) heating efficiencies achieved in our simulations. The vertical grey lines delimit the region allowed by the numerically-obtained heating efficiency. The results are shown in comparison with the constraints from Planck + Bicep/Keck at one and two sigma.
to the fast population of the daughter field's UV modes, a larger lattice is generically needed to cover the evolution of the field's fluctuations, especially in the case of large \(g\) and prolonged tachyonic phases for \(\chi\) (i.e. small \(\lambda\)). In order to maintain a good coverage of all momenta within a lattice box of \(N=256\), we will restrict ourselves to model parameters in the fiducial intervals \(g\in[10^{-4},\,10^{-3}]\), \(\nu\in[10,\,20]\) and \(\lambda\in[10^{-5},10^{-1}]\), considering only particular benchmark points due to the longer computational time needed for each simulation.
The time-evolution of the different terms contributing to the total energy-density in a typical simulation is displayed in Fig. 8. As noted already in Section 2.3, the overall evolution of the spectator field is characterised by three main phases: the initial tachyonic amplification of the field's modes, the evolution towards virialisation and the thermalisation of the system. On top of these, we can observe one further timescale related to the daughter field. The _equipartition_ timescale marks the moment when the total energy density is evenly distributed between the two scalar fields. A numerical computation of this timescale is a challenging task, since it can require a very long time to be reached, depending on the production mechanisms and on their efficiency. We simply define it through the condition \(\rho^{\chi}(z_{\rm eq})=\frac{1}{2}\rho^{\rm tot}(z_{\rm eq})\) for those simulations in which equipartition is reached within their total duration.
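In practice, \(z_{\rm eq}\) can be read off from the simulation output by locating the first crossing of the equipartition condition. A minimal sketch, using mock arrays in place of actual lattice data, is:

```python
import numpy as np

def find_z_eq(z, rho_chi, rho_tot):
    """First time at which rho_chi reaches half of the total energy density."""
    above = (rho_chi / rho_tot) >= 0.5
    if not above.any():
        return None                      # equipartition not reached in the run
    return z[np.argmax(above)]

# Mock data standing in for lattice output (illustrative only):
z = np.linspace(0.0, 500.0, 5001)
rho_tot = np.ones_like(z)
rho_chi = 0.5 * (1.0 - np.exp(-z / 300.0)) * rho_tot
print(find_z_eq(z, rho_chi, rho_tot))    # None: equipartition not reached here
```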
The parametric dependence of the timescales \(z_{\rm br}\), \(z_{\rm rad}\) and of the energy densities \(\rho^{\chi}_{\rm br}\), \(\rho^{\chi}_{\rm rad}\) following from our simulations is presented in Fig. 9. We observe that, for the values of \(g\) considered in our simulations, the end of the tachyonic instability is caused primarily by the self-interactions of the spectator scalar field, thus matching closely the results obtained in the single field case. After the peak production of the spectator field modes, the associated kinetic and gradient energies evolve as radiation, with both fields contributing to the overall virialisation of the system. The timescale \(z_{\rm rad}\) is computed numerically by extending the condition in (2.30) to a 2-component interacting fluid, i.e. by
Figure 8: Energy components of the model as defined in (2.23). Each component has been multiplied by \(a^{4}\) in order to highlight the radiation-like behaviour. In this figure, the model parameters have been set to \(\nu=20\), \(\lambda=10^{-3}\), \(g=10^{-3}\).
including both fields' kinetic, gradient and interaction terms. Again, we notice good agreement between \(z_{\rm rad}\) in the single- and two-field scenarios. The interval between \(z_{\rm rad}\) and \(z_{\rm eq}\) has a variable duration, depending on many factors, including the intensity of the production processes, the appearance of parametric resonances and the magnitude of the coupling constant \(g\). For the cases (\(g=10^{-3},\lambda=10^{-3}\)) and (\(g=10^{-4},\lambda=10^{-4}\)), the two fields reach equipartition of the total energy already during the virialisation phase. This phenomenon induces stronger oscillations in the spectator field, which makes the estimation of \(z_{\rm rad}\) somewhat less reliable, see the middle panel of Fig. 9. On the other hand, a short equipartition timescale allows us to compute it numerically for the special cases (\(g=10^{-3},\lambda=10^{-3}\)) and (\(g=10^{-4},\lambda=10^{-4}\)), finding it to be in the range \(250<z_{\rm eq}<400\). After rescattering between the fields has redistributed the total energy density, the system evolves towards the end of the heating phase.
Figure 9: Dependence of the timescales \(z_{\rm br}\) (first panels), \(z_{\rm rad}\) (third panels) and of the energy densities \(\rho_{\rm br}^{\chi}\) (second panels) and \(\rho_{\rm rad}^{\chi}\) (fourth panels) on the model parameters \(\nu\) and \(\lambda\). The daughter coupling has been set to \(g=10^{-3}\) for all simulations. Dots represent the numerically-computed timescales in the two-fields scenario. Dashed lines refer to the fitting formulas obtained from Eq. (2.27) and (2.33) while solid lines are derived from the lattice simulations of the single-spectator-field scenario. Lighter lines correspond to smaller quartic self-couplings (down to \(\lambda=10^{-5}\)) while darker lines to larger couplings, up to \(\lambda=10^{-1}\).
We notice once more that the results in Fig. 9 are in excellent agreement with the previous results for a single spectator field. Lastly, Fig. 10 displays the heating efficiency for the two-field setup with \(g=10^{-3}\) and \(g=10^{-4}\) compared with the parametric formula obtained from Eq. (3.2).
It is interesting to notice that for sufficiently large couplings \(g\sim\mathcal{O}(0.1)\) and small self-interactions \(\lambda\), the backreaction is expected to be produced by the daughter field itself. In this scenario, the rapid growth of \(\varphi\) fluctuations could dynamically generate an effective mass for the spectator field at large times, even if the quartic self-coupling is completely negligible (\(\lambda\to 0\)). Unfortunately, the energy transfer to UV modes in this case is fast and affects a large band of momenta that cannot be properly covered by a lattice with \(N=512\), the largest one we are currently able to simulate. Therefore, we leave the testing of this enticing scenario to a future work.
## 4 Conclusions and outlook
In the present article, we have carried out an extensive analysis of the Hubble-induced heating mechanism appearing in quintessential inflation scenarios involving spectator fields non-minimally coupled to gravity. By performing a large number of 3+1-dimensional classical lattice simulations, we have probed a wide range of the parameter space of the theory, numerically determining the main timescales in the process, namely the initial tachyonic instability, the onset of backreaction effects and the virialisation phase leading to a radiation-like equation-of-state for the spectator field. These findings allowed us to derive a set of parametric equations that can be conveniently applied to evaluate the efficiency of the heating process, the temperature at the onset of radiation domination and the minimum number of
Figure 10: Parametric dependence of the heating efficiency \(\Theta_{\rm ht}\) on the two model parameters \(\lambda\) and \(\nu\). Dashed lines correspond to the fitting formulas obtained for the single-scalar-field case. The colour coding matches the one defined for the previous figures.
\(e\)-folds of inflation required to solve the flatness and horizon problems in specific quintessential scenarios, without relying on further lattice simulations. In this sense, our results constitute a major step forward in addressing the question of heating in non-oscillatory models of inflation [42; 58; 61; 62; 63], within a minimal, natural and non-perturbative setup. In particular, since the gravitational coupling considered in this work is required by the renormalisation of the energy-momentum tensor in curved space-time [64; 29], our findings provide a lower bound on the heating efficiency and the associated radiation temperature, allowing generically for successful big bang nucleosynthesis even in the absence of direct couplings between the inflaton and the matter sector.
For the sake of completeness, we also explored a prototypical yet significant extension of the single-field scenario. By introducing an explicit coupling with a secondary scalar sector beyond the standard model, we have assessed the potential depletion of the spectator field fluctuations prior to the onset of non-linearities. The two-field scenario constitutes an excellent testing ground for the robustness of the single-scalar-field fitting formulas. Indeed, our analysis shows that, for moderate values of the coupling between fields, the results of the single-field model constitute an accurate description of the two-fields model as well. Note also that these results can be directly applied to scenarios involving Abelian gauge bosons as daughter fields, as first shown in Ref. [37] in the context of parametric resonance and explicitly verified here.
Our approach could be easily adapted to scenarios far beyond the toy-model cases considered in this work. The most direct application would be the Standard Model itself, with a Higgs field stable up to the Planck scale [34] playing the role of the spectator field. 2 In this context, the analysis of secondary interactions becomes extremely important, since the tachyonic production of Higgs perturbations is expected to compete with the secondary amplification of gauge bosons and their potential decay into fermions, along the lines of the _Combined Preheating_ scenario developed in Refs. [66; 35; 67]. Interestingly enough, the inclusion of these dissipative decay channels could strongly modify the single-field picture presented in this paper, opening the door to stabilising the defects generated during the transition and, with it, modifying the associated gravitational-wave spectrum [68; 24].
Footnote 2: For works assuming the _instability_ of the electroweak vacuum at high energies, see e.g. Refs. [11; 65; 9].
Further applications of our setting could include models with a larger field content, involving, for instance, additional dark matter or beyond-the-Standard-Model sectors, as well as higher-dimensional operators. In this context, it would be interesting to explore the early non-thermal production of dark matter relics [69; 70; 71] and the generation of the matter-antimatter asymmetry in the Universe [23]. Finally, directly coupled gauge fields, involving for instance Chern-Simons interactions, could also be considered as interesting secondary fields with a rich phenomenology, e.g. in the context of primordial magnetogenesis.
###### Acknowledgements.
We thank Dario Bettoni and Asier Lopez-Eiguren for useful discussions. G.L. (ORCID 0000-0002-4739-4946) is supported by a fellowship from "la Caixa" Foundation (ID 100010434) with fellowship code LCF/BQ/DI21/11860024. G. L. thanks also the Fundacao para a Ciencia e a Tecnologia (FCT), Portugal, for the financial support to the Center for Astrophysics and Gravitation-CENTRA, Instituto Superior Tecnico, Universidade de Lisboa, through the Project No. UIDB/00099/2020. J.R. (ORCID ID 0000-0001-7545-1533) is supported by a Ramon y Cajal contract of the Spanish Ministry of Science and Innovation with
Ref. RYC2020-028870-I. The authors acknowledge the Department of Theoretical Physics of the Universidad Complutense de Madrid for allowing access to the cluster facilities utilised for the research carried out in this paper.
|
2305.19155 | Flavor, CP and Metaplectic Modular Symmetries in Type IIB Chiral Flux
Vacua | We examine symmetries of chiral four-dimensional vacua of Type IIB flux
compactifications with vanishing superpotential $W=0$. We find that the ${\cal
N}=1$ supersymmetric MSSM-like and Pati-Salam vacua possess enhanced discrete
symmetries in the effective action below the mass scale of stabilized complex
structure moduli and dilaton. Furthermore, a generation number of
quarks/leptons is small on these vacua where the flavor, CP and metaplectic
modular symmetries are described in the framework of eclectic flavor symmetry. | Takafumi Kai, Keiya Ishiguro, Hiroshi Okada, Hajime Otsuka | 2023-05-30T15:57:24Z | http://arxiv.org/abs/2305.19155v3 | # Flavor, CP and Metaplectic Modular Symmetries in Type IIB Chiral Flux Vacua
###### Abstract
We examine symmetries of chiral four-dimensional vacua of Type IIB flux compactifications with vanishing superpotential \(W=0\). We find that the \({\cal N}=1\) supersymmetric MSSM-like and Pati-Salam vacua possess enhanced discrete symmetries in the effective action below the mass scale of stabilized complex structure moduli and dilaton. Furthermore, a generation number of quarks/leptons is small on these vacua where the flavor, CP and metaplectic modular symmetries are described in the framework of eclectic flavor symmetry.
+
Footnote †: institutetext: Department of Physics, University of Wisconsin, Madison, WI 53706, USA
## 1 Introduction
The string theory predicts a huge number of low-energy effective field theories, the so-called string theory landscape. In particular, background fluxes in extra-dimensional spaces lead to the rich and attractive vacuum structure of the string landscape, which will be quantified by a statistical study [1; 2; 3] as well as the swampland program [4; 5; 6]1. It is known that the statistical study of Type IIB flux vacua is a powerful approach to address the vacuum distribution and selection rules on the moduli spaces.
Footnote 1: See for a review, e.g., Ref. [7].
In Type IIB flux compactifications on \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\) orientifolds, the distribution of complex structure moduli fields was known to be clustered at fixed points of \(SL(2,\mathbb{Z})\) modular symmetry of the torus [8; 9], where the fixed points in the \(SL(2,\mathbb{Z})\) moduli space correspond to \(\tau=i,\omega,i\infty\) with \(\omega=\frac{-1+\sqrt{3}i}{2}\), each with enhanced symmetries. Remarkably, probabilities of moduli values are peaked at a \(\mathbb{Z}_{3}\) fixed point \(\tau=\omega\), indicating that discrete \(\mathbb{Z}_{3}\) symmetry remains in the low-energy effective action of moduli fields [9]. Such a novel feature about the distribution of flux vacua was explored in this simple toroidal orientifold but it is expected to appear in a more generic Calabi-Yau moduli space with symplectic modular symmetry.
In this paper, we further examine semi-realistic four-dimensional (4D) vacua with Standard Model (SM) spectra. Since a generation number of fermions is determined by background fluxes on magnetized D-branes, the generation number and three-form fluxes stabilizing moduli fields will be correlated through tadpole cancellation conditions of D-branes.
It is interesting to reveal how much 3-generation models are distributed in the flux landscape. Furthermore, the flavor symmetries of quarks and leptons will also be related to the modular symmetry of the torus, because moduli-dependent Yukawa couplings transform under the moduli symmetry [10]. For illustrative purpose, we deal with the simple \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\) orientifolds. By analyzing physically-distinct configurations of background fluxes leading to vanishing superpotential \(W=0\), we find that the generation number of quarks and leptons is restricted to be small due to the tadpole cancellation condition. Furthermore, flavor symmetry, CP, and modular symmetry in semi-realistic 4D vacua are uniformly described in the context of eclectic flavor symmetry [11; 12] as developed in both the top-down and bottom-up approaches [11; 12; 13; 14; 15; 16].
This paper is organized as follows. In Sec. 2, we first review the flux compactifications on \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\). Next, we incorporate specific magnetized D-brane models without and with a discrete \(B\) field in Secs. 2.2 and 2.3, respectively. It turned out that the string landscape leads to the small generation number of quarks and leptons. In Sec. 3, we begin with the metaplectic modular symmetry in Sec. 3.1, which can be realized in \(T^{2}\) and \(T^{2}/\mathbb{Z}_{2}\) orbifold with magnetic fluxes as discussed in Secs. 3.2 and 3.3, respectively. The CP transformation will be unified in the context of generalized modular symmetry in Sec. 3.4. Finally, we discuss the unification of flavor, CP, and modular symmetries in Type IIB chiral 4D flux vacua in Sec. 3.5. Sec. 4 is devoted to the conclusion.
## 2 Moduli distributions in Type IIB flux vacua with SM spectra
In Sec. 2.1, we first review the vacuum structure of Type IIB flux compactification on \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\) orientifolds. Next, we introduce semi-realistic magnetized D-brane models in Type IIB flux vacua, taking into account the tadpole cancellation conditions in Secs. 2.2 and 2.3. It is found that the generation number of quarks and leptons is restricted to be small due to the tadpole cancellation condition.
### Flux compactifications on \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\) orientifolds
We begin with the Type IIB flux compactifications on \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\) orientifolds without discrete torsion, following the notation of Ref. [17]. The complex coordinates of \(T^{6}=(T^{2})_{1}\times(T^{2})_{2}\times(T^{2})_{3}\) are defined as \(z_{i}=x_{i}+\tau_{i}y_{i}\) with \(i=1,2,3\), subject to the following \(\mathbb{Z}_{2}\) orbifoldings:
\[\theta:(z_{1},z_{2},z_{3}) \rightarrow(-z_{1},-z_{2},z_{3}),\] \[\theta^{\prime}:(z_{1},z_{2},z_{3}) \rightarrow(z_{1},-z_{2},-z_{3}). \tag{1}\]
Furthermore, the orientifold projection acts \(z_{i}\) as
\[\mathcal{R}:(z_{1},z_{2},z_{3})\rightarrow(-z_{1},-z_{2},-z_{3}), \tag{2}\]
in addition to the world-sheet parity \(\Omega\). It results in 64 O3-planes, 4O7\({}_{1}\)-, 4O7\({}_{2}\)-, 4O7\({}_{3}\)-planes, located at a fixed point of \(\mathcal{R}\theta^{\prime}\), \(\mathcal{R}\theta\theta^{\prime}\), \(\mathcal{R}\theta\), respectively.
On the toroidal ambient space, i.e., \(T^{6}\), the following three forms are invariant under \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime}\) symmetry:
\[\alpha_{0}=dx_{1}\wedge dx_{2}\wedge dx_{3}, \beta^{0}=dy_{1}\wedge dy_{2}\wedge dy_{3},\] \[\alpha_{1}=dy_{1}\wedge dx_{2}\wedge dx_{3}, \beta^{1}=-dx_{1}\wedge dy_{2}\wedge dy_{3},\] \[\alpha_{2}=dx_{1}\wedge dy_{2}\wedge dx_{3}, \beta^{2}=-dy_{1}\wedge dx_{2}\wedge dy_{3},\] \[\alpha_{3}=dx_{1}\wedge dy_{2}\wedge dx_{3}, \beta^{3}=-dy_{1}\wedge dy_{2}\wedge dx_{3}. \tag{3}\]
with \(\int_{T^{6}}\alpha_{I}\wedge\beta^{J}=\delta_{I}^{J}\). To define the complex structure moduli \(\tau_{i}\), we expand the holomorphic three-form on the above bases:
\[\Omega_{3}=dz_{1}\wedge dz_{2}\wedge dz_{3}=\sum_{I=0}^{3}\left( X^{I}\alpha_{I}-F_{I}\beta^{I}\right), \tag{4}\]
where the homogeneous coordinates \(X^{I}\) and the derivatives \(F_{I}=\partial_{X^{I}}F\) of the prepotential \(F\) are defined as
\[X^{I}=\int_{A^{I}}\Omega_{3},\qquad F_{I}=\int_{B_{I}}\Omega_{3}. \tag{5}\]
These explicit forms are now given by
\[X^{0}=1, F_{0}=-\tau_{1}\tau_{2}\tau_{3},\] \[X^{1}=\tau_{1}, F_{1}=\tau_{2}\tau_{3},\] \[X^{2}=\tau_{2}, F_{2}=\tau_{1}\tau_{3},\] \[X^{3}=\tau_{3}, F_{3}=\tau_{1}\tau_{2}. \tag{6}\]
Note that the three-cycles \(\{A^{I},B_{I}\}\) correspond to the Poincare-dual basis of the three-forms (3), satisfying \(A^{I}\cap A^{J}=B_{I}\cap B_{J}=0\) and \(A^{I}\cap B_{J}=\delta_{J}^{I}\) with \(I,J=0,...,3\). The \(\Pi_{i=1}^{3}SL(2,\mathbb{Z})_{i}\) modular symmetries of the factorizable torus \((T^{2})_{i}\) can be seen on these coordinates. As discussed in Ref. [18], two generators of \(\Pi_{i=1}^{3}SL(2,\mathbb{Z})_{i}\):
\[S_{(i)}:\tau_{i}\rightarrow-1/\tau_{i},\quad T_{(i)}:\tau_{i} \rightarrow\tau_{i}+1, \tag{7}\]
are embedded into \(8\times 8\) matrices, e.g.,
\[S_{(1)}=\begin{pmatrix}0&-1&0&0&0&0&0\\ 1&0&0&0&0&0&0\\ 0&0&0&0&0&0&-1\\ 0&0&0&0&0&-1&0\\ 0&0&0&0&-1&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&1&0&0&0&0\end{pmatrix},\quad T_{(1)}=\begin{pmatrix}1&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&1&-1&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&1&0&0&1&0\\ 0&0&1&0&0&0&1\end{pmatrix}. \tag{8}\]
The other generators are also constructed by flipping the corresponding moduli fields. Thus, the modular groups are the subgroup of \(Sp(8,\mathbb{Z})\), which is the symplectic modular symmetry of the complex structure moduli space in the homogeneous coordinates.
It was known that the 4D kinetic terms of the closed string moduli, i.e., three complex structure moduli \(\tau_{i}\), the axio-dilaton \(S\) and three Kahler moduli \(T_{i}\) are derived from the following Kahler potential in units of the reduced Planck mass \(M_{\rm Pl}=1\):
\[K=-\ln(i(\tau_{1}-\bar{\tau}_{1})(\tau_{2}-\bar{\tau}_{2})(\tau_{3}-\bar{\tau}_ {3}))-\ln(i(\bar{S}-S))-2\ln{\cal V}(T,\bar{T}), \tag{11}\]
where \({\cal V}\) denotes the torus volume in units of the string length \(l_{s}=2\pi\sqrt{\alpha^{\prime}}\). The moduli superpotential is induced by background three-form fluxes in Type IIB string theory. Throughout this paper, we focused on the stabilization of complex structure moduli and axio-dilaton. Let us introduce the background Ramond-Ramond (RR) \(F_{3}\) and Neveu-Schwarz three forms \(H_{3}\) as follows:
\[\frac{1}{l_{s}^{2}}F_{3}=a^{0}\alpha_{0}+a^{i}\alpha_{i}+b_{i} \beta^{i}+b_{0}\beta^{0},\] \[\frac{1}{l_{s}^{2}}H_{3}=c^{0}\alpha_{0}+c^{i}\alpha_{i}+d_{i} \beta^{i}+d_{0}\beta^{0}, \tag{12}\]
where \(\{a^{0,1,2,3},b_{0,1,2,3},c^{0,1,2,3},d_{0,1,2,3}\}\) correspond to the integral flux quanta. They lead to the flux-induced superpotential in the 4D effective action [19]:
\[W =\frac{1}{l_{s}^{2}}\int G_{3}\wedge\Omega\] \[=a^{0}\tau_{1}\tau_{2}\tau_{3}+c^{1}S\tau_{2}\tau_{3}+c^{2}S\tau_ {1}\tau_{3}+c^{3}S\tau_{1}\tau_{2}-\sum_{i=1}^{3}b_{i}\tau_{i}+d_{0}S\] \[-c^{0}S\tau_{1}\tau_{2}\tau_{3}-a^{1}\tau_{2}\tau_{3}-a^{2}\tau_ {1}\tau_{3}-a^{3}\tau_{1}\tau_{2}+\sum_{i=1}^{3}d_{i}S\tau_{i}-b_{0}. \tag{13}\]
In Ref. [20], the moduli stabilization was performed in the isotropic regime, namely
\[\tau\equiv\tau_{1}=\tau_{2}=\tau_{3}, \tag{14}\]
with overall flux quanta:
\[a\equiv a^{1}=a^{2}=a^{3},\quad b\equiv b_{1}=b_{2}=b_{3},\quad c\equiv c^{1} =c^{2}=c^{3},\quad d\equiv d_{1}=d_{2}=d_{3}. \tag{15}\]
The moduli vacuum expectation values (VEVs) are given by
\[\langle S\rangle=\frac{r\tau+s}{u\tau+v}\,, \tag{16}\]
for the axio-dilaton and
\[\langle\tau\rangle =\frac{-m+\sqrt{m^{2}-4ln}}{2l} (l,n>0)\,,\] \[\langle\tau\rangle =\frac{-m-\sqrt{m^{2}-4ln}}{2l} (l,n<0)\,,\]
for the overall complex structure modulus, respectively. Here, we redefine the flux quanta
\[rl=a^{0},\,\,\,rm+sl=-3a,\,\,\,rn+sm=-3b,\,\,\,sn=-b_{0},\] \[ul=c^{0},\,\,um+vl=-3c,\,\,un+vm=-3d,\,\,vn=-d_{0}. \tag{15}\]
Since the effective action is invariant under the \(SL(2,\mathbb{Z})_{\tau}\equiv SL(2,\mathbb{Z})_{1}=SL(2,\mathbb{Z})_{2}=SL(2, \mathbb{Z})_{3}\) and \(SL(2,\mathbb{Z})_{S}\) modular symmetries, one can count a finite number of physically-distinct flux vacua.2 Note that we have to be careful about the tadpole cancellation condition of D-brane charges because we deal with a compact manifold. In particular, we focus on the cancellation condition of the D3-brane charge; other conditions will be analyzed in the next subsections. Specifically, the flux-induced D3-brane charge
Footnote 2: The modular symmetry was classified in Ref. [21] in the context of flux compactifications.
\[N_{\rm flux}=\frac{1}{l_{s}^{4}}\int H_{3}\wedge F_{3}=c^{0}b_{0}-d_{0}a^{0}+ \sum_{i=1}^{3}(c^{i}b_{i}-d_{i}a^{i})=c^{0}b_{0}-d_{0}a^{0}+3(cb-da) \tag{16}\]
satisfies
\[0\leq N_{\rm flux}\leq N_{\rm flux}^{\rm max}=\mathcal{O}(10^{5})\,, \tag{17}\]
where we suppose that Type IIB orientifolds are extended to F-theory in the strong coupling regime, and \(N_{\rm flux}^{\rm max}=\mathcal{O}(10^{5})\) is the largest value, as discussed in Refs. [22; 23]. This allows us to understand the vacuum structure of the string landscape more specifically, as will be shown later. Furthermore, each flux quantum is a multiple of 8, that is, \(\{a^{0},a,b,b_{0},c^{0},c,d,d_{0}\}\in 8\,\mathbb{Z}\), and correspondingly \(N_{\rm flux}\in 192\,\mathbb{Z}\). Since the effective action, as well as the tadpole charge, is invariant under the modular symmetry, one can map the moduli VEVs into the fundamental domains. The number of stable vacua is shown in Figure 1, which exhibits a huge degeneracy at the fixed points in the \(SL(2,\mathbb{Z})_{\tau}\) moduli space. In particular, the \(\tau=\omega\) vacuum is realized with a high probability, namely 62.3 % for \(N_{\rm flux}^{\rm max}=192\times 10\) and 40.3 % for \(N_{\rm flux}^{\rm max}=192\times 1000\) [9]. This can also be justified by a statistical argument. By treating the flux quanta as continuous, the number of supersymmetric \(W=0\) vacua is analytically estimated as [8]
\[N_{\rm vacua}\sim\frac{\pi^{2}}{108}\frac{(N_{\rm flux}^{\rm max})^{2}}{(t(m^ {2}-4ln))^{2}}, \tag{18}\]
with
\[t(x)=\left\{\begin{array}{ll}x&\quad\mbox{for}\,x\equiv 0\pmod{3}\\ 3x&\quad\mbox{otherwise}\end{array}\right.\,. \tag{19}\]
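A direct transcription of the estimate (18) together with the definition (19), evaluated at the \(\mathbb{Z}_{3}\) fixed point \((l,m,n)=(1,-1,1)\) as an example:

```python
import math

def t(x):
    """Eq. (19)."""
    return x if x % 3 == 0 else 3 * x

def n_vacua_estimate(N_flux_max, l, m, n):
    """Continuous-flux estimate (18) of the number of supersymmetric W = 0 vacua."""
    disc = m * m - 4 * l * n
    return (math.pi ** 2 / 108.0) * N_flux_max ** 2 / t(disc) ** 2

print(n_vacua_estimate(192 * 10, 1, -1, 1))
```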
Here, \(\gcd(l,m,n)=1\) is adopted in the analysis of Ref. [8], but the results agree with ours, as pointed out in Ref. [9]. Remarkably, \(\tau=\omega\), corresponding to \((l,m,n)=(1,-1,1)\), is invariant under the discrete \(\mathbb{Z}_{3}\) symmetry generated by
\[\{1,\qquad ST,\qquad(ST)^{2}\}, \tag{20}\]
where \(S\) and \(T\) are generators of \(SL(2,\mathbb{Z})_{\tau}\):
\[S\,:\tau\rightarrow-\frac{1}{\tau},\qquad T\,:\tau\rightarrow\tau+1, \tag{2.21}\]
with \((ST)^{3}=1\). Thus, the effective action in Type IIB flux landscape enjoys the discrete \(\mathbb{Z}_{3}\) symmetry. However, it is unclear whether such a \(\mathbb{Z}_{3}\) symmetry still remains in the effective action with the SM spectra. In the next section, we will engineer the semi-realistic SM-like models on magnetized D-branes and discuss the role of discrete symmetry.
### Distribution of \(g\)-generation models without discrete \(B\) field
In addition to O3- and O7-planes located at fixed loci, we construct semi-realistic models on \(N_{a}\) stacks of magnetized D\((3+2n)\)-branes wrapping \(2n\)-cycles on \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime})\) orientifolds. We turn on the background \(U(1)_{a}\) gauge field strength \(F_{a}\) on \((T^{2})_{i}\),
\[\frac{m_{a}^{i}}{2\pi}\int_{T_{i}^{2}}F_{a}^{i}=n_{a}^{i}, \tag{2.22}\]
where the wrapping numbers of the \(N_{a}\) D\((3+2n)\)-branes on \((T^{2})_{i}\) are represented by integers \(m_{a}^{i}\), of which 0, 1, 2 and 3 are non-vanishing for D3-, D5-, D7- and D9-branes, respectively. Note that \(\{n_{a}^{i},m_{a}^{i}\}\) for each \(a\) and \(i\) are assumed to be coprime, and only the wrapping number \(m_{a}^{i}\) transforms under \(\Omega\mathcal{R}\), as \(\Omega\mathcal{R}\,:\,m_{a}^{i}\rightarrow-m_{a}^{i}\). For practical purposes, let us
introduce the homology classes of each \((T^{2})_{i}\), that is, \([{\bf 0}]_{i}\) and \([{\bf T^{2}}]_{i}\) for the class of point and of the two-torus with \([{\bf 0}]_{i}\cdot[{\bf T^{2}}]_{i}=-[{\bf T^{2}}]_{i}\cdot[{\bf 0}]_{i}=1\). Then, the stack \(a\) of D-branes has an associated homology class:
\[[{\bf Q}_{a}]=\Pi_{i=1}^{3}\left(n_{a}^{i}[{\bf 0}]_{i}+m_{a}^{i}[{\bf T^{2}}] _{i}\right). \tag{23}\]
Similarly, the 64 O3- and 4 O7\({}_{i}\) planes are expressed by RR charges -32 times the following homology classes:
\[[{\bf Q}_{O3}]=[{\bf 0}]_{1}\times[{\bf 0}]_{2}\times[{\bf 0}]_{3}, [{\bf Q}_{O7_{1}}]=-[{\bf 0}]_{1}\times[{\bf T^{2}}]_{2}\times[{\bf T^{2}}]_{3},\] \[[{\bf Q}_{O7_{2}}]=-[{\bf T^{2}}]_{1}\times[{\bf 0}]_{2} \times[{\bf T^{2}}]_{3}, [{\bf Q}_{O7_{3}}]=-[{\bf T^{2}}]_{1}\times[{\bf T^{2}}]_{2}\times[{\bf 0 }]_{3}. \tag{24}\]
Remarkably, these gauge fluxes will lead to semi-realistic D-brane models, that is, gauge groups \(G_{\rm SM}\times G_{\rm hidden}\) with chiral spectra. In particular, the index theorem tells us that the number of chiral zero-modes between two stacks \(a\) and \(b\) of D-branes on \(T^{6}=(T^{2})_{1}\times(T^{2})_{2}\times(T^{2})_{3}\) is counted by
\[I_{ab}=[{\bf Q}_{a}]\cdot[{\bf Q}_{b}]=\Pi_{i=1}^{3}(n_{a}^{i}m_{ b}^{i}-n_{b}^{i}m_{a}^{i}). \tag{25}\]
However, some of the zero-modes are projected out by the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{\prime}\) projection (1). Indeed, the internal fermionic wavefunctions transform as
\[\theta : \psi(z_{1},z_{2},z_{3})\to s_{1}s_{2}\psi(-z_{1},-z_{2},z_{3}),\] \[\theta^{\prime} : \psi(z_{1},z_{2},z_{3})\to s_{2}s_{3}\psi(z_{1},-z_{2},-z_{3}), \tag{26}\]
where \(s_{i}={\rm sign}(I_{ab}^{i})\) corresponds to the chirality on each torus and its product \((s_{1}s_{2}s_{3})\) corresponds to the 4D chirality. Thus, there exist \(\mathbb{Z}_{2}\)-even and -odd modes on each torus whose explicit form of zero-mode wavefunctions is shown later. Note that the two conditions should be consistent with each other; that is,
\[\psi(z_{1},z_{2},z_{3})=s_{1}s_{2}\psi(-z_{1},-z_{2},z_{3})=s_{2} s_{3}\psi(z_{1},-z_{2},-z_{3}), \tag{27}\]
from which allowed zero-modes are given by a specific combination of \(\mathbb{Z}_{2}\)-even modes (\(\psi_{\rm even}^{i}\)) and \(\mathbb{Z}_{2}\)-odd zero-modes (\(\psi_{\rm odd}^{i}\)) on \((T^{2})_{i}\):
\[\psi=\{\psi_{\rm even}^{i}\psi_{\rm even}^{j}\psi_{\rm even}^{k}, \ \psi_{\rm even}^{i}\psi_{\rm even}^{j}\psi_{\rm odd}^{k},\ \psi_{\rm even}^{i}\psi_{\rm odd}^{j}\psi_{\rm even}^{k},\ \psi_{\rm odd}^{i}\psi_{\rm even}^{j}\psi_{\rm even}^{k},\] \[\psi_{\rm odd}^{i}\psi_{\rm odd}^{j}\psi_{\rm odd}^{k},\ \psi_{\rm odd}^{i}\psi_{\rm odd}^{j}\psi_{\rm even}^{k},\ \psi_{\rm odd}^{i}\psi_{\rm even}^{j}\psi_{\rm odd}^{k},\ \psi_{\rm even}^{i}\psi_{\rm odd}^{j}\psi_{\rm odd}^{k}\}, \tag{28}\]
with \(i\neq j\neq k\). Since the number of these \(\mathbb{Z}_{2}\)-even and -odd zero-modes is counted by [24]
\[I_{\rm even}^{i}=\frac{1}{2}(I_{ab}^{i}+s_{i}f_{i}),\qquad I_{ \rm odd}^{i}=\frac{1}{2}(I_{ab}^{i}-s_{i}f_{i}), \tag{29}\]
with \(f_{i}=1\) for odd \(I_{ab}^{i}\) and \(f_{i}=2\) for even \(I_{ab}^{i}\), the total number of zero-modes is still described by
\[I_{ab}=\Pi_{i=1}^{3}(I_{\rm even}^{i}+I_{\rm odd}^{i}). \tag{30}\]
Here, we assume \(I^{i}_{ab}\neq 0\). If one of the indices is 0, e.g., \(I^{3}_{ab}=0\), the spectrum is not chiral, and the index is counted by [24]
\[I_{ab}=\left\{\begin{array}{ll}I^{1}_{\rm even}I^{2}_{\rm even}-I^{1}_{\rm odd }I^{2}_{\rm odd}&(s_{1}>0,\,s_{2}>0)\\ I^{1}_{\rm even}I^{2}_{\rm odd}-I^{1}_{\rm odd}I^{2}_{\rm even}&(s_{1}>0,\,s_ {2}<0)\\ I^{1}_{\rm odd}I^{2}_{\rm even}-I^{1}_{\rm even}I^{2}_{\rm odd}&(s_{1}<0,\,s_ {2}>0)\\ I^{1}_{\rm odd}I^{2}_{\rm odd}-I^{1}_{\rm even}I^{2}_{\rm even}&(s_{1}<0,\,s_ {2}<0)\end{array}\right.. \tag{31}\]
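The counting in Eqs. (29) and (30) is straightforward to implement; a minimal sketch for the chiral case with non-vanishing intersection numbers is:

```python
def even_odd(I):
    """Numbers of Z2-even and Z2-odd zero-modes on one torus, Eq. (29)."""
    s = 1 if I > 0 else -1
    f = 1 if I % 2 else 2
    return (I + s * f) // 2, (I - s * f) // 2

def chiral_index(I1, I2, I3):
    """Total number of chiral zero-modes, Eq. (30), assuming all I_i != 0."""
    return I1 * I2 * I3

print(even_odd(3))              # (2, 1): two even and one odd mode
print(even_odd(4))              # (3, 1)
print(chiral_index(3, 1, -1))   # the net 4D chirality is carried by the sign
```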
On a stack of \(N_{a}\) D-branes that does not lie on one of the O-planes, the mass spectrum consists of \(U(N_{a}/2)\) vector multiplets and three adjoint chiral multiplets (called an \(aa\) sector)3. On the other hand, when a stack of \(2N_{a}\) D-branes lies on one of the O-planes, the mass spectrum consists of \(USp(N_{a})\) vector multiplets and three antisymmetric chiral multiplets, which we also call an \(aa\) sector. In addition, there are chiral multiplets arising from intersections of two different stacks \(a\) and \(b\) of D-branes, or of the stack \(a\) and its orientifold image \(a^{\prime}\), as summarized in Table 1.
Footnote 3: The \(U(N_{a})\) gauge group reduces to \(U(N_{a}/2)\) due to the orbifold projection.
Since the magnetic fluxes induce the D3- and D7-brane charges, we have to be careful about their tadpole cancellation conditions:
\[D3 : \sum_{a}N_{a}n^{1}_{a}n^{2}_{a}n^{3}_{a}+\frac{1}{2}N_{\rm flux}=16,\] \[D7_{1} : \sum_{a}N_{a}n^{1}_{a}m^{2}_{a}m^{3}_{a}=-16,\] \[D7_{2} : \sum_{a}N_{a}n^{2}_{a}m^{1}_{a}m^{3}_{a}=-16,\] \[D7_{3} : \sum_{a}N_{a}n^{3}_{a}m^{1}_{a}m^{2}_{a}=-16. \tag{32}\]
These conditions ensure the cancellation of 4D chiral anomalies, but K-theory imposes extra constraints. Indeed, probe D3- and D7-branes with \(USp(2)\simeq SU(2)\) gauge group suffer from a global gauge anomaly if the number of 4D fermions charged in the fundamental representation of \(SU(2)\) is odd [25]. This imposes the following K-theory constraints [26]:
\[\left\{\sum_{a}N_{a}n^{1}_{a}n^{2}_{a}n^{3}_{a},\,\sum_{a}N_{a}n^{1}_{a}m^{2}_ {a}m^{3}_{a},\,\sum_{a}N_{a}n^{2}_{a}m^{1}_{a}m^{3}_{a},\,\sum_{a}N_{a}n^{3}_ {a}m^{1}_{a}m^{2}_{a}\right\}\in 4\,\mathbb{Z}. \tag{33}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline Sectors & Representations & Multiplicities \\ \hline \(ab+ba\) & \((\square_{a},\overline{\square}_{b})\) & \(I_{ab}\) \\ \(ab^{\prime}+b^{\prime}a\) & \((\square_{a},\square_{b})\) & \(I_{ab^{\prime}}\) \\ \(aa^{\prime}+a^{\prime}a\) & symmetric rep. & \(\frac{1}{2}(I_{aa^{\prime}}-4I_{aO})\) \\ \(aa^{\prime}+a^{\prime}a\) & anti-symmetric rep. & \(\frac{1}{2}(I_{aa^{\prime}}+4I_{aO})\) \\ \hline \end{tabular}
\end{table}
Table 1: Multiplicities of chiral zero-modes in each sector.
Since magnetized D9-branes with negative \(n_{1,2,3}\) carry anti-D3- and anti-D7-brane charges, it is possible to construct semi-realistic 3-generation models on the flux background (see, e.g., [27; 28]). In our analysis, we do not introduce anti-D3-branes to satisfy the tadpole cancellation conditions, but it would also be possible to construct realistic models by taking into account the effect of anti-D3-brane annihilation with flux [29]. Note that \({\cal N}=1\) supersymmetry on the orientifold background is preserved when the following condition is satisfied [30]:
\[\sum_{i}\biggl{[}\tan^{-1}\left(\frac{m_{a}^{i}}{n_{a}^{i}}{ \cal A}_{i}\right)+\theta(n_{a}^{i})\pi\biggr{]}=0\qquad(\bmod 2\pi), \tag{34}\]
with
\[\theta(n_{a}^{i})=\left\{\begin{array}{ll}0&(n_{a}^{i}\geq 0) \\ 1&(n_{a}^{i}<0)\end{array},\right. \tag{35}\]
where \({\cal A}_{i}\) denote the area of the torus \((T^{2})_{i}\).
For concreteness, let us consider the local brane configurations with SM spectra as shown in Table 2 [27], leading to \(g\) generations of quarks and leptons, \(I_{ab}=I_{ac}=g\).4
Footnote 4: Here and in what follows, the generation number refers to the sum of \(\mathbb{Z}_{2}\)-even and -odd modes, without specifying them separately.
The supersymmetry condition (34) is satisfied when
\[{\cal A}_{2}={\cal A}_{3}. \tag{36}\]
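The supersymmetry condition (34) is easy to check numerically for a given set of wrapping numbers and torus areas. A small sketch, applied to stack \(a\) of Table 2, uses atan2, which reproduces \(\tan^{-1}(m_{a}^{i}\mathcal{A}_{i}/n_{a}^{i})+\theta(n_{a}^{i})\pi\) modulo \(2\pi\):

```python
import math

def is_susy(brane, areas, tol=1e-9):
    """Check the SUSY condition (34) for a brane given as a list of (n_i, m_i)."""
    total = sum(math.atan2(m * A, n) for (n, m), A in zip(brane, areas))
    rem = total % (2 * math.pi)
    return math.isclose(rem, 0.0, abs_tol=tol) or math.isclose(rem, 2 * math.pi, abs_tol=tol)

# Stack a of Table 2: (1,0), (g,1), (g,-1) is supersymmetric for A_2 = A_3.
g = 3
print(is_susy([(1, 0), (g, 1), (g, -1)], [1.0, 2.5, 2.5]))   # True
print(is_susy([(1, 0), (g, 1), (g, -1)], [1.0, 2.5, 1.0]))   # False
```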
Furthermore, some of \(U(1)s\) become massive by absorbing axions associated with Ramond-Ramond fields through the Green-Schwarz mechanism. Indeed, the dimensional reduction of the Chern-Simons couplings in the D-brane action induces the corresponding 4D couplings:
\[N_{a}m_{a}^{1}n_{a}^{2}n_{a}^{3}\int_{R^{1,3}}C_{2}\wedge F_{a}, \qquad N_{a}m_{a}^{i}n_{a}^{j}n_{a}^{k}\int_{R^{1,3}}C_{2}^{i}\wedge F_{a},\] \[N_{a}m_{a}^{j}m_{a}^{k}n_{a}^{i}\int_{R^{1,3}}{\cal C}_{2}^{i} \wedge F_{a},\qquad N_{a}m_{a}^{1}m_{a}^{2}m_{a}^{3}\int_{R^{1,3}}{\cal C}_{2} \wedge F_{a}, \tag{37}\]
with
\[C_{2}^{i}:=\int_{(T^{2})_{i}}C_{4},\qquad{\cal C}_{2}^{i}:=\int _{(T^{2})_{j}\times(T^{2})_{k}}C_{6}\qquad{\cal C}_{2}:=\int_{(T^{6})}C_{8}. \tag{38}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(N_{\alpha}\) & Gauge group & \((n_{\alpha}^{1},m_{\alpha}^{1})\) & \((n_{\alpha}^{2},m_{\alpha}^{2})\) & \((n_{\alpha}^{3},m_{\alpha}^{3})\) \\ \hline \hline \(N_{a}=6\) & \(SU(3)_{C}\) & (1,0) & \((g,1)\) & \((g,-1)\) \\ \hline \(N_{b}=2\) & \(USp(2)_{L}\) & (0,1) & \((1,0)\) & \((0,-1)\) \\ \hline \(N_{c}=2\) & \(USp(2)_{R}\) & (0,1) & \((0,-1)\) & \((1,0)\) \\ \hline \(N_{d}=2\) & \(U(1)_{d}\) & (1,0) & \((g,1)\) & \((g,-1)\) \\ \hline \end{tabular}
\end{table}
Table 2: D-brane configurations leading to left-right symmetric Minimal Supersymmetric Standard Model (MSSM). The magnetic flux \(g\) determines the generations of quark and lepton chiral multiplets in the visible sector.
We have also supposed the existence of magnetized D9-branes in the hidden sector to satisfy the tadpole cancellation conditions. It means that the D3-brane charge induced by the magnetic flux on the D9-branes, \({\cal Q}_{D3}^{\rm hid}\), satisfies
\[{\cal Q}_{D3}^{\rm hid}=16-\frac{N_{\rm flux}}{2}-8g^{2},\leftrightarrow 8g^{2}=-{ \cal Q}_{D3}^{\rm hid}+16-\frac{N_{\rm flux}}{2}. \tag{39}\]
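The interplay between \(g\), \(N_{\rm flux}\) and the hidden-sector charge described by Eq. (39) can be scanned with a few lines of code; the sketch below simply enumerates the compatible pairs \((g,N_{\rm flux})\) for a given bound \(|{\cal Q}_{D3}^{\rm hid}|\leq Q_{\rm max}\), using \(N_{\rm flux}\in 192\,\mathbb{Z}_{\geq 0}\):

```python
def allowed_generations(Q_max, N_flux_max):
    """Pairs (g, N_flux) compatible with 8 g^2 = -Q_hid + 16 - N_flux/2, Eq. (39)."""
    allowed = set()
    for N_flux in range(0, N_flux_max + 1, 192):
        for Q_hid in range(-Q_max, Q_max + 1):
            rhs = -Q_hid + 16 - N_flux // 2
            if rhs <= 0:
                continue
            g = round((rhs / 8) ** 0.5)
            if g > 0 and 8 * g * g == rhs:
                allowed.add((g, N_flux))
    return sorted(allowed)

print(allowed_generations(Q_max=400, N_flux_max=192 * 2))
```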
Since there are several possibilities for the construction of hidden sectors, we freely change the value of \({\cal Q}_{D3}^{\rm hid}\) to reveal the mutual relation between the generation number \(g\) and the flux quanta \(N_{\rm flux}\).5 In Fig. 2, we vary the maximum value of \({\cal Q}_{D3}^{\rm hid}\) as \(|{\cal Q}_{D3}^{\rm hid}|=400,1200,2000\), for each of which we analyze the distribution of flux vacua at fixed \(\tau\) with respect to \(g\). It turns out that the number of flux vacua increases when \(g\) is smaller. Thus, a small generation number is favored in the string landscape. Furthermore, when we restrict ourselves to three-generation models, that is, \(g=3\), left-right MSSM-like models are still peaked at the \(\mathbb{Z}_{3}\) fixed point \(\tau=\omega\), as shown in Fig. 3. This behavior is similar to the analysis of Sec. 2.1, but the percentage of three-generation vacua clustered there differs from before. One can further study the Yukawa couplings derived in Type IIB magnetized D-brane models [10]. In the current brane configuration, the Yukawa couplings of quarks and leptons are of rank one, and the flavor structure is trivial because it arises from two different tori. Thus, we move on to another magnetized D-brane model, which induces a non-trivial flavor structure for quarks and leptons.
Footnote 5: See, e.g., [28], for the numerical analysis searching \({\cal Q}_{D3}^{\rm hid}\).
Figure 2: The numbers of models as a function of the generation number \(g\) at \(\tau=i\) and \(\tau=\omega\), respectively. Note that there exists \(\mathbb{Z}_{2}\) symmetry at \(\tau=i\) generated by \(\{1,S\}\). Here, the vertical axis represents the ratio of the number of models to the total number of models. There are three plots in each panel, and each of them corresponds to the maximum value of the D3-brane charge in the hidden sector \(|Q_{D3}^{\rm hid}|=400,1200,2000\).
### Distribution of \(g\)-generation models with discrete \(B\) field
In this section, we add a discrete value of the Kalb-Ramond \(B\)-field along one of the two-tori [31], in particular \((T^{2})_{3}\), corresponding to a tilted torus in the T-dual IIA string theory. Since the \(B\)-field induces a half-integer flux, the magnetic flux on the third torus is modified as \(\tilde{n}_{a}^{3}=n_{a}^{3}+\frac{1}{2}m_{a}^{3}\). Accordingly, the tadpole cancellation conditions are given by [32]6
Footnote 6: When we consider a different tilted direction, the effective flux is given by \(\tilde{m}_{a}^{3}=m_{a}^{3}+\frac{1}{2}n_{a}^{3}\) as discussed in the T-dual IIA side [33].
\[D3 : \sum_{a}N_{a}n_{a}^{1}n_{a}^{2}\tilde{n}_{a}^{3}+\frac{1}{2}N_{ \rm flux}=8,\] \[D7_{1} : \sum_{a}N_{a}n_{a}^{1}m_{a}^{2}m_{a}^{3}=-16,\] \[D7_{2} : \sum_{a}N_{a}n_{a}^{2}m_{a}^{1}m_{a}^{3}=-16,\] \[D7_{3} : \sum_{a}N_{a}\tilde{n}_{a}^{3}m_{a}^{1}m_{a}^{2}=-8. \tag{40}\]
Figure 3: The numbers of stable flux vacua with \(g=3\) generation of quarks/leptons on the fundamental domain of \(\tau\) for the maximum value of D3-brane charge \(Q_{D3}^{\rm hid}|_{\rm max}\)\(=400\) in the left panel and for \(Q_{D3}^{\rm hid}|_{\rm max}\)\(=1200\) in the right panel, respectively.
The SUSY condition (2.34) and the K-theory condition (2.33) are also written in terms of \(\tilde{n}_{a}^{3}\). For concreteness, let us consider the local brane configurations with SM spectra as shown in Table 3, leading to \(g\) generations of quarks and leptons, \(I_{ab}=I_{ca}=g\).7
Footnote 7: Similar brane configurations are discussed in T-dual Type IIA string theory, e.g., [34].
\[g{\cal A}_{1}={\cal A}_{2}=2{\cal A}_{3}. \tag{2.41}\]
In this model, the three \(U(1)\)s in the gauge symmetry \(U(4)_{C}\times U(2)_{L}\times U(2)_{R}\) absorb axions through the Green-Schwarz couplings (2.37). The remaining gauge symmetry is \(SU(4)_{C}\times SU(2)_{L}\times SU(2)_{R}\). Furthermore, the Pati-Salam gauge symmetry can be broken to the MSSM gauge group by splitting the \(a\) and \(c\) stacks of D-branes, but we leave the detailed study of the open string moduli for future work.
For the same reason as the analysis of the previous section, we allow several values of \({\cal Q}_{D3}^{\rm hid}\) satisfying
\[{\cal Q}_{D3}^{\rm hid}=8-\frac{N_{\rm flux}}{2}-2g,\leftrightarrow 2g=-{ \cal Q}_{D3}^{\rm hid}+8-\frac{N_{\rm flux}}{2}, \tag{2.42}\]
to reveal the mutual relation between the generation number \(g\) and the flux quanta \(N_{\rm flux}\). In Fig. 4, we vary the maximum value of \({\cal Q}_{D3}^{\rm hid}\) as \(|{\cal Q}_{D3}^{\rm hid}|=200,400,800\), for each of which we analyze the distribution of flux vacua at fixed \(\tau\) with respect to \(g\). It turns out that the number of flux vacua also increases when \(g\) is smaller, although the behavior is different from the previous analysis. Thus, the string landscape leads to a small generation number. Furthermore, when we restrict ourselves to three-generation models, that is, \(g=3\), Pati-Salam models are still peaked at the \(\mathbb{Z}_{3}\) fixed point \(\tau=\omega\), similarly to the analysis of Sec. 2.1. In contrast to the previous models in Sec. 2.2, the Yukawa couplings of quarks and leptons are of rank 3 and the flavor structure is non-trivial, since it originates from a single torus. We will discuss the relation between flavor symmetries and modular symmetries in the next section.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(N_{\alpha}\) & Gauge group & \((n_{\alpha}^{1},m_{\alpha}^{1})\) & \((n_{\alpha}^{2},m_{\alpha}^{2})\) & \((\tilde{n}_{\alpha}^{3},m_{\alpha}^{3})\) \\ \hline \hline \(N_{a}=8\) & \(U(4)_{C}\) & \((0,-1)\) & \((1,1)\) & \((1/2,1)\) \\ \hline \(N_{b}=4\) & \(U(2)_{L}\) & \((g,1)\) & \((1,0)\) & \((1/2,-1)\) \\ \hline \(N_{c}=4\) & \(U(2)_{R}\) & \((g,-1)\) & \((0,1)\) & \((1/2,-1)\) \\ \hline \end{tabular}
\end{table}
Table 3: D-brane configurations leading to a Pati-Salam-like model. The magnetic flux \(g\) determines the generations of quark and lepton chiral multiplets in the visible sector, where \(\tilde{n}=n+m/2\).
Figure 4: The numbers of models as a function of the generation number \(g\) at \(\tau=i\) and \(\tau=\omega\), respectively. Here, the vertical axis represents the ratio of the number of models to the total number of models. There are three plots in each panel, and each of them corresponds to the maximum value of the D3-brane charge in the hidden sector \(|Q_{D3}^{\rm hid}|=200,400,800\).
Figure 5: The numbers of stable flux vacua with \(g=3\) generation of quarks/leptons on the fundamental domain of \(\tau\) for the maximum value of D3-brane charge \(Q_{D3}^{\rm hid}|_{\rm max}\)\(=200\) in the left panel and for \(Q_{D3}^{\rm hid}|_{\rm max}\)\(=400\) in the right panel, respectively.
## 3 Eclectic Flavor Symmetry in Type IIB flux vacua
### Metaplectic modular symmetry
Since the Yukawa couplings of quarks and leptons are described by modular forms of half-integer weight, they are formulated in the context of the metaplectic group \(Mp(2,\mathbb{Z})\). Following Ref. [35], let us briefly review the notion of \(Mp(2,\mathbb{Z})\), which is a twofold covering group of \(SL(2,\mathbb{Z})\):
\[Mp(2,\mathbb{Z})=\left\{\tilde{\gamma}=(\gamma,\varphi(\gamma, \tau))\,\Bigg{|}\,\,\gamma=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in SL(2,\mathbb{Z}),\quad\varphi(\gamma,\tau)^{2}=(c \tau+d)\right\}, \tag{3.1}\]
where the multiplication law is defined as
\[(\gamma_{1},\varphi(\gamma_{1},\tau))(\gamma_{2},\varphi(\gamma_{ 2},\tau))=(\gamma_{1}\gamma_{2},\varphi(\gamma_{1},\gamma_{2}\tau)\varphi( \gamma_{2},\tau)). \tag{3.2}\]
When we redefine \(\varphi(\gamma,\tau)=\pm(c\tau+d)^{1/2}=:\epsilon J_{1/2}(\gamma,\tau)\) with \(\epsilon=\pm 1\), the above law is written by
\[(\gamma_{1},\epsilon_{1}J_{1/2}(\gamma_{1},\tau))(\gamma_{2}, \epsilon_{2}J_{1/2}(\gamma_{2},\tau))=(\gamma_{1}\gamma_{2},\epsilon_{1} \epsilon_{2}\zeta_{1/2}(\gamma_{1},\gamma_{2})J_{1/2}(\gamma_{1}\gamma_{2}, \tau)), \tag{3.3}\]
where \(\zeta_{1/2}(\gamma_{1},\gamma_{2})=\{+1,-1\}\) (for more details, see, e.g., Appendix A of Ref. [35]). For practical purposes, we present some explicit forms of \(\zeta_{1/2}(\gamma_{1},\gamma_{2})\):
\[\zeta_{1/2}(\gamma,T) =\zeta_{1/2}(T,\gamma)=1,\] \[\zeta_{1/2}(\gamma,S) =\left\{\begin{array}{ll}-1,&\quad(c<0,d\leq 0)\\ 1,&\quad(\text{others})\end{array}\right.,\qquad\zeta_{1/2}(S,\gamma)=\left\{ \begin{array}{ll}-1,&\quad(a\leq 0,b<0)\\ 1,&\quad(\text{others})\end{array}\right.. \tag{3.4}\]
The generators of \(Mp(2,\mathbb{Z})\) are written in terms of \(SL(2,\mathbb{Z})\) generators \(S\) and \(T\):
\[\widetilde{S}=(S,-\sqrt{-\tau})=\left(\left(\begin{array}{cc}0&1 \\ -1&0\end{array}\right),-\sqrt{-\tau}\right),\qquad\widetilde{T}=(T,1)=\left( \left(\begin{array}{cc}1&1\\ 0&1\end{array}\right),1\right), \tag{3.5}\]
satisfying
\[\widetilde{S}^{2}=\tilde{R},\qquad(\widetilde{S}\tilde{T})^{3}= \tilde{R}^{4}=1,\qquad\tilde{T}\tilde{R}=\tilde{R}\tilde{T}. \tag{3.6}\]
Here, \(\tilde{R}\) and \(\widetilde{S}\tilde{T}\) are of the form
\[\tilde{R}:=\widetilde{S}^{2}=(S^{2},-i)=\left(\left(\begin{array}[] {cc}-1&0\\ 0&-1\end{array}\right),-i\right),\] \[\widetilde{S}\tilde{T}=(ST,-\sqrt{-\tau-1})=\left(\left(\begin{array []{cc}0&1\\ -1&-1\end{array}\right),-\sqrt{-\tau-1}\right). \tag{3.7}\]
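The group law (3.2) and the relations (3.6)-(3.7) can be verified numerically by representing a metaplectic element as a pair of a \(2\times 2\) matrix and an automorphy function. A minimal sketch, checking the relations pointwise at a sample \(\tau\), is:

```python
import cmath

def mat_mul(g1, g2):
    (a1, b1, c1, d1), (a2, b2, c2, d2) = g1, g2
    return (a1*a2 + b1*c2, a1*b2 + b1*d2, c1*a2 + d1*c2, c1*b2 + d1*d2)

def act(g, tau):
    a, b, c, d = g
    return (a*tau + b) / (c*tau + d)

def compose(e1, e2):
    """Multiplication law (3.2): (g1, p1)(g2, p2) = (g1 g2, p1(g2 tau) p2(tau))."""
    (g1, p1), (g2, p2) = e1, e2
    return (mat_mul(g1, g2), lambda tau: p1(act(g2, tau)) * p2(tau))

S_t = ((0, 1, -1, 0), lambda tau: -cmath.sqrt(-tau))   # S~ = (S, -sqrt(-tau))
T_t = ((1, 1, 0, 1),  lambda tau: 1.0)                 # T~ = (T, 1)

R_t = compose(S_t, S_t)                                # R~ = S~^2, with phi = -i
ST  = compose(S_t, T_t)
ST3 = compose(ST, compose(ST, ST))                     # (S~ T~)^3
R4  = compose(compose(R_t, R_t), compose(R_t, R_t))    # R~^4

tau0 = 0.37 + 1.21j
for name, (g, p) in [("(S~T~)^3", ST3), ("R~^4", R4)]:
    # both should return the identity matrix (1, 0, 0, 1) and phi ~ 1
    print(name, g, p(tau0))
```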
As presented above, there exist two elements \((\gamma,\pm J_{1/2}(\gamma,\tau))\) of \(Mp(2,\mathbb{Z})\) for each \(\gamma\in SL(2,\mathbb{Z})\). Furthermore, \(\tilde{R}^{2}\) leads to \(\tilde{R}^{2}(\gamma,J_{1/2}(\gamma,\tau))=(\gamma,-J_{1/2}(\gamma,\tau))\). Thus, the metaplectic group is a twofold cover group of \(SL(2,\mathbb{Z})\), that is, \(SL(2,\mathbb{Z})\simeq Mp(2,\mathbb{Z})/\mathbb{Z}_{2}\) with
\(\mathbb{Z}_{2}=\{1,\tilde{R}^{2}\}\). It was known that one can define the finite modular groups in both \(SL(2,\mathbb{Z})\) and \(Mp(2,\mathbb{Z})\).
Let us rewrite \(SL(2,\mathbb{Z})\), its quotient group and metaplectic group by
\[\Gamma:=SL(2,\mathbb{Z}),\qquad\bar{\Gamma}:=\Gamma/\{\pm\mathbb{I}\},\qquad\tilde {\Gamma}:=Mp(2,\mathbb{Z}), \tag{3.8}\]
respectively. Note that the complex structure moduli space of the torus \(\tau\) is governed by \(\bar{\Gamma}\) due to the fact that \(\tau\) is invariant under \(S^{2}\). By introducing the principal congruence subgroups:
\[\Gamma(N) =\left\{\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\Gamma\left|a\equiv d\equiv 1,\qquad b\equiv c\equiv 0\,( \mathrm{mod}\,N)\right.\right\},\] \[\bar{\Gamma}(N) =\Gamma(N)/\{\pm\mathbb{I}\},\] \[\tilde{\Gamma}(4N) =\left\{\tilde{\gamma}=(\gamma,v(\gamma)J_{1/2}(\gamma,\tau)) \right|\gamma\in\Gamma(4N)\}\,, \tag{3.9}\]
with \(v(\gamma)=\left(\frac{c}{d}\right)\) being the Kronecker symbol, one can define the finite modular groups:
\[\Gamma_{N}:=\bar{\Gamma}/\bar{\Gamma}(N)=\langle S,T|S^{2}=(ST)^{3}=T^{N}=1\rangle, \tag{3.10}\]
where \(\Gamma_{2,3,4,5}\) correspond to \(S_{3},A_{4},S_{4},A_{5}\) discrete groups, respectively.8 In addition, the finite metaplectic modular groups are given by
Footnote 8: To construct the finite modular groups with \(N>5\), we require an additional relation.
\[\tilde{\Gamma}_{4N}:=\tilde{\Gamma}/\tilde{\Gamma}(4N), \tag{3.11}\]
where the generators satisfy9
Footnote 9: Hereafter, \(\widetilde{S}\) and \(\widetilde{T}\) denotes the generators of the finite metaplectic modular groups.
\[\widetilde{S}^{2}=\tilde{R},\qquad(\widetilde{S}\widetilde{T})^{3}=\tilde{R} ^{4}=1,\qquad\tilde{T}\tilde{R}=\tilde{R}\tilde{T},\qquad\tilde{T}^{4N}= \mathbb{I}, \tag{3.12}\]
and additional relations are required to ensure the finiteness for \(N>1\), e.g.,
\[\widetilde{S}^{5}\tilde{T}^{6}\widetilde{S}\tilde{T}^{4}\widetilde{S}\tilde{T }^{2}\widetilde{S}\tilde{T}^{4}=\mathbb{I},\qquad(\mathrm{for}\,N=2), \tag{3.13}\]
for \(\tilde{\Gamma}_{4N=8}\) of order 768 ([768, 1085324] in GAP system [36]),
\[\widetilde{S}\tilde{T}^{3}\widetilde{S}\tilde{T}^{-2}\widetilde{S}^{-1} \tilde{T}\widetilde{S}\tilde{T}^{-3}\widetilde{S}^{-1}\tilde{T}^{2}\widetilde {S}^{-1}\tilde{T}^{-1}=\mathbb{I},\qquad(\mathrm{for}\,N=3), \tag{3.14}\]
for \(\tilde{\Gamma}_{4N=12}\) of order 2304, respectively. Under the finite modular groups, modular forms of the modular weight \(k/2\) and level \(4N\) transform as
\[f_{\bar{\alpha}}(\tilde{\gamma}\tau)=\varphi(\gamma,\tau)^{k}\rho_{\mathbf{r} }(\tilde{\gamma})_{\bar{\alpha}\bar{\beta}}f_{\bar{\beta}}(\tau), \tag{3.15}\]
where \(\rho_{\mathbf{r}}(\tilde{\gamma})_{\bar{\alpha}\bar{\beta}}\) denotes an irreducible representation matrix in \(\tilde{\Gamma}_{4N}\).
### \(T^{2}\) with magnetic fluxes
As discussed in Secs. 2.2 and 2.3, the magnetic fluxes generate semi-realistic MSSM-like models with 3 generations of quarks and leptons. In the following, we address the flavor structure of chiral zero-modes with an emphasis on the transformations of these wavefunctions under the modular symmetry. It is known that magnetic fluxes on extra-dimensional spaces induce degenerate chiral zero-modes, whose number is counted by the index theorem.
For concreteness, let us begin with the six-dimensional (6D) Super Yang-Mills theory on \(T^{2}\). The Kaluza-Klein reduction of 6D Majorana-Weyl spinor \(\lambda\) is given by
\[\lambda(x,z)=\sum_{n}\phi_{n}(x)\otimes\psi_{n}(z), \tag{3.16}\]
where \(\psi_{n}(z)\) denotes the \(n\)-th excited mode of the two-dimensional (2D) Weyl spinor on \(T^{2}\). In particular, we focus on the zero-mode wavefunctions \(\psi(z)\):10
Footnote 10: Here and in what follows, we omit the zero-mode index.
\[\psi(z)=\left(\begin{array}{c}\psi_{+}(z)\\ \psi_{-}(z)\end{array}\right), \tag{3.17}\]
where \(\psi_{+}\) and \(\psi_{-}\) denote the positive and negative chirality modes on \(T^{2}\). The \(U(1)\) magnetic flux is given by
\[F=\frac{i\pi M}{\text{Im}z}dz\wedge d\bar{z}, \tag{3.18}\]
obtained by the corresponding vector potential
\[A=\frac{\pi M}{\text{Im}z}\text{Im}((\bar{z}+\bar{\zeta})dz), \tag{3.19}\]
with \(\zeta\) being a Wilson line phase. Note that the boundary conditions of the gauge field as well as the 2D Weyl spinors are respectively chosen as
\[A(z+1,\bar{z}+1) =A(z,\bar{z})+d\chi_{1}(z,\bar{z}),\quad A(z+\tau,\bar{z}+\bar{ \tau})=A(z,\bar{z})+d\chi_{2}(z,\bar{z})\] \[\psi(z+1) =e^{i\chi_{1}}\psi(z),\hskip 72.27pt\psi(z+\tau)=e^{i\chi_{2}} \psi(z), \tag{3.20}\]
with
\[\chi_{1}=\frac{\pi M}{\text{Im}\tau}\text{Im}(z+\zeta),\qquad \chi_{2}=\frac{\pi M}{\text{Im}\tau}\text{Im}\big{[}\bar{\tau}(z+\zeta)\big{]}. \tag{3.21}\]
By solving the Dirac equation for the massless mode with \(U(1)\) charge \(q=1\), we find \(|M|\) degenerate zero-mode solutions; \(\psi_{+}(z)\) for \(M>0\) and \(\psi_{-}(z)\) for \(M<0\). Specifically, \(|M|\) degenerate zero-mode wavefunctions are written in terms of Jacobi theta function \(\vartheta\) and the torus area \(\mathcal{A}\)[10]:
\[\psi_{\pm}^{\tilde{\alpha},|M|}(z,\tau)=\left(\frac{|M|}{\mathcal{A}^{2}} \right)^{1/4}e^{i\pi|M|(z+\zeta)\frac{\text{Im}(z+\zeta)}{\text{Im}\tau}} \vartheta\left[\begin{array}{c}\frac{\tilde{\alpha}}{M}\\ 0\end{array}\right](|M|z,|M|\tau), \tag{3.22}\]
with \(\tilde{\alpha}=0,1,...,|M|-1\) and
\[\vartheta\left[\begin{matrix}a\\ b\end{matrix}\right](\nu,\tau):=\sum_{\ell\in\mathbb{Z}}e^{\pi i(a+\ell)^{2} \tau}e^{2\pi i(a+\ell)(\nu+b)}. \tag{3.23}\]
Here, the normalization of the wavefunctions is fixed as11
Footnote 11: In what follows, we omit the index of chirality.
\[\int d^{2}z\ \psi^{\tilde{\alpha},M}(z,\tau)\left(\psi^{\tilde{\beta},M}(z,\tau) \right)^{*}=(2\text{Im}\tau)^{-1/2}\delta_{\tilde{\alpha},\tilde{\beta}}. \tag{3.24}\]
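As a concrete illustration, the zero-mode profiles of Eq. (3.22) are straightforward to evaluate numerically by truncating the lattice sum in the theta function of Eq. (3.23). The following Python sketch follows Eq. (3.22) as printed and is meant only as a numerical aid: the truncation cutoff, the default torus area, and the vanishing Wilson line are illustrative choices, and convergence assumes \(\mathrm{Im}\,\tau>0\).

```python
import numpy as np

def jacobi_theta(a, b, nu, tau, cutoff=50):
    """Truncated Jacobi theta function with characteristics, Eq. (3.23)."""
    ell = np.arange(-cutoff, cutoff + 1)
    return np.sum(np.exp(1j * np.pi * (a + ell) ** 2 * tau)
                  * np.exp(2j * np.pi * (a + ell) * (nu + b)))

def zero_mode(alpha, M, z, tau, zeta=0.0, area=1.0):
    """Zero-mode wavefunction psi^{alpha,|M|}(z, tau) of Eq. (3.22)."""
    zz = z + zeta
    prefactor = (abs(M) / area ** 2) ** 0.25
    phase = np.exp(1j * np.pi * abs(M) * zz * zz.imag / tau.imag)
    return prefactor * phase * jacobi_theta(alpha / M, 0.0, abs(M) * z, abs(M) * tau)

# Example: the |M| = 3 degenerate zero modes evaluated at a sample point on the torus
tau, z = 1j, 0.25 + 0.1j
print([zero_mode(alpha, 3, z, tau) for alpha in range(3)])
```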
The Yukawa couplings of chiral zero-modes are obtained by integrals of three wavefunctions:
\[Y^{\tilde{\alpha}\tilde{\beta}\tilde{\gamma}}=\int_{T^{2}}d^{2}z\,\psi^{ \tilde{\alpha},|M|}(z,\tau)\psi^{\tilde{\beta},|M^{\prime}|}(z,\tau)(\psi^{ \tilde{\gamma},|M|+|M^{\prime}|}(z,\tau))^{*}. \tag{3.25}\]
Remarkably, these wavefunctions show non-trivial transformations under the modular symmetry [37; 38; 39; 40; 41; 42; 43; 10]. Indeed, when \(|M|\) is even, under the \(S\) and \(T\) transformations of the modular symmetry:
\[S: \ \tau\rightarrow-1/\tau,\qquad z\rightarrow-z/\tau,\] \[T: \ \tau\rightarrow\tau+1,\qquad z\to z, \tag{3.26}\]
the zero-mode wavefunctions respectively transform
\[\psi^{\tilde{\alpha},|M|}(z,\tau)\rightarrow\psi^{\tilde{\alpha},|M|}\left( -\frac{z}{\tau},-\frac{1}{\tau}\right)=(-\tau)^{1/2}\sum_{\tilde{\beta}=0}^{| M|-1}\frac{1}{\sqrt{|M|}}e^{i\pi/4}e^{2\pi i\,\frac{\tilde{\alpha}\tilde{ \beta}}{|M|}}\psi^{\tilde{\beta},|M|}(z,\tau),\]
\[\psi^{\tilde{\alpha},|M|}(z,\tau)\rightarrow\psi^{\tilde{\alpha},|M|}(z,\tau+ 1)=e^{i\pi\frac{\tilde{\alpha}^{2}}{|M|}}\psi^{\tilde{\alpha},|M|}(z,\tau), \tag{3.27}\]
indicating that the wavefunctions carry modular weight \(1/2\).12 Note that the Wilson line \(\zeta\) transforms as \(\zeta\rightarrow\zeta/(c\tau+d)\), as does the coordinate \(z\). By taking \(M=2M^{\prime}\) with \(M^{\prime}\in\mathbb{Z}\), these transformations are rewritten in the context of \(\tilde{\Gamma}_{2|M|=4|M^{\prime}|}\)
Footnote 12: The modular weights in Type IIB magnetized D-brane models and the T-dual Type IIA intersecting D-brane models are revisited in Ref. [44].
\[\psi^{\tilde{\alpha},|M|}(\tilde{\gamma}(z,\tau))=\varphi(\gamma,\tau)\sum_{ \tilde{\beta}=0}^{2|M^{\prime}|-1}\rho(\tilde{\gamma})_{\tilde{\alpha}\tilde {\beta}}\psi^{\tilde{\beta},|M|}(z,\tau), \tag{3.28}\]
with \(\tilde{\alpha},\tilde{\beta}=0,1,...,2|M^{\prime}|-1\) and
\[\rho(\widetilde{S})_{\tilde{\alpha}\tilde{\beta}}=-\frac{1}{\sqrt{|M|}}e^{i \pi/4}e^{2\pi i\,\frac{\tilde{\alpha}\tilde{\beta}}{|M|}}, \tag{3.29}\]
\[\rho(\widetilde{T})_{\tilde{\alpha}\tilde{\beta}}=e^{i\pi\,\frac{\tilde{\beta}^{2}}{|M|}}\delta_{\tilde{\alpha},\tilde{\beta}}. \tag{3.30}\]
Indeed, the representation matrix \(\rho(\widetilde{\gamma})\) is unitary and satisfies
\[\rho(\widetilde{S})^{2}=\rho(\widetilde{R}),\quad(\rho(\widetilde{S})\rho( \widetilde{T}))^{3}=\rho(\widetilde{R}^{4})=1,\quad\rho(\widetilde{T})\rho( \widetilde{R})=\rho(\widetilde{R})\rho(\widetilde{T}),\quad\rho(\widetilde{T}) ^{4|M^{\prime}|}=\mathbb{I}. \tag{3.31}\]
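Since the matrices in Eqs. (3.29) and (3.30) are explicit, the algebra of Eq. (3.31) can be cross-checked numerically. The short Python sketch below does this for a sample even flux (here \(M=4\), i.e. \(M^{\prime}=2\)); the function names and the chosen value of \(M\) are illustrative only.

```python
import numpy as np

def rho_S(M):
    """rho(S) of Eq. (3.29) for |M| zero modes."""
    idx = np.arange(abs(M))
    return -np.exp(1j * np.pi / 4) / np.sqrt(abs(M)) * np.exp(
        2j * np.pi * np.outer(idx, idx) / abs(M))

def rho_T(M):
    """rho(T) of Eq. (3.30), i.e. the T phase of Eq. (3.27), for even |M|."""
    idx = np.arange(abs(M))
    return np.diag(np.exp(1j * np.pi * idx ** 2 / abs(M)))

M = 4                                   # even flux, M = 2 M'
S, T = rho_S(M), rho_T(M)
R = S @ S                               # rho(R) = rho(S)^2
Id = np.eye(M)
assert np.allclose(np.linalg.matrix_power(S @ T, 3), Id)   # (rho(S) rho(T))^3 = 1
assert np.allclose(np.linalg.matrix_power(R, 4), Id)       # rho(R)^4 = 1
assert np.allclose(T @ R, R @ T)                           # rho(T) rho(R) = rho(R) rho(T)
assert np.allclose(np.linalg.matrix_power(T, 2 * M), Id)   # rho(T)^{4 M'} = 1
```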
It is known that the boundary conditions of the fermions in Eq. (3.20) and the \(T\) transformation are consistent with each other only if \(M\) is even. The \(S\) transformation is consistent with the boundary conditions. However, the existence of a Wilson line modifies the boundary conditions as well as the modular transformation [42]. Taking into account the modular transformation of the Wilson line \(\zeta\) in the case of odd \(M\),13
Footnote 13: We discuss the modular transformation of the Wilson lines, but they are related to that of the Scherk-Schwarz phases [45]. See, Appendix of Ref. [42], for more details about the modular transformations of Scherk-Schwarz phases and Wilson lines.
\[M\zeta\to M\left(\zeta+\frac{1}{2}\right), \tag{3.32}\]
the wavefunction transforms under the \(T\) transformation:
\[\psi^{\tilde{\alpha},|M|}\left(z+\zeta+\frac{1}{2},\tau+1\right) =e^{i\pi|M|\frac{\mathrm{Im}\,(z+\zeta)}{2\mathrm{Im}\,\tau}}e^{ i\pi j\left(\frac{\tilde{\alpha}}{|M|}+1\right)}\psi^{\tilde{\alpha},|M|}(z+ \zeta,\tau)\] \[=\sum_{\tilde{\beta}}e^{i\pi|M|\frac{\mathrm{Im}\,(z+\zeta)}{2 \mathrm{Im}\,\tau}}\rho(\widetilde{T})_{\tilde{\alpha}\tilde{\beta}}\psi^{ \tilde{\beta},|M|}(z+\zeta,\tau), \tag{3.33}\]
with
\[\rho(\widetilde{T})_{\tilde{\alpha}\tilde{\beta}}=e^{i\pi\tilde{\alpha}\left( \frac{\tilde{\alpha}}{|M|}+1\right)}\delta_{\tilde{\alpha},\tilde{\beta}}. \tag{3.34}\]
Note that the authors of Ref. [43] proposed that this expression holds for vanishing Wilson lines even in the case of odd units of magnetic flux \(M\). Recall that the exponential factor \(e^{i\pi|M|\frac{\mathrm{Im}\,(z+\zeta)}{2\mathrm{Im}\,\tau}}\) can be canceled in the Yukawa coupling due to the \(U(1)\) gauge invariance, as argued in Ref. [43]. Thus, the \(T\)-transformed wavefunction with odd \(M\) cannot be expanded in terms of the original wavefunction, but it can be written in the shifted coordinate \(z+1/2\). Since this statement is also true for even units of \(M\), we adopt Eq. (3.33) as the \(T\) transformation of the wavefunction for general \(M\). The \(S\) transformation is still given by Eq. (3.29) for odd units of \(M\).
The modular transformations also act on the 4D fields. When the 4D \(\mathcal{N}=1\) SUSY is preserved, the 4D Lagrangian is written in terms of Kahler potential and superpotential. The Kahler potential and the superpotential of matter fields are derived from the dimensional reduction of 6D Super Yang-Mills theory:
\[K =\frac{|\Phi|^{2}}{(i(\bar{\tau}-\tau))^{1/2}},\] \[W =Y_{\tilde{\alpha}\tilde{\beta}\tilde{\gamma}}\Phi^{\tilde{ \alpha},I_{ab}}\Phi^{\tilde{\beta},I_{ca}}\Phi^{\tilde{\gamma},I_{cb}}, \tag{3.35}\]
where \(\{I_{ab},I_{ca},I_{cb}\}\) denote the generation numbers counted by the index theorem, corresponding to one of the tori in Eq. (2.25). They satisfy \(I_{ab}+I_{bc}+I_{ca}=0\) to preserve the
\(U(1)\) gauge symmetry. Then, the modular transformations of matter superfield are given by
\[\Phi^{\tilde{\alpha},I_{ab}}\rightarrow\varphi(\gamma,\tau)^{-1} \left(\rho(\tilde{\gamma})\right)^{-1}_{\tilde{\alpha}\tilde{\beta}}\Phi^{\tilde {\beta},I_{ab}}, \tag{3.36}\]
where the explicit forms of \(\rho(\tilde{\gamma})\) are given in Eqs. (3.29) and (3.34) by replacing \(M\) with \(I_{ab}\). In addition, it is known that the holomorphic Yukawa couplings (3.25) are also described by the Jacobi theta function [27]:
\[Y_{\tilde{\alpha}\tilde{\beta}\tilde{\gamma}}\simeq\sigma_{abc} \left(\frac{2\text{Im}\,\tau}{\mathcal{A}^{2}}\right)^{1/4}\left|\frac{I_{ab}I _{ca}}{I_{bc}}\right|^{1/4}e^{H(\tilde{\zeta},\tau)/2}\vartheta\begin{bmatrix} \delta_{\tilde{\alpha}\tilde{\beta}\tilde{\gamma}}\\ 0\end{bmatrix}(\tilde{\zeta},\tau|I_{ab}I_{bc}I_{ca}|), \tag{3.37}\]
with
\[\sigma_{abc} =\text{sign}(I_{ab}I_{bc}I_{ca}),\] \[\delta_{\tilde{\alpha}\tilde{\beta}\tilde{\gamma}} =\frac{\tilde{\alpha}}{I_{ab}}+\frac{\tilde{\beta}}{I_{ca}}+ \frac{\tilde{\gamma}}{I_{bc}},\] \[\tilde{\zeta} =I_{ab}\tilde{\zeta}_{c}+I_{bc}\tilde{\zeta}_{a}+I_{ca}\tilde{ \zeta}_{b},\] \[H(\tilde{\zeta},\tau) =2\pi i|I_{ab}I_{bc}I_{ca}|^{-1}\frac{\tilde{\zeta}\cdot\text{Im }\,\tilde{\zeta}}{\text{Im}\,\tau}, \tag{3.38}\]
where \(\tilde{\zeta}_{a}=n_{a}\zeta_{a}/m_{a}\) denotes the redefined Wilson lines and we omit the 6D gauge coupling in the above expression. Since the Yukawa couplings are described by half-integer modular forms, they belong to an \(\mathbf{r}\) representation of \(\tilde{\Gamma}_{4N}\) whose transformation is of the form14:
Footnote 14: Note that the Jacobi theta function itself behaves as a proper modular form satisfying \((Y_{\mathbf{r}})_{\tilde{\alpha}}(\tau)\rightarrow(Y_{\mathbf{r}})_{\tilde{\alpha}}(\tilde{\gamma}\tau)=\varphi(\gamma,\tau)\rho_{\mathbf{r}}(\tilde{\gamma})_{\tilde{\alpha}\tilde{\beta}}(Y_{\mathbf{r}})_{\tilde{\beta}}(\tau)\).
\[(Y_{\mathbf{r}})_{\tilde{\alpha}}(\tau)\rightarrow(Y_{\mathbf{ r}})_{\tilde{\alpha}}(\widetilde{\gamma}\tau)=\rho_{\mathbf{r}}(\widetilde{ \gamma})_{\tilde{\alpha}\tilde{\beta}}(Y_{\mathbf{r}})_{\tilde{\beta}}(\tau). \tag{3.39}\]
Recalling the condition \(I_{ab}+I_{bc}+I_{ca}=0\), the Yukawa terms are invariant under the following \(U(1)\) symmetry:
\[\Phi^{\tilde{\alpha},I_{ab}}\to e^{iq\alpha I_{ab}}\Phi^{ \tilde{\alpha},I_{ab}}, \tag{3.40}\]
with \(q\) being the \(U(1)\) charge of \(\Phi^{\tilde{\alpha},I_{ab}}\). Thus, we redefine the \(\widetilde{S}\) transformation of matter fields following Ref. [43]:
\[\rho(\widetilde{S})_{\tilde{\alpha}\tilde{\beta}}=-\frac{1}{\sqrt{|I_{ab}|}} e^{i\pi\frac{3|I_{ab}|+1}{4}}e^{2\pi i\,\frac{\tilde{\alpha}\tilde{\beta}}{|I_{ab}|}}. \tag{3.41}\]
Although we add \(e^{3i\pi I_{ab}/4}\) to the \(S\) transformation, it is still a unitary representation matrix. Such a redefinition will be convenient for discussing the metaplectic modular symmetry, as will be shown later. In this way, \(T^{2}\) compactifications with background magnetic fluxes lead to metaplectic modular flavor symmetries. Before going into details about the relation among the metaplectic modular flavor symmetries, the traditional flavor symmetries, and CP symmetry, we discuss the metaplectic modular symmetry on the \(T^{2}/\mathbb{Z}_{2}\) background.
### \(T^{2}/\mathbb{Z}_{2}\) with magnetic fluxes
On the \(T^{2}/\mathbb{Z}_{2}\) orbifold, the wavefunctions of \(\mathbb{Z}_{2}\)-even and -odd modes are given by the linear combination of these on \(T^{2}\) as mentioned in Sec. 2.2. The explicit forms are given by [46]
\[\begin{split}\psi^{\tilde{\alpha},|M|}_{\rm even}&=\mathcal{N}_{\tilde{\alpha}}\sum_{\tilde{\beta}=0}^{|M|-1}\left(\delta_{\tilde{\alpha},\tilde{\beta}}+\delta_{|M|-\tilde{\alpha},\tilde{\beta}}\right)\psi^{\tilde{\beta},|M|},\\ \psi^{\tilde{\alpha},|M|}_{\rm odd}&=\mathcal{N}_{\tilde{\alpha}}\sum_{\tilde{\beta}=0}^{|M|-1}\left(\delta_{\tilde{\alpha},\tilde{\beta}}-\delta_{|M|-\tilde{\alpha},\tilde{\beta}}\right)\psi^{\tilde{\beta},|M|},\end{split} \tag{3.42}\]
with
\[\mathcal{N}_{\tilde{\alpha}}=\begin{cases}1&(\tilde{\alpha}=0)\\ \frac{1}{2}&\left(\tilde{\alpha}=\frac{|M|}{2}\right)\\ \frac{1}{\sqrt{2}}&(\text{others})\end{cases}\,. \tag{3.43}\]
The modular transformations are extracted from the matter wavefunctions on \(T^{2}\) (3.34) and (3.41):
\[\rho(\widetilde{S})_{\tilde{\alpha}\tilde{\beta}} =-\frac{2}{\sqrt{|M|}}e^{i\pi\frac{3|M|+1}{4}}\,\cos\left(\frac{2\pi\tilde{\alpha}\tilde{\beta}}{|M|}\right), \tag{3.44}\] \[\rho(\widetilde{T})_{\tilde{\alpha}\tilde{\beta}} =e^{i\pi\tilde{\alpha}\left(\frac{\tilde{\alpha}}{|M|}+1\right)}\delta_{\tilde{\alpha},\tilde{\beta}}, \tag{3.45}\]
for \(\mathbb{Z}_{2}\)-even mode with \(\tilde{\alpha},\tilde{\beta}=0,1,...,I_{\rm even}\) and
\[\rho(\widetilde{S})_{\tilde{\alpha}\tilde{\beta}} =-\frac{2i}{\sqrt{|M|}}e^{i\pi\frac{3|M|+1}{4}}\,\sin\left(\frac{2\pi\tilde{\alpha}\tilde{\beta}}{|M|}\right), \tag{3.46}\] \[\rho(\widetilde{T})_{\tilde{\alpha}\tilde{\beta}} =e^{i\pi\tilde{\alpha}\left(\frac{\tilde{\alpha}}{|M|}+1\right)}\delta_{\tilde{\alpha},\tilde{\beta}}, \tag{3.47}\]
for \(\mathbb{Z}_{2}\)-odd mode with \(\tilde{\alpha},\tilde{\beta}=0,1,...,I_{\rm odd}\). Here, \(I_{\rm even}\) and \(I_{\rm odd}\) are defined in Eq. (2.29). In contrast to the analysis of Ref. [40], we added extra phase factors as argued in the previous section. Since Yukawa couplings on \(T^{2}/\mathbb{Z}_{2}\) are described by those on \(T^{2}\)[47]:
\[Y_{\tilde{\alpha}^{\prime}\tilde{\beta}^{\prime}\gamma^{\prime}} \bigg{|}_{T^{2}/\mathbb{Z}_{2}}=\sum\mathcal{O}^{\tilde{\alpha}^{\prime}\tilde{\alpha},|I_{ab}|}\mathcal{O}^{\tilde{\beta}^{\prime}\tilde{\beta},|I_{bc}|}\mathcal{O}^{\tilde{\gamma}^{\prime}\tilde{\gamma},|I_{ca}|}Y_{\tilde{\alpha}\tilde{\beta}\tilde{\gamma}}\bigg{|}_{T^{2}}, \tag{3.48}\]
with
\[\mathcal{O}^{\tilde{\alpha}\tilde{\beta},|M|}_{m}=\mathcal{N}_{\tilde{\alpha}}\left(\delta_{\tilde{\alpha},\tilde{\beta}}+(-1)^{m}\delta_{\tilde{\alpha},|M|-\tilde{\beta}}\right), \tag{3.49}\]
where \(m=0\) and \(m=1\) respectively correspond to \(\mathbb{Z}_{2}\)-even and -odd modes, they transform under the metaplectic modular symmetry as in the matter fields.
We find that the unitary matrices (3.45) and (3.47) obey the required relations in the metaplectic modular symmetry (3.12). Specifically, for even \(|M|\) and odd \(|M|\), modular
transformations of both the \(\mathbb{Z}_{2}\)-even and -odd modes are described by \(\tilde{\Gamma}_{2|M|}\) and \(\tilde{\Gamma}_{4|M|}\), respectively. We checked the additional relations (3.13) and (3.14) for \(\tilde{\Gamma}_{8}\) and \(\tilde{\Gamma}_{12}\), respectively, but we have not checked these additional relations for \(\tilde{\Gamma}_{4N}\) (\(N\geq 4\)), which will be reported elsewhere. In the next section, we derive such 6D bottom-up models from 10D Type IIB magnetized D-brane models with stabilized moduli. In particular, we discuss the metaplectic modular flavor symmetries together with the traditional flavor and CP symmetries in the framework of eclectic symmetry.
### Generalized CP
We first discuss the unification of the metaplectic modular symmetry and the 4D CP symmetry, which was discussed for the \(T^{2}\) background in Ref. [40]. It is known that 4D CP and 6D orientation reversal are embedded into the 10D proper Lorentz symmetry [48; 49].15 Since the 6D orientation reversal is realized by \(z_{i}\to-\bar{z}_{i}\) for the coordinates of \((T^{2})_{i}\), the torus moduli transform under the CP symmetry as \(\tau_{i}\to-\bar{\tau}_{i}\). Note that such a transformation leads to a negative determinant in the transformation of the 6D space. In the following, we focus on the CP transformation on \(T^{2}\) and \(T^{2}/\mathbb{Z}_{2}\).
Footnote 15: The CP symmetry is discussed in the context of Type IIB string theory, e.g., Refs.[20; 50].
Recalling that the modulus is defined by the ratio of the lattice vectors of the torus \(\tau_{i}=e_{2}/e_{1}\), the CP transformation is realized by
\[\begin{pmatrix}e_{2}\\ e_{1}\end{pmatrix}\xrightarrow{CP}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}\bar{e}_{2}\\ \bar{e}_{1}\end{pmatrix}. \tag{3.50}\]
Thus, the CP matrix is not the element of \(SL(2,\mathbb{Z})\), and the CP transformation enlarges the \(SL(2,\mathbb{Z})\) modular group to \(GL(2,\mathbb{Z})\simeq SL(2,\mathbb{Z})\rtimes\mathbb{Z}_{2}^{\rm CP}\)[51; 52; 53].16 Indeed, the CP matrix (3.50) and the generators of \(SL(2,\mathbb{Z})\) satisfy
Footnote 16: In the case of the multi moduli, such a CP transformation also enlarges the symplectic modular group to \(GSp(2g,\mathbb{Z})\simeq Sp(2g,\mathbb{Z})\rtimes\mathbb{Z}_{2}^{\rm CP}\)[20].
\[CP^{2}=1,\quad(CP)S(CP)^{-1}=S^{-1},\quad(CP)T(CP)^{-1}=T^{-1}, \tag{3.51}\]
with
\[S=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}. \tag{3.52}\]
As seen in Sec. 3.1, the metaplectic modular group is defined for \(\gamma\in SL(2,\mathbb{Z})\). For \(\gamma^{*}\in GL(2,\mathbb{Z})\simeq SL(2,\mathbb{Z})\rtimes\mathbb{Z}_{2}^{ \rm CP}\), the modulus \(\tau\) as well as the automorphy factor \(\varphi(\gamma^{*},\tau)\) transform as follows (see Ref. [40]):
\[\tau\xrightarrow{\gamma^{*}}\begin{cases}\frac{a\tau+b}{c\tau+d}& \quad(\det(\gamma^{*})=1)\\ \frac{a\bar{\tau}+b}{c\tau+d}&\quad(\det(\gamma^{*})=-1)\end{cases},\] \[\varphi(\gamma^{*},\tau)=\begin{cases}\pm(c\tau+d)^{1/2}&\quad( \det(\gamma^{*})=1)\\ \pm(c\bar{\tau}+d)^{1/2}&\quad(\det(\gamma^{*})=-1)\end{cases}. \tag{3.53}\]
The multiplication law in the context of metaplectic modular symmetry is defined as
\[(\gamma_{1}^{*},\varphi(\gamma_{1}^{*},\tau))(\gamma_{2}^{*},\varphi( \gamma_{2}^{*},\tau))=(\gamma_{1}^{*}\gamma_{2}^{*},\varphi(\gamma_{1}^{*}, \gamma_{2}^{*}\tau)\varphi(\gamma_{2}^{*},\tau)). \tag{3.54}\]
When we redefine \(\varphi(\gamma^{*},\tau)=\pm(c\tau+d)^{1/2}=:\epsilon J_{1/2}(\gamma^{*},\tau)\) with \(\epsilon=\pm 1\), the above law is written as
\[(\gamma_{1}^{*},\epsilon_{1}J_{1/2}(\gamma_{1}^{*},\tau))( \gamma_{2}^{*},\epsilon_{2}J_{1/2}(\gamma_{2},\tau))=(\gamma_{1}^{*}\gamma_{2 }^{*},\epsilon_{1}\epsilon_{2}\zeta_{1/2}^{*}(\gamma_{1}^{*},\gamma_{2}^{*})J_ {1/2}(\gamma_{1}^{*}\gamma_{2}^{*},\tau)), \tag{3.55}\]
where the two-cocycle \(\zeta_{1/2}^{*}(\gamma_{1}^{*},\gamma_{2}^{*})\) is defined as
\[\zeta_{1/2}^{*}(\gamma_{1}^{*},\gamma_{2}^{*})=(\text{det}\gamma _{1}^{*},\text{det}\gamma_{2}^{*})_{H}\left(\frac{\chi(\gamma_{1}^{*}\gamma_{ 2}^{*})}{\chi(\gamma_{1}^{*})},\frac{\chi(\gamma_{1}^{*}\gamma_{2}^{*})}{\chi (\gamma_{2}^{*})\text{det}\gamma_{1}^{*}}\right)_{H}\frac{s(\gamma_{1}^{*})s( \gamma_{2}^{*})}{s(\gamma_{1}^{*}\gamma_{2}^{*})}. \tag{3.56}\]
Here, we define
\[s(\gamma^{*})=\left\{\begin{array}{ll}1,&(c\neq 0)\\ \text{sign}(d),&(c=0)\end{array}\right.,\qquad\chi(\gamma^{*})=\left\{ \begin{array}{ll}c,&(c\neq 0)\\ d,&(c=0)\end{array}\right., \tag{3.57}\]
and introduce the Hilbert symbol:
\[(x,y)_{H}=\left\{\begin{array}{ll}-1,&(x<0\text{ and }y<0)\\ +1,&(\text{others})\end{array}\right.. \tag{3.58}\]
Since the CP transformation of matter wavefunctions on \(T^{2}\) is given by
\[\psi^{\tilde{\alpha},M}\xrightarrow{CP}\overline{\psi^{\tilde{ \alpha},|M|}(z,\tau)}, \tag{3.59}\]
corresponding to the basis of canonical CP transformation17, it allows us to define the CP transformation in the framework of metaplectic modular symmetry:
Footnote 17: Note that the CP transformation flips the sign of the magnetic flux, that is, \(M\rightarrow-M\).
\[\widetilde{CP}=[CP,i]=\left(\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right),i\right). \tag{3.60}\]
Thus, Eq. (3.59) is rewritten as
\[\psi^{\tilde{\alpha},M}\rightarrow\varphi(\widetilde{CP},\tau) \rho(\widetilde{CP})_{\tilde{\alpha}\tilde{\beta}}\overline{\psi^{\tilde{ \beta},|M|}(z,\tau)} \tag{3.61}\]
with
\[\varphi(\widetilde{CP},\tau)=i,\qquad\rho(\widetilde{CP})_{ \tilde{\alpha}\tilde{\beta}}=-i\delta_{\tilde{\alpha},\tilde{\beta}}. \tag{3.62}\]
Remarkably, the CP transformation does not commute with the metaplectic modular transformations. When we consider the following chain:
\[\psi\xrightarrow{\widetilde{CP}}\rho(\widetilde{CP})\overline{ \psi}\xrightarrow{\widetilde{\gamma}}\varphi(\widetilde{\gamma},\tau)^{*} \rho(\widetilde{CP})\rho(\widetilde{\gamma})^{*}\overline{\psi}\] \[\xrightarrow{\widetilde{CP}^{-1}}\varphi(\widetilde{\gamma},\tau) \rho(\widetilde{CP})\rho(\widetilde{\gamma})^{*}\rho(\widetilde{CP})^{-1}\psi, \tag{3.63}\]
for \(\widetilde{\gamma}\in Mp(2,\mathbb{Z})\), we find that
\[\rho(\widetilde{CP})\rho(\widetilde{S})^{*}\rho(\widetilde{CP})^{-1}=\rho(\widetilde{S})^{-1},\] \[\rho(\widetilde{CP})\rho(\widetilde{T})^{*}\rho(\widetilde{CP})^{-1}=\rho(\widetilde{T})^{-1}, \tag{3.64}\]
are satisfied for matter wavefunctions on \(T^{2}\) as well as \(T^{2}/\mathbb{Z}_{2}\), where the latter \(\mathbb{Z}_{2}\) case can be explicitly checked via the representations below. Thus, we constructed the outer automorphism \(u_{\rm CP}:G_{\rm CP}\to{\rm Aut}(G_{\rm modular})\). Here, \(G_{\rm modular}\) denotes the finite metaplectic modular group that the wavefunctions enjoy. In the semi-direct product group, the generators satisfy
\[\widetilde{CP}\widetilde{S}\widetilde{CP}^{-1} =\widetilde{S}^{-1},\] \[\widetilde{CP}\widetilde{T}\widetilde{CP}^{-1} =\widetilde{T}^{-1}, \tag{3.65}\]
where the generators are interpreted as elements of \(G_{\rm modular}\rtimes G_{\rm CP}\).
On the \(T^{2}/\mathbb{Z}_{2}\) orbifold, the explicit forms of the modular (flavor) transformations are summarized as follows:
* \(M=2\) Since there is no \(\mathbb{Z}_{2}\)-odd mode, the modular transformations of \(\mathbb{Z}_{2}\)-even mode are given by \[\rho(\widetilde{S}_{\rm even})=-\frac{e^{-\frac{i\pi}{4}}}{\sqrt{2}}\begin{pmatrix}-1&-1\\ -1&1\end{pmatrix},\qquad\rho(\widetilde{T}_{\rm even})=\begin{pmatrix}1&0\\ 0&-i\end{pmatrix},\] (3.66) which are regarded as the representation matrices of \(\tilde{\Gamma}_{4}\).
* \(M=3\) There are two \(\mathbb{Z}_{2}\)-even modes and single \(\mathbb{Z}_{2}\)-odd mode. Their modular transformations are given by \[\rho(\widetilde{S}_{\rm even}) =-\frac{-i}{\sqrt{3}}\begin{pmatrix}1&\sqrt{2}\\ \sqrt{2}&-1\end{pmatrix},\qquad\rho(\widetilde{T}_{\rm even})=\begin{pmatrix}1&0\\ 0&-(-1)^{1/3}\end{pmatrix},\] \[\rho(\widetilde{S}_{\rm odd}) =1,\qquad\rho(\widetilde{T}_{\rm odd})=-(-1)^{1/3},\] (3.67) which are regarded as the representation matrices of \(\tilde{\Gamma}_{12}\).
* \(M=4\) There are three \(\mathbb{Z}_{2}\)-even modes and a single \(\mathbb{Z}_{2}\)-odd mode. Their modular transformations are given by \[\rho(\widetilde{S}_{\rm even}) =-\frac{1}{2}\begin{pmatrix}(-1)^{1/4}&1+i&(-1)^{1/4}\\ 1+i&0&-1-i\\ (-1)^{1/4}&-1-i&(-1)^{1/4}\end{pmatrix},\qquad\rho(\widetilde{T}_{\rm even}) =\begin{pmatrix}1&0&0\\ 0&-(-1)^{1/4}&0\\ 0&0&-1\end{pmatrix},\] \[\rho(\widetilde{S}_{\rm odd}) =(-1)^{3/4},\qquad\rho(\widetilde{T}_{\rm odd})=-(-1)^{1/4},\] (3.68) which are regarded as the representation matrices of \(\tilde{\Gamma}_{8}\).
* \(M=5\) There are three \(\mathbb{Z}_{2}\)-even modes and two \(\mathbb{Z}_{2}\)-odd modes. Their modular transformations are given by \[\rho(\widetilde{S}_{\rm even})=-\frac{1}{\sqrt{5}}\begin{pmatrix}1&\sqrt{2}&\sqrt{2}\\ \sqrt{2}&\frac{-1+\sqrt{5}}{2}&\frac{-1-\sqrt{5}}{2}\\ \sqrt{2}&\frac{-1-\sqrt{5}}{2}&\frac{-1+\sqrt{5}}{2}\end{pmatrix},\qquad\rho(\widetilde{T}_{\rm even})=\begin{pmatrix}1&0&0\\ 0&-(-1)^{1/5}&0\\ 0&0&(-1)^{4/5}\end{pmatrix},\] \[\rho(\widetilde{S}_{\rm odd})=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix},\qquad\rho(\widetilde{T}_{\rm odd})=\begin{pmatrix}-(-1)^{1/5}&0\\ 0&(-1)^{4/5}\end{pmatrix},\] (3.69) which will be regarded as the representation matrices of \(\tilde{\Gamma}_{20}\).
* \(M=6\) There are four \(\mathbb{Z}_{2}\)-even modes and two \(\mathbb{Z}_{2}\)-odd modes. Their modular transformations are given by \[\rho(\widetilde{S}_{\rm even})=\frac{1}{\sqrt{6}}\begin{pmatrix}\frac{1-i}{\sqrt{2}}&1-i&1-i&\frac{1-i}{\sqrt{2}}\\ 1-i&\frac{1-i}{\sqrt{2}}&(-1)^{3/4}&-1+i\\ 1-i&(-1)^{3/4}&(-1)^{3/4}&1-i\\ \frac{1-i}{\sqrt{2}}&-1+i&1-i&(-1)^{3/4}\end{pmatrix},\] \[\rho(\widetilde{T}_{\rm even})=\begin{pmatrix}1&0&0&0\\ 0&-(-1)^{1/6}&0&0\\ 0&0&(-1)^{2/3}&0\\ 0&0&0&i\end{pmatrix},\] \[\rho(\widetilde{S}_{\rm odd})=\frac{1}{2}\begin{pmatrix}1+i&1+i\\ 1+i&-1-i\end{pmatrix},\qquad\rho(\widetilde{T}_{\rm odd})=\begin{pmatrix}-(-1)^{1/6}&0\\ 0&(-1)^{2/3}\end{pmatrix},\] (3.70) which are regarded as the representation matrices of \(\tilde{\Gamma}_{12}\).
### Eclectic flavor symmetry
In this section, we analyze the Type IIB magnetized D-brane models in Sec. 2.3, where the three generation models with Pati-Salam gauge symmetry are realized in the visible sector. Since the generation number of quarks, leptons, and Higgs doublets is determined by magnetic fluxes on the first torus \((T^{2})_{1}\),
\[I_{ab}=I^{1}_{ab}I^{2}_{ab}I^{3}_{ab}=g(-1)(-1)=g,\qquad I_{ca}=I^{1}_{ca}I^{2} _{ca}I^{3}_{ca}=(-g)(-1)1=g, \tag{3.71}\]
the flavor structure is derived from the first torus. Indeed, the wavefunctions of matter fields on the second and third tori are just constants. Note that \(I^{3}_{bc}=0\) indicates that the Higgs doublets are vector-like particles. Thus, the index of the Higgs doublets is calculated by using Eq. (2.31) with \(s_{1}<0,s_{2}>0\) under the assumption \(g>0\):
\[I_{bc}=-g+1. \tag{3.72}\]
From the results of Sec. 2.3, it turns out that the string landscape leads to a small number of generations of quarks and leptons. In the following analysis, we thus focus on matter wavefunctions on \((T^{2})_{1}\) with vanishing Wilson lines, whose explicit forms are given in Eq. (3.42) with
\[\psi^{\tilde{\alpha},|M|}(z,\tau)=\left(\frac{|M|}{\mathcal{A}^{2}}\right)^{1/4}e^{i\pi|M|z\frac{\mathrm{Im}\,z}{\mathrm{Im}\,\tau}}\vartheta\begin{bmatrix}\frac{\tilde{\alpha}}{|M|}\\ 0\end{bmatrix}(|M|z,|M|\tau), \tag{3.73}\]
with \(M=I_{ab},I_{bc},I_{ca}\). Note that the orbifold projections split the wavefunctions into \(\mathbb{Z}_{2}\)-even and \(\mathbb{Z}_{2}\)-odd modes. For illustrative purposes, we focus on the traditional flavor symmetries acting on the three \(\mathbb{Z}_{2}\)-even modes with \(g=4\).18
Footnote 18: See for the unification of traditional flavor and modular symmetries on \(T^{2}\) with magnetic fluxes [39].
* \(g=4\) The traditional flavor symmetries are described by \(G_{\mathrm{flavor}}:=\mathbb{Z}_{4}\times\mathbb{Z}_{2}^{(P)}\times\mathbb{Z}_ {2}^{(C)}\times\mathbb{Z}_{2}^{(Z)}\) whose generators \(\{Z^{\prime},P,C,Z\}\) are of the form [54]: \[\rho(Z^{\prime}_{\mathrm{even}})=i\mathbb{I}_{3},\quad\rho(P_{ \mathrm{even}})=\mathbb{I}_{3},\quad\rho(C_{\mathrm{even}})=\begin{pmatrix}0& 0&1\\ 0&1&0\\ 1&0&0\end{pmatrix},\quad\rho(Z_{\mathrm{even}})=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix},\] (3.74) for \(\mathbb{Z}_{2}\)-even mode and \[\rho(Z^{\prime}_{\mathrm{odd}})=i,\qquad\rho(P_{\mathrm{odd}})=-1 \qquad\rho(C_{\mathrm{odd}})=-1,\qquad\rho(Z_{\mathrm{odd}})=-1,\] (3.75) for \(\mathbb{Z}_{2}\)-odd mode. Note that such flavor symmetries do not change Yukawa couplings and only act on three generations of quarks and leptons. In this case, the wavefunctions on \(T^{2}/Z_{2}\) enjoy the modular flavor group \(\widetilde{\Gamma}_{8}\) and the explicit representations are given in Eq. (3.68). In particular, the flavor generators do not commute with those of modular flavor symmetries \(G_{\mathrm{modular}}=\widetilde{\Gamma}_{8}\). Indeed, we find that \[\widetilde{S}_{\mathrm{even}}C_{\mathrm{even}}\widetilde{S}_{ \mathrm{even}}^{-1} =Z_{\mathrm{even}},\quad\widetilde{S}_{\mathrm{even}}Z_{\mathrm{ even}}\widetilde{S}_{\mathrm{even}}^{-1}=C_{\mathrm{even}},\] \[\widetilde{T}_{\mathrm{even}}C_{\mathrm{even}}\widetilde{T}_{ \mathrm{even}}^{-1} =C_{\mathrm{even}}Z_{\mathrm{even}}(Z^{\prime}_{\mathrm{even}})^ {2},\quad\widetilde{T}_{\mathrm{even}}Z_{\mathrm{even}}\widetilde{T}_{ \mathrm{even}}^{-1}=Z_{\mathrm{even}},\] (3.76) and the other generators of \(G_{\mathrm{flavor}}\) commute with \(G_{\mathrm{modular}}\). It means that the modular transformation is regarded as an automorphism of the traditional flavor group. Furthermore, we can construct the outer automorphism \(u_{\mathrm{CP}}:G_{\mathrm{CP}}\to\mathrm{Aut}(G_{\mathrm{flavor}}\rtimes G_{ \mathrm{modular}})\). Indeed, the following relations can be verified in the semi-direct product group:
\[\widetilde{CP}Z^{\prime}_{\mathrm{even}}\widetilde{CP}^{-1} =(Z^{\prime}_{\mathrm{even}})^{-1},\quad\widetilde{CP}Z^{\prime}_{ \mathrm{odd}}\widetilde{CP}^{-1}=(Z^{\prime}_{\mathrm{odd}})^{-1},\] \[\widetilde{CP}\widetilde{S}\widetilde{CP}^{-1} =\widetilde{S}^{-1},\quad\widetilde{CP}\widetilde{TCP}^{-1}= \widetilde{T}^{-1}. \tag{3.77}\]
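The non-commutativity in Eq. (3.76) can likewise be verified numerically from the explicit matrices in Eqs. (3.68) and (3.74). The following Python sketch is a minimal cross-check of those relations for the \(\mathbb{Z}_{2}\)-even modes with \(g=4\) (i.e. \(M=4\)); it is an illustrative check rather than part of the derivation.

```python
import numpy as np

a = np.exp(1j * np.pi / 4)                       # (-1)^{1/4}
# Z2-even representation matrices for M = 4, Eq. (3.68)
S = -0.5 * np.array([[a, 1 + 1j, a],
                     [1 + 1j, 0, -1 - 1j],
                     [a, -1 - 1j, a]])
T = np.diag([1, -a, -1])
# Traditional flavor generators, Eq. (3.74)
Zp = 1j * np.eye(3)                              # Z'
C = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
Z = np.diag([1, -1, 1])

Sinv, Tinv = S.conj().T, T.conj().T              # inverses of unitary matrices
assert np.allclose(S @ C @ Sinv, Z)                 # S C S^{-1} = Z
assert np.allclose(S @ Z @ Sinv, C)                 # S Z S^{-1} = C
assert np.allclose(T @ C @ Tinv, C @ Z @ Zp @ Zp)   # T C T^{-1} = C Z (Z')^2
assert np.allclose(T @ Z @ Tinv, Z)                 # T Z T^{-1} = Z
```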
Recalling that the modular flavor and CP symmetries are treated in a uniform manner, the traditional flavor, modular flavor and CP symmetries are described by
\[(G_{\rm flavor}\rtimes G_{\rm modular})\rtimes G_{\rm CP}, \tag{3.78}\]
as discussed in heterotic orbifold models. From the analysis of Sec. 2, the moduli fields can be stabilized in flux compactifications, in particular, \(\mathbb{Z}_{3}\) fixed point. It leads to \(\mathbb{Z}_{3}\) modular symmetry generated by \(\{1,ST,(ST)^{2}\}\). It turned out that such a \(\mathbb{Z}_{3}\) symmetry still enhances the flavor symmetry due to the relation:
\[(\widetilde{ST})C_{\rm even}(\widetilde{ST})^{-1}=C_{\rm even}Z_{\rm even}(Z^{\prime}_{\rm even})^{2},\qquad(\widetilde{ST})Z_{\rm even}(\widetilde{ST})^{-1}=Z_{\rm even}. \tag{3.79}\]
Thus, the discrete non-abelian symmetry \((G_{\rm flavor}\rtimes\mathbb{Z}_{3})\rtimes G_{\rm CP}\) remains in the low-energy action. So far, we have focused on specific magnetized D-brane models with stabilized moduli, but it is quite interesting to explore other flavor models, which is left for future work.
## 4 Conclusions
In this paper, we have examined the vacuum structure of Type IIB flux vacua with SM spectra. The background fluxes play an important role in stabilizing moduli fields and determining the generation number of chiral zero-modes. Since the background fluxes are constrained by the tadpole cancellation conditions, the moduli distribution and the generation number are related to each other. By studying the \(T^{6}/(\mathbb{Z}_{2}\times\mathbb{Z}^{\prime}_{2})\) orientifolds with magnetized D-brane models in Secs. 2.2 and 2.3, it is found that the string landscape leads to a small number of generations of quarks and leptons. Furthermore, the moduli values are peaked at the \(\mathbb{Z}_{3}\) fixed point in the complex structure moduli space. This motivates us to study whether such a discrete symmetry is related to the flavor and/or CP symmetries in the low-energy effective action.
To investigate the relation between the modular symmetry of the torus and the flavor symmetries of quarks and leptons, we have focused on the concrete magnetized D-brane model of Sec. 2.3. Since the wavefunctions of chiral zero-modes and the corresponding Yukawa couplings are written in terms of the Jacobi theta function with modular weight \(1/2\), they are described in the framework of metaplectic modular flavor symmetry. Note that the flavor structure of quarks and leptons originates from one of the tori. We found that the modular transformations of both the \(\mathbb{Z}_{2}\)-even and -odd modes are described by \(\tilde{\Gamma}_{2|M|}\) and \(\tilde{\Gamma}_{4|M|}\) for magnetic fluxes with even \(|M|\) and odd \(|M|\), respectively. Furthermore, the CP symmetry can be regarded as an outer automorphism of the metaplectic modular group. For illustrative purposes, we focused on the \(M=4\) case, where three \(\mathbb{Z}_{2}\)-even modes transform under a certain traditional flavor symmetry. We found that the traditional flavor, modular flavor and CP symmetries in Type IIB chiral flux vacua are uniformly described in the context of eclectic flavor symmetry: \((G_{\rm flavor}\rtimes G_{\rm modular})\rtimes G_{\rm CP}\), as discussed in the
heterotic orbifolds [11; 12]. It would be interesting to explore the realization of eclectic flavor symmetry on other corners of string models.
Furthermore, we have stabilized the moduli fields in the framework of flux compactifications. Although the moduli vacuum expectation values are distributed around the \(\mathbb{Z}_{3}\) fixed point19, a part of the eclectic flavor symmetry \((G_{\text{flavor}}\rtimes G_{\text{modular}})\rtimes G_{\text{CP}}\) still remains in the low-energy effective action. Since the coefficients of 4D higher-dimensional operators will be described by products of modular forms with half-integer modular weights, the eclectic flavor symmetry would control the flavor structure of higher-dimensional operators. We leave the pursuit of these interesting topics for future work.
Footnote 19: If we consider the stabilization of Kähler moduli by non-perturbative effects, the \(\mathbb{Z}_{3}\) symmetry will be slightly broken as analyzed in Ref. [55].
###### Acknowledgements.
This work was supported by JSPS KAKENHI Grant Numbers JP20K14477 (Hajime O.), JP22J12877 (K.I) and JP23H04512 (Hajime O.). The work of Hiroshi O. is supported by the Junior Research Group (JRG) Program at the Asia-Pacific Center for Theoretical Physics (APCTP) through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government and was supported by the Korean Local Governments-Gyeongsangbukdo Province and Pohang City. Hiroshi O. is sincerely grateful for all the KIAS members.
|
2307.07512 | Expressive Monotonic Neural Networks | The monotonic dependence of the outputs of a neural network on some of its
inputs is a crucial inductive bias in many scenarios where domain knowledge
dictates such behavior. This is especially important for interpretability and
fairness considerations. In a broader context, scenarios in which monotonicity
is important can be found in finance, medicine, physics, and other disciplines.
It is thus desirable to build neural network architectures that implement this
inductive bias provably. In this work, we propose a weight-constrained
architecture with a single residual connection to achieve exact monotonic
dependence in any subset of the inputs. The weight constraint scheme directly
controls the Lipschitz constant of the neural network and thus provides the
additional benefit of robustness. Compared to currently existing techniques
used for monotonicity, our method is simpler in implementation and in theory
foundations, has negligible computational overhead, is guaranteed to produce
monotonic dependence, and is highly expressive. We show how the algorithm is
used to train powerful, robust, and interpretable discriminators that achieve
competitive performance compared to current state-of-the-art methods across
various benchmarks, from social applications to the classification of the
decays of subatomic particles produced at the CERN Large Hadron Collider. | Ouail Kitouni, Niklas Nolte, Michael Williams | 2023-07-14T17:59:53Z | http://arxiv.org/abs/2307.07512v1 | # Expressive Monotonic Neural Networks
###### Abstract
The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior. This is especially important for interpretability and fairness considerations. In a broader context, scenarios in which monotonicity is important can be found in finance, medicine, physics, and other disciplines. It is thus desirable to build neural network architectures that implement this inductive bias provably. In this work, we propose a weight-constrained architecture1 with a single residual connection to achieve exact monotonic dependence in any subset of the inputs. The weight constraint scheme directly controls the Lipschitz constant of the neural network and thus provides the additional benefit of robustness. Compared to currently existing techniques used for monotonicity, our method is simpler in implementation and in theory foundations, has negligible computational overhead, is guaranteed to produce monotonic dependence, and is highly expressive. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider.
Footnote 1: [https://github.com/niklasnolte/MonotoneNorm](https://github.com/niklasnolte/MonotoneNorm)
## 1 Introduction
The need to model functions that are monotonic in a subset of their inputs is prevalent in many ML applications. Enforcing monotonic behaviour can help improve generalization capabilities (Milani Fard et al., 2016; You et al., 2017) and assist with interpretation of the decision-making process of the neural network (Nguyen & Martinez, 2019). Real world scenarios include various applications with fairness, interpretability, and security aspects. Examples can be found in the natural sciences and in many social applications. Monotonic dependence of a model output on a certain feature in the input can be informative of how an algorithm works--and in some cases is essential for real-world usage. For instance, a good recommender engine will favor the product with a high number of reviews over another with fewer but otherwise identical reviews (_ceteris paribus_). The same applies to systems that assess health risk, evaluate the likelihood of recidivism, rank applicants, filter inappropriate content, _etc_.
In addition, robustness to small perturbations in the input is a desirable property for models deployed in real world applications, in particular when they are used to inform decisions that directly affect human actors--or where the consequences of making an unexpected and unwanted decision could be extremely costly. The continued existence of adversarial methods is a good example of the possibility of malicious attacks on current algorithms (Akhtar et al., 2021). A natural way of ensuring the robustness of a model is to constrain its Lipschitz constant. To this end, we recently developed an architecture whose Lipschitz constant is constrained by design using layer-wise normalization, which allows the architecture to be more expressive than the current state-of-the-art while maintaining stable and fast training (Kitouni et al., 2021). Our algorithm has been adopted to classify the decays of subatomic
particles produced at the CERN Large Hadron Collider in the real-time data-processing system of the LHCb experiment, which was our original motivation for developing this novel architecture.
In this paper, we present expressive monotonic Lipschitz networks. This new class of architectures employs the Lipschitz bounded networks from Kitouni et al. (2021) along with residual connections to implement monotonic dependence in any subset of the inputs by construction. It also provides exact robustness guarantees while keeping the constraints minimal such that it remains a universal approximator of Lipschitz continuous monotonic functions. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to its original target application: the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider.
## 2 Related Work
Prior work in the field of monotonic models can be split into two major categories.
* **Built-in and constrained monotonic architectures**: Examples of this category include Deep Lattice Networks (You et al., 2017) and networks in which all weights are constrained to have the same sign (Sill, 1998). The major drawbacks of most implementations of constrained architectures are a lack of expressiveness or poor performance due to superfluous complexity.
* **Heuristic and regularized architectures (with or without certification)**: Examples of such methods include Sill & Abu-Mostafa (1996) and Gupta et al., which penalizes point-wise negative gradients on the training sample. This method works on arbitrary architectures and retains much expressive power but offers no guarantees as to the monotonicity of the trained model. Another similar method is Liu et al. (2020), which relies on Mixed Integer Linear Programming to certify the monotonicity of piece-wise linear architectures. The method uses a heuristic regularization to penalize the non-monotonicity of the model on points sampled uniformly in the domain during training. The procedure is repeated with increasing regularization strength until the model passes the certification. This iteration can be expensive and while this method is more flexible than the constrained architectures (valid for MLPs with piece-wise linear activations), the computational overhead of the certification process can be prohibitively expensive. Similarly, Sivaraman et al. (2020) propose guaranteed monotonicity for standard ReLU networks by letting a Satisfiability Modulo Theories (SMT) solver find counterexamples to the monotonicity definition and adjust the prediction in the inference process such that monotonicity is guaranteed. However, this approach requires queries to the SMT solver during inference time for each monotonic feature, and the computation time scales harshly with the number of monotonic features and the model size (see Figure 3 and 4 in Sivaraman et al. (2020)).
Our architecture falls into the first category. However, we overcome both main drawbacks: lack of expressiveness and impractical complexity. Other related works appear in the context of monotonic functions for normalizing flows, where monotonicity is a key ingredient to enforce invertibility (De Cao et al., 2020; Huang et al., 2018; Behrmann et al., 2019; Wehenkel & Louppe, 2019).
## 3 Methods
The goal is to develop a neural network architecture representing a vector-valued function
\[f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n},\quad d,n\in\mathbb{N}, \tag{1}\]
that is provably monotonic in any subset of its inputs. We first define a few ingredients.
**Definition 3.1** (Monotonicity).: Let \(\mathbf{x}\in\mathbb{R}^{d}\) and let \(\mathbf{x}_{\mathbb{S}}\equiv\mathbbm{1}_{\mathbb{S}}\odot\mathbf{x}\) denote the Hadamard product of \(\mathbf{x}\) with the indicator vector \(\mathbbm{1}_{\mathbb{S}}\), where \(\mathbbm{1}_{\mathbb{S}}(i)=1\) if \(i\in\mathbb{S}\) and \(0\) otherwise, for a subset \(\mathbb{S}\subseteq\{1,\cdots,d\}\).
We say that outputs \(\mathbb{Q}\subseteq\{1,\cdots,n\}\) of \(f\) are monotonically increasing in features \(\mathbb{S}\) if
\[f(\mathbf{x}_{\mathbb{S}}^{\prime}+\mathbf{x}_{\bar{\mathbb{S}}})_{i}\leq f(\mathbf{x}_{\mathbb{S}}+\mathbf{x}_{\bar{\mathbb{S}}})_{i}\quad\forall i\in\mathbb{Q}\text{ and }\forall\mathbf{x}_{\mathbb{S}}^{\prime}\leq\mathbf{x}_{\mathbb{S}}, \tag{2}\]
where \(\bar{\mathbb{S}}\) denotes the complement of \(\mathbb{S}\) and the inequality on the right uses the product (or component-wise) order.
**Definition 3.2** (Lip\({}^{p}\) function).: \(g:\mathbb{R}^{d}\to\mathbb{R}^{n}\) is Lip\({}^{p}\) if it is Lipschitz continuous with respect to the \(L^{p}\) norm in every output dimension, _i.e._,
\[||g(\mathbf{x})-g(\mathbf{y})||_{\infty}\leq\lambda\|\mathbf{x}-\mathbf{y}\|_{p}\quad\forall\,\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\,. \tag{3}\]
### Lipschitz Monotonic Networks (LMN)
We will henceforth and without loss of generality only consider scalar-valued functions (\(n=1\)). We start with a model \(g(\mathbf{x})\) that is Lip\({}^{1}\) with Lipschitz constant \(\lambda\). Note that the choice of \(p=1\) is crucial for decoupling the magnitudes of the directional derivatives in the monotonic features. More details on this can be found below and in Figure 1. The \(1\)-norm has the convenient side effect that we can tune the robustness requirement for each input individually.
With a model \(g(\mathbf{x})\) we can define an architecture with built-in monotonicity by adding a term that has directional derivative \(\lambda\) for each coordinate in \(\mathbb{S}\):
\[f(\mathbf{x})=g(\mathbf{x})+\lambda(\mathbf{1}_{\mathbb{S}}\cdot\mathbf{x})=g(\mathbf{x})+\lambda \sum_{i\in\mathbb{S}}x_{i}. \tag{4}\]
This residual connection \(\lambda(\mathbf{1}_{\mathbb{S}}\cdot\mathbf{x})\) enforces monotonicity in the input subset \(\mathbf{x}_{\mathbb{S}}\):
\[\frac{\partial g}{\partial x_{i}}\in[-\lambda,\lambda],\;\;\forall i \in\mathbb{N}_{1:n} \tag{5}\] \[\Rightarrow\frac{\partial f}{\partial x_{i}}=\frac{\partial g}{ \partial x_{i}}+\lambda\geq 0\;\;\forall\,\mathbf{x}\in\mathbb{R}^{n},i\in \mathbb{S}\,. \tag{6}\]
The importance of the norm choiceThe construction presented here does not work with \(p\neq 1\) constraints because dependencies between the partial derivatives may be introduced, see Figure 1. The \(p=1\)-norm is the only norm that bounds the gradient within the green square and, crucially, allows the directional derivatives to be as large as \(2\lambda\) independently. When shifting the constraints by introducing the linear term, the green square allows for all possible gradient configurations, given that we can choose \(\lambda\) freely. As a counter example, the red circle, corresponding to \(p=2\) constraints, prohibits important areas in the configuration space.
To be able to represent all monotonic Lip\({}^{1}\) functions with \(2\lambda\) Lipschitz constant, the construction of \(g(\mathbf{x})\) needs to be a universal approximator of Lip\({}^{1}\) functions. In the next section, we will discuss possible architectures for this task.
### Lip\({}^{p=1}\) approximators
Our goal is to construct a universal approximator of Lip\({}^{1}\) functions, _i.e._, we would like the hypothesis class to have two properties:
Figure 1: \(p\)-norm constrained gradients showing (red) \(p=2\) and (green) \(p=1\). The gradient of a function \(g(\mathbf{x})\) that is Lip\({}^{p=2}\) resides within the dashed red line. For a Lip\({}^{p=1}\) function, the boundary is the green dashed line. Note that \(\mathbf{x}\) is taken to be a row vector. The residual connection (in blue) effectively shifts the possible gradients to strictly positive values and thus enforces monotonicity. Note how the red solid circle does not include all possible gradient configurations. For instance, it does not allow for very small gradients in both inputs, whereas the green square includes all configurations, up to an element-wise maximum of \(2\lambda\).
1. It always satisfies Eq. 3, _i.e._, be \(\mathrm{Lip}^{1}\).
2. It is able to fit all possible \(\mathrm{Lip}^{1}\) functions. In particular, the bound in Eq. 3 needs to be attainable \(\forall\,\mathbf{x},\mathbf{y}\).
**Lip\({}^{1}\) constrained models.** To satisfy the first requirement, fully connected networks can be Lipschitz bounded by constraining the matrix norm of all weight matrices (Kitouni et al., 2021; Gouk et al., 2020; Miyato et al., 2018). We recursively define the layer \(l\) of the fully connected network of depth \(D\) with activation \(\sigma\) as
\[\mathbf{z}^{l}=\sigma(\mathbf{z}^{l-1})\mathbf{W}^{l}+\mathbf{b}^{l}, \tag{7}\]
where \(\mathbf{z}^{0}=\mathbf{x}\) is the input and \(g(\mathbf{x})=\mathbf{z}^{D}\) is the output of the neural network. It follows that \(g(\mathbf{x})\) satisfies Eq. 3 if
\[\prod_{i=1}^{D}\|\mathbf{W}^{i}\|_{1}\leq\lambda, \tag{8}\]
and \(\sigma\) has a Lipschitz constant less than or equal to 1. There are multiple ways to enforce Eq. 8. Two existing possibilities that involve scaling by the operator norm of the weight matrix (Gouk et al., 2020) are:
\[\mathbf{W}^{i}\rightarrow\mathbf{W}^{\prime i}=\lambda^{1/D}\frac{\mathbf{W}^{i}}{\text{max}(1,\|\mathbf{W}^{i}\|_{1})}\qquad\text{or}\qquad\mathbf{W}^{i}\rightarrow\mathbf{W}^{\prime i}=\frac{\mathbf{W}^{i}}{\text{max}(1,\lambda^{-1/D}\cdot\|\mathbf{W}^{i}\|_{1})}\,. \tag{9}\]
In our studies, the latter variant seems to train slightly better. However, in some cases it might be useful to use the former to avoid the scale imbalance between the neural network's output and the residual connection used to induce monotonicity. We note that in order to satisfy Eq. 8, it is not necessary to divide the entire matrix by its 1-norm. It is sufficient to ensure that the absolute sum over each column is constrained:
\[\mathbf{W}^{i}\rightarrow\mathbf{W}^{\prime i}=\mathbf{W}^{i}\text{diag}\left(\frac{1}{ \text{max}\left(1,\lambda^{-1/D}\cdot\sum_{j}|W^{i}_{jk}|\right)}\right)\,. \tag{10}\]
This novel normalization scheme tends to give even better training results in practice, because the constraint is applied in each column individually. This reduces correlations of constraints, in particular, if a column saturates the bound on the norm, the other columns are not impacted. While Eq. 10 may not be suitable as a general-purpose scheme, _e.g._ it would not work in convolutional networks, its performance in training in our analysis motivates its use in fully connected architectures and further study of this approach in future work.
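A minimal sketch of the column-wise normalization in Eq. 10 is given below, written for the row-vector convention of Eq. 7 (so a weight matrix has shape `(fan_in, fan_out)` and each column feeds one output unit). The function name and the way it would be hooked into a layer are our own choices, not a prescription from the text.

```python
import torch

def normalize_columns(W: torch.Tensor, lam: float, depth: int) -> torch.Tensor:
    """Eq. 10: cap the absolute column sums of W at lam**(1/depth)."""
    col_sums = W.abs().sum(dim=0)                          # sum_j |W_jk| for each column k
    scale = torch.clamp(col_sums / lam ** (1.0 / depth), min=1.0)
    return W / scale                                       # divide each column by its factor
```

Applying this to every weight matrix of a depth-\(D\) network before each forward pass keeps the product of the per-layer bounds at \(\lambda\), and only columns whose absolute sum exceeds the budget are rescaled.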
In addition, the constraints in Eq. 9 and Eq. 10 can be applied in different ways. For example, one could normalize the weights directly before each call such that the induced gradients are propagated through the network like in Miyato et al. (2018). While one could come up with toy examples for which propagating the gradients in this way hurts training, it appears that this approach is what usually is implemented for spectral norm in PyTorch and TensorFlow (Miyato et al., 2018). Alternatively, the constraint could be applied by projecting any infeasible parameter values back into the set of feasible matrices after each gradient update as in Algorithm 2 of Gouk et al. (2020).
Constraining according to Eq. 8 is not the only way to enforce \(\mathrm{Lip}^{1}\). Anil et al. (2019) provide an alternative normalization scheme:
\[\|\mathbf{W}^{1}\|_{1,\infty}\cdot\prod_{i=2}^{D}\|\mathbf{W}^{i}\|_{\infty}\leq\lambda \tag{11}\]
Similarly to how the 1-norm of a matrix is a column-wise maximum, the \(\infty\)-norm of a matrix is determined by the maximum 1-norm of all rows and \(\|W\|_{1,\infty}\) simply equals the maximum absolute value of an element in the matrix. Therefore, normalization schemes similar to Eq. 10, can be employed to enforce the constraints in Eq. 11 by replacing the column-wise normalization with a row- or element-wise normalization where appropriate.
**Preserving expressive power.** Guaranteeing that the model is Lipschitz bounded is not sufficient; it must also be able to saturate the bound to be able to model all possible \(\text{Lip}^{1}\) functions. Some Lipschitz network architectures, _e.g._ Miyato et al. (2018), tend to over-constrain the model such that it cannot fit all \(\text{Lip}^{1}\) functions due to _gradient attenuation_. For many problems this is a rather theoretical issue. However, it becomes a practical problem for the monotonic architecture since it often operates at the edges of its constraints, for instance when partial derivatives close to zero are required, see Figure 1. As a simple example, the authors of Huster et al. (2018) showed that ReLU networks are unable to fit the function \(f(x)=|x|\) if the layers are norm-constrained with \(\lambda=1\). The reason lies in the fact that ReLU, and most other commonly used activations, do not have unit gradient with respect to the inputs over their entire domain. While monotonic element-wise activations like ReLU cannot have unit gradient almost everywhere without being exactly linear, the authors of Anil et al. (2019) explore activations that introduce non-linearities by reordering elements of the input vector. They propose **GroupSort** as an alternative to point-wise activations, and it is defined as follows:
\[\sigma_{G}(\mathbf{x}) =\text{sort}_{1:G}(\mathbf{x}_{1:G})+\text{sort}_{G+1:2G}(\mathbf{x}_{G +1:2G})+\ldots\] \[=\sum_{i=0}^{n/G-1}\text{sort}_{iG+1:(i+1)G}(\mathbf{x}_{iG+1:(i+1)G}), \tag{12}\]
where \(\mathbf{x}\in\mathbb{R}^{n}\), \(\mathbf{x}_{i:j}=\mathbf{1}_{i:j}\odot\mathbf{x}\), and \(\text{sort}_{i:j}\) orders the elements of a vector from indices \(i\) to \(j\) and leaves the other elements in place. This activation sorts an input vector in chunks (groups) of a fixed size \(G\). The GroupSort operation has a gradient of unity with respect to every input, giving architectures constrained with Eq. 8 greatly increased expressive power. In fact, Anil et al. (2019) prove that GroupSort networks with the normalization scheme in Eq. 11 are universal approximators of \(\text{Lip}^{1}\) functions. Therefore, these networks fulfill the two requirements outlined in the beginning of this section.
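A compact PyTorch version of the GroupSort operation in Eq. 12 is sketched below (the function name and the divisibility assertion are ours). It sorts the last dimension in contiguous groups of size \(G\); since sorting only permutes entries, the gradient norm is preserved.

```python
import torch

def group_sort(x: torch.Tensor, group_size: int) -> torch.Tensor:
    """GroupSort activation, Eq. 12: sort contiguous chunks of the last dimension."""
    n = x.shape[-1]
    assert n % group_size == 0, "feature dimension must be divisible by group_size"
    grouped = x.reshape(*x.shape[:-1], n // group_size, group_size)
    return torch.sort(grouped, dim=-1).values.reshape(x.shape)
```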
For universal approximation to be possible, the activation function used needs to be gradient norm preserving (GNP), _i.e._, have gradient 1 almost everywhere. Householder activations are another instance of GNP activations of which GroupSort-2 is a special case (Singla et al., 2021). The Householder activation is defined as follows:
\[\sigma(\mathbf{z})=\begin{cases}\mathbf{z}&\mathbf{zv}>0\\ \mathbf{z}(\mathbf{1}-2\mathbf{v}\mathbf{v}^{T})&\mathbf{zv}\leq 0\end{cases} \tag{13}\]
Here, \(\mathbf{z}\) is the preactivation row vector and \(\mathbf{v}\) is any column unit vector. Householder Lipschitz Networks naturally inherit the universal approximation property.
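The Householder activation of Eq. 13 also admits a short implementation; the sketch below follows the row-vector convention above and assumes `v` is a unit vector (how `v` is chosen or learned is left open here, and the function name is ours).

```python
import torch

def householder_activation(z: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Eq. 13: identity if z.v > 0, otherwise reflect z about the hyperplane normal to v."""
    zv = (z @ v).unsqueeze(-1)            # inner products z.v, shape (..., 1)
    reflected = z - 2.0 * zv * v          # z (I - 2 v v^T) for row vectors z
    return torch.where(zv > 0, z, reflected)
```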
In summary, we have constructed a neural network architecture \(f(\mathbf{x})\) via Eq. 4 that can provably approximate all monotonic Lipschitz bounded functions. The Lipschitz constant of the model can be increased arbitrarily by controlling the parameter \(\lambda\) in our construction.
## 4 Experiments
"Beware of bugs in the above code, I have only proved it correct, not tried it" (Knuth). In the spirit of Donald Knuth, in this section we test our algorithm on many different domains to show that it works well in practice and gives competitive results, as should be expected from a universal approximator.
### Toy Example
Figure 2 shows a toy example where both a monotonic and an unconstrained network are trained to regress on a noisy one-dimensional dataset. The true underlying model used here is monotonic, though an added heteroskedastic Gaussian noise term can obscure this in any realization. As can be seen in Figure 2, no matter how the data are distributed at the edge of the support, the monotonic Lipschitz network is always non-decreasing outside of the support as guaranteed by our architecture. Such out-of-distribution guarantees can be extremely valuable in cases where domain knowledge dictates monotonic behavior is either required or desirable.
### Real-Time Decision-Making at 40 MHz at the LHC
Because many physical systems are modeled with well-known theoretical frameworks that dictate the properties of the system, monotonicity can be a crucial inductive bias in the physical sciences.
For instance, modeling enthalpy, a thermodynamic quantity measuring the total heat content of a system, in a simulator requires a monotonic function of temperature for fixed pressure (as is known from basic physical principles).
In this section, we describe a real-world physics application which requires monotonicity in certain features--and robustness in all of them. The algorithm described here has, in fact, been implemented by a high-energy particle physics experiment at the European Center for Nuclear Research (CERN), and is actively being used to collect data at the Large Hadron Collider (LHC) in 2022, where high-energy proton-proton collisions occur at 40 MHz.
The sensor arrays of the LHC experiments produce data at a rate of over 100 TB/s. Drastic data-reduction is performed by custom-built read-out electronics; however, the annual data volumes are still \(O(100)\) exabytes, which cannot be put into permanent storage. Therefore, each LHC experiment processes its data in real time, deciding which proton-proton collision events should be kept and which should be discarded permanently; this is referred to as _triggering_ in particle physics. To be suitable for use in trigger systems, classification algorithms must be robust against the impact of experimental instabilities that occur during data taking--and deficiencies in simulated training samples. Our training samples cannot possibly account for the unknown new physics that we hope to learn by performing the experiments!
A ubiquitous inductive bias at the LHC is that outlier collision events are more interesting, since we are looking for physics that has never been observed before. However, uninteresting outliers are frequently caused by experimental imperfections, many of which are included and labeled as background in training. Conversely, it is not possible to include the set of all possible interesting outliers _a priori_ in the training. A solution to this problem is to implement _outliers are better_ directly using our expressive monotonic Lipschitz architecture from Section 3.
Our architecture was originally developed for the task of classifying the decays of heavy-flavor particles produced at the LHC. These are bound states containing a beauty or charm quark that travel an observable distance \(\mathcal{O}(1\,\mathrm{cm})\) before decaying due to their (relatively) long lifetimes. This example uses a dataset of simulated proton-proton (\(pp\)) collisions in the LHCb detector. Charged particles recorded by LHCb are combined pairwise into decay-vertex (DV) candidates. The task concerns discriminating DV candidates corresponding to heavy-flavor decays from all other sources. Heavy-flavor DVs typically have substantial separation from the \(pp\) collision point, due to the relatively long heavy-flavor particle lifetimes, and large transverse momenta, \(p_{\mathrm{T}}\), of the component particles, due to the large heavy-flavor particle masses. The main sources of background DVs, described in Kitouni et al. (2021), mostly have small displacement and small \(p_{\mathrm{T}}\), though unfortunately they can also have extremely large values of both displacement and momentum.
Figure 2: Our monotonic architecture (green) and an unconstrained network (red) trained on two realizations (purple data points) of a one dimensional dataset. The shaded regions are where training data were absent. Each model is trained using 10 random initialization seeds. The dark lines are averages over the seeds, which are each shown as light lines. The unconstrained models exhibit overfitting of the noise, non-monotonic behavior, and highly undesirable and unpredictable results when extrapolating beyond the region occupied by the training data. Conversely, the monotonic Lipschitz models are always monotonic, even in scenarios where the noise is strongly suggestive of non-monotonic behavior. In addition, the Lipschitz constraint produces much smoother models.
Figure 3 shows a simplified version of this problem using only the two most-powerful inputs. Our inductive bias requires a monotonically increasing response in both features (a detailed discussion motivating this bias can be found in Kitouni et al. (2021)). We see that an unconstrained neural network rejects DVs with increasingly large displacements (lower right corner), and that this leads to a decrease of the signal efficiency (true positive rate) for large lifetimes. The unconstrained model violates our inductive bias. Figures 3 and 4 show that a monotonic BDT (Auguste et al., 2020) approach works here. However, the jagged decision boundary can cause problems in subsequent analysis of the data. Figure 3 also shows that our novel approach from Section 3 successfully produces a smooth and monotonic response, and Figure 4 shows that this provides the monotonic lifetime dependence we desire in the efficiency.
In addition, we note that the added benefit of guaranteed Lipschitz robustness is a major advantage for many real world applications. Specifically for particle physicists, this kind of robustness directly translates to important guarantees when considering experimental instabilities.
Due to the simplicity and practicality of our method, the LHCb experiment is now using the proposed architecture for real-time data selection at a data rate of about \(40\,\)Tbit/s.
Figure 4: From Kitouni et al. (2021): True positive rate (efficiency) of each model shown in Figure 3 versus the proper lifetime of the decaying heavy-quark particle selected. The monotonic models produce a nearly uniform efficiency above a few picoseconds at the expense of a few percent lifetime-integrated efficiency. Such a trade off is desirable as explained in the text.
Figure 3: From Kitouni et al. (2021): Simplified version of the heavy-quark selection problem using only two inputs, which permits displaying the response everywhere in the feature space; shown here as a heat map with more signal-like (background-like) regions colored blue (red). The dark solid line shows the decision boundary (upper right regions are selected). Shown are (left) a standard fully connected neural network, (middle) a monotonic BDT, and (right) our architecture. The quantities shown on the horizontal and vertical axes are related to how long the particle lived before decaying and how massive the particle was, respectively.
### Public datasets with monotonic dependence
In this section, we follow as closely as possible the experiments done in Liu et al. (2020), and some experiments done in Sivaraman et al. (2020) to be able to directly compare to state-of-the-art monotonic architectures. Liu et al. (2020) studied monotonic architectures on four different datasets: COMPAS (Larson and Kirchner, 2016), BlogFeedback (Buza, 2014), LoanDefaulter (Kaggle, 2015), and ChestXRay (Wang et al., 2017). From Sivaraman et al. (2020) we compare against one regression and one classification task: AutoMPG (Dua and Graff, 2017) and HeartDisease (Gennari et al., 1989). Results are shown in Table 1.
We also report results on an augmented version (i.e., with an additional monotonic feature added artificially) of CIFAR100 in Appendix A.
## 5 Limitations
We are working on improving the architecture as follows: First, common initialization techniques are not optimal for weight-normed networks (Arpit et al., 2019). Simple modifications to the weight initialization might aid convergence, especially for large Lipschitz parameters. Secondly, we are currently constrained to activation functions that have a gradient norm of \(1\) over their entire domain, such as **GroupSort**, to ensure universal approximation, see Anil et al. (2019). We will explore other options in the future. Lastly, there is not yet a proof for universal approximation for the architecture described in Eq. 8. However, it appears from empirical investigation that the networks do approximate universally, as we have yet to find a function that could not be approximated well enough with a deep enough network. We do not consider this a major drawback, as the construction in Eq. 11 does approximate universally, see Anil et al. (2019). Note that none of these limitations have any visible impact on the performance of the experiments in Section 4.
## 6 Conclusion and Future Work
We presented an architecture that provably approximates Lipschitz continuous and partially monotonic functions. Monotonic dependence is enforced via an end-to-end residual connection to a minimally Lip\({}^{1}\) constrained fully connected neural network. This method is simple to implement, has negligible computational overhead, and gives stronger guarantees than regularized models. Our architecture achieves competitive results with respect to current state-of-the-art monotonic architectures, even when using a tiny number of parameters, and has the additional benefit of guaranteed robustness due to its known Lipschitz constant. For future directions of this line of research, we plan to tackle the problems outlined in the limitation section, especially improving initialization of weight-normed networks.
\begin{table}
\begin{tabular}{c|c c}
\multicolumn{3}{c}{**COMPAS**} \\ \hline
Method & Parameters & \(\uparrow\) Test Acc \\ \hline
Certified & 23112 & \((68.8\pm 0.2)\%\) \\
**LMN** & **37** & \((\mathbf{69.3\pm 0.1})\%\) \\
\end{tabular}
\begin{tabular}{c|c c}
\multicolumn{3}{c}{**BlogFeedback**} \\ \hline
Method & Parameters & \(\downarrow\) RMSE \\ \hline
Certified & 8492 & \(.158\pm.001\) \\
**LMN** & **2225** & \(\mathbf{.160\pm.001}\) \\
**LMN mini** & **177** & \(\mathbf{.155\pm.001}\) \\
\end{tabular}
\begin{tabular}{c|c c}
\multicolumn{3}{c}{**LoanDefaulter**} \\ \hline
Method & Parameters & \(\uparrow\) Test Acc \\ \hline
Certified & 8502 & \((65.2\pm 0.1)\%\) \\
**LMN** & **753** & \((\mathbf{65.44\pm 0.03})\%\) \\
**LMN mini** & **69** & \((\mathbf{65.28\pm 0.01})\%\) \\
\end{tabular}
\begin{tabular}{c|c c}
\multicolumn{3}{c}{**ChestXRay**} \\ \hline
Method & Parameters & \(\uparrow\) Test Acc \\ \hline
Certified & 12792 & \((62.3\pm 0.2)\%\) \\
Certified E-E & 12792 & \((66.3\pm 1.0)\%\) \\
**LMN** & **1043** & \((\mathbf{67.6\pm 0.6})\%\) \\
**LMN E-E** & **1043** & \((\mathbf{70.0\pm 1.4})\%\) \\
\end{tabular}
\begin{tabular}{c|c}
\multicolumn{2}{c}{**Heart Disease**} \\ \hline
Method & \(\uparrow\) Test Acc \\ \hline
COMET & \((86\pm 3)\%\) \\
**LMN** & \((89.6\pm 1.9)\%\) \\
\end{tabular}
\begin{tabular}{c|c}
\multicolumn{2}{c}{**Auto MPG**} \\ \hline
Method & \(\downarrow\) MSE \\ \hline
COMET & \((8.81\pm 1.81)\%\) \\
**LMN** & \((\mathbf{7.58\pm 1.2})\%\) \\
\end{tabular}
\end{table}
Table 1: We compare our method (in bold) against state-of-the-art monotonic models (we only show the best) on a variety of benchmarks. The performance numbers for other techniques were taken from Liu et al. (2020) and Sivaraman et al. (2020). In the ChestXRay experiment, we train one model with frozen ResNet18 weights (second to last) and another with end-to-end training (last). While our models can generally get quite small, we can achieve even smaller models when only taking a subset of all the features. These models are denoted with “mini”.
## 7 Reproducibility Statement
All experiments with public datasets are reproducible with the code provided at [https://github.com/niklasnolte/monotonic_tests](https://github.com/niklasnolte/monotonic_tests). This code uses the package available in [https://github.com/niklasnolte/MonotoneNorm](https://github.com/niklasnolte/MonotoneNorm), which is meant to be a standalone pytorch implementation of Lipschitz Monotonic Networks. The experiments in Section 4.2 were made with data that is not publicly available. The code to reproduce those experiments can be found under [https://github.com/niklasnolte/HLT_2Track](https://github.com/niklasnolte/HLT_2Track) and the data will be made available in later years at the discretion of the LHCb collaboration.
#### Acknowledgments
This work was supported by NSF grant PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, [http://iaifi.org/](http://iaifi.org/)).
|
2306.10347 | DCdetector: Dual Attention Contrastive Representation Learning for Time
Series Anomaly Detection | Time series anomaly detection is critical for a wide range of applications.
It aims to identify deviant samples from the normal sample distribution in time
series. The most fundamental challenge for this task is to learn a
representation map that enables effective discrimination of anomalies.
Reconstruction-based methods still dominate, but the representation learning
with anomalies might hurt the performance with its large abnormal loss. On the
other hand, contrastive learning aims to find a representation that can clearly
distinguish any instance from the others, which can bring a more natural and
promising representation for time series anomaly detection. In this paper, we
propose DCdetector, a multi-scale dual attention contrastive representation
learning model. DCdetector utilizes a novel dual attention asymmetric design to
create the permutated environment and pure contrastive loss to guide the
learning process, thus learning a permutation invariant representation with
superior discrimination abilities. Extensive experiments show that DCdetector
achieves state-of-the-art results on multiple time series anomaly detection
benchmark datasets. Code is publicly available at
https://github.com/DAMO-DI-ML/KDD2023-DCdetector. | Yiyuan Yang, Chaoli Zhang, Tian Zhou, Qingsong Wen, Liang Sun | 2023-06-17T13:40:15Z | http://arxiv.org/abs/2306.10347v2 | # DCdetector: Dual Attention Contrastive Representation Learning for Time Series Anomaly Detection
###### Abstract.
Time series anomaly detection is critical for a wide range of applications. It aims to identify deviant samples from the normal sample distribution in time series. The most fundamental challenge for this task is to learn a representation map that enables effective discrimination of anomalies. Reconstruction-based methods still dominate, but the representation learning with anomalies might hurt the performance with its large abnormal loss. On the other hand, contrastive learning aims to find a representation that can clearly distinguish any instance from the others, which can bring a more natural and promising representation for time series anomaly detection. In this paper, we propose DCdetector, a multi-scale dual attention contrastive representation learning model. DCdetector utilizes a novel dual attention asymmetric design to create the permutated environment and pure contrastive loss to guide the learning process, thus learning a permutation invariant representation with superior discrimination abilities. Extensive experiments show that DCdetector achieves state-of-the-art results on multiple time series anomaly detection benchmark datasets. Code is publicly available at this URL.1.
time series anomaly detection, contrastive learning, representation learning, self-supervised learning +
Footnote †: The first three authors contributed equally to this research.
Footnote †: Work done as an intern in DAMO Academy, Alibaba Group. He is now with the Department of Computer Science, University of Oxford, OX1 3SA, Oxford, UK.
Footnote †: Corresponding author.
Existing methods can be broadly classified as statistical, classic machine learning, and deep learning-based methods (Golovolovolov et al., 2013; Golovolovolov and LeCun, 2015). Machine learning methods, especially deep learning-based methods, have succeeded greatly due to their powerful representation advantages. Most of the supervised and semi-supervised methods (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) cannot handle the challenge of limited labeled data, especially as anomalies are dynamic and new anomalies never observed before may occur. Unsupervised methods are popular without strict requirements on labeled data, including one-class classification-based, probabilistic-based, distance-based, forecasting-based, and reconstruction-based approaches (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014).
Reconstruction-based methods learn a model to reconstruct normal samples, so the instances failing to be reconstructed by the learned model are anomalies. Such an approach is developing rapidly due to its power in handling complex data when combined with different machine learning models, and due to its interpretability: flagged instances are those behaving abnormally. However, it is usually challenging to learn a well-reconstructed model for normal data without being obstructed by anomalies. The situation is even worse in time series anomaly detection, as the number of anomalies is unknown and normal and abnormal points may appear in one instance, making it harder to learn a clean, well-reconstructed model for normal points.
Recently, contrastive representative learning has attracted attention due to its diverse design and outstanding performance in downstream tasks in the computer vision field (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). However, the effectiveness of contrastive representative learning still needs to be explored in the time-series anomaly detection area. In this paper, we propose a **Dual attention Contrastive representation** learning anomaly **detector** called **DCdetector** to handle the challenges in time series anomaly detection. The key idea of our DCdetector is that normal time series points share the latent pattern, which means normal points have strong correlations with other points. In contrast, the anomalies do not (_i.e._, weak correlations with others). Learning consistent representations for anomalies from different views will be hard but easy for normal points. The primary motivation is that if normal and abnormal points' representations are distinguishable, we can detect anomalies without a highly qualified reconstruction model.
Specifically, we propose a contrastive structure with two branches and a dual attention module, where the two branches share network weights. This model is trained based on the similarity of the two branches; as normal points are the majority, the representation inconsistency of anomalies will be conspicuous. Thus, the representation difference between normal and abnormal data is enlarged without a highly qualified reconstruction model. To capture the temporal dependency in time series, DCdetector utilizes patching-based attention networks as the basic module. A multi-scale design is proposed to reduce information loss during patching. DCdetector represents all channels efficiently with a channel independence design for multivariate time series. In particular, DCdetector does not require prior knowledge about anomalies and thus can handle new outliers never observed before. The main contributions of our DCdetector are summarized as follows:
* Architecture: A contrastive learning-based dual-branch attention structure is designed to learn a permutation invariant representation that enlarges the representation differences between normal points and anomalies. Also, channel independence patching is proposed to enhance local semantic information in time series. Multi-scale is proposed in the attention module to reduce information loss during patching.
* Optimization: An effective and robust loss function is designed based on the similarity of two branches. Note that the model is trained purely contrastively without reconstruction loss, which reduces distractions from anomalies.
* Performance & Justification: DCdetector achieves performance comparable or superior to state-of-the-art methods on six multivariate and one univariate time series anomaly detection benchmark datasets. We also provide justification discussion to explain how our model avoids collapse without negative samples.
## 2. Related Work
In this section, we show the related literature for this work. The relevant works include anomaly detection and contrastive representation learning.
Time Series Anomaly Detection. There are various approaches to detect anomalies in time series, including statistical methods, classical machine learning methods, and deep learning methods (Krizhevsky et al., 2014). Statistical methods include using moving averages, exponential smoothing (Song et al., 2015), and the autoregressive integrated moving average (ARIMA) model (Chen et al., 2016). Machine learning methods include clustering algorithms such as k-means (Krizhevsky et al., 2014) and density-based methods, as well as classification algorithms such as decision trees (Krizhevsky et al., 2014; Krizhevsky et al., 2014) and support vector machines (SVMs). Deep learning methods include using autoencoders, variational autoencoders (VAEs) (Krizhevsky et al., 2014; Krizhevsky et al., 2014), and recurrent neural networks (RNNs) (Krizhevsky et al., 2014; Krizhevsky et al., 2014) such as long short-term memory (LSTM) (Krizhevsky et al., 2014). Recent works in time series anomaly detection also include generative adversarial networks (GANs) based methods (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) and deep reinforcement learning (DRL) based methods (Krizhevsky et al., 2014; Krizhevsky et al., 2014). In general, deep learning methods are more effective in identifying anomalies in time series data, especially when the data is high-dimensional or non-linear.
In another view, time series anomaly detection models can be roughly divided into two categories: supervised and unsupervised anomaly detection algorithms. Supervised methods can perform better when the anomaly label is available or affordable. Such methods can be dated back to AutoEncoder (Krizhevsky et al., 2014), LSTM-VAE (Krizhevsky et al., 2014), Spectral Residual (SR) (Krizhevsky et al., 2014), RobustTAD (Krizhevsky et al., 2014) and so on. On the other hand, an unsupervised anomaly detection algorithm can be applied in cases where the anomaly labels are difficult to obtain. Such versatility results in the community's long-lasting interest in developing new unsupervised time-series anomaly detection methods, including DAGMM (Krizhevsky et al., 2014), OmniAnomaly (Krizhevsky et al., 2014), GDN (Krizhevsky et al., 2014), RDSSM (Krizhevsky et al., 2014) and so on. Unsupervised deep learning methods have been widely studied in time series anomaly detection. The main reasons are as follows. First, it is usually hard or unaffordable to get labels for all time series sequences in real-world applications. Second, deep models are powerful in representation learning and have the potential to get a decent detection accuracy under the unsupervised setting. Most of them are based on a reconstruction approach where a well-reconstructed model is learned for normal points; Then, the instances failing to be reconstructed are anomalies. Recently, some
self-supervised learning-based methods have been proposed to enhance the generalization ability in unsupervised anomaly detection (Sutskever et al., 2017; Wang et al., 2018; Wang et al., 2019).
Contrastive Representation Learning. The goal of contrastive representation learning is to learn an embedding space in which similar data samples stay close to each other while dissimilar ones are far apart. The idea of contrastive learning can be traced back to Inst-Dic (Wang et al., 2018). Classical contrastive models create <positive, negative> sample pairs to learn a representation where positive samples are near each other (pulled together) and far from negative samples (pushed apart).
Channel independence has been proven helpful in multivariate time series forecasting tasks (Srivastava et al., 2017; Wang et al., 2018) to reduce parameter numbers and overfitting issues. Our DCdetector follows such a channel independence setting to simplify the attention network with patching.
More specifically, the basic patching attention with channel independence is shown in Figure 3. Each channel in the multivariate time series input (\(\mathcal{X}\in\mathbb{R}^{T\times d}\)) is considered as a single time series (\(\mathcal{X}^{i}\in\mathbb{R}^{T\times 1},i=1,2,\ldots,d\)) and divided into patches. Each channel shares the same self-attention network, and the representation results (\(\mathcal{X}^{\prime i}\in\mathbb{R}^{N\times 1},i=1,2,\ldots,d\)) is concatenated as the final output (\(\mathcal{X}^{\prime}\in\mathbb{R}^{N\times d}\)). In the implementation phase, running a sliding window in time series data is widely used in time series anomaly detection tasks (Zhou et al., 2017; Wang et al., 2018) and has little influence on the main design. More implementation details are left in the experiment section.
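As a concrete illustration of this channel independence patching, the following sketch (hypothetical helper name, not the released implementation) folds the channel axis into the batch axis so that every channel is processed as an independent univariate series by the same shared attention network, as in Figure 3.

```python
import torch

def channel_independent_patches(x: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Split a window of shape (batch, T, d) into channel-independent patches.

    Each of the d channels is treated as a separate univariate series: the
    channel axis is folded into the batch axis, and the window of length T
    (assumed divisible by patch_size) is cut into N = T // patch_size patches.
    Returns a tensor of shape (batch * d, N, patch_size)."""
    b, t, d = x.shape
    n = t // patch_size
    x = x.permute(0, 2, 1)                  # (batch, d, T): one series per channel
    return x.reshape(b * d, n, patch_size)  # every channel becomes its own "sample"
```

All of these per-channel sequences then pass through the same shared attention network, and the outputs are concatenated back along the channel dimension to form the final representation.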
The Dual Attention Contrastive Structure module is critical in our design. It learns the representation of inputs in different views. The insight is that, for normal points, most of them will share the same latent pattern even in different views (a strong correlation is not easy to be destroyed). However, as anomalies are rare and do not have explicit patterns, it is hard for them to share latent modes with normal points or among themselves (_i.e._, anomalies have a weak correlation with other points). Thus, the difference will be slight for normal points representations in different views and large for anomalies. We can distinguish anomalies from normal points with a well-designed Representation Discrepancy criterion. The details of Dual Attention Contrastive Structure and Representation Discrepancy are left in the following Section 3.2 and Section 3.3.
As for the Anomaly Criterion, we calculate anomaly scores based on the discrepancy between the two representations and use a prior threshold for anomaly detection. The details are left in Section 3.4.
### Dual Attention Contrastive Structure
In DCdetector, we propose a contrastive representation learning structure with dual attention to get the representations of input time series from different views. Concretely, with patching operation, DCdetector takes patch-wise and in-patch representations as two views. Note that it differs from traditional contrastive learning, where original and augmented data are considered as two views of the original data. Moreover, DCdetector does not construct <positive, negative> pairs like the typical contrastive methods (Wang et al., 2018; Wang et al., 2018). Instead, its basic setting is similar to the contrastive methods only using positive samples (Wang et al., 2018; Wang et al., 2018).
#### 3.2.1. Dual Attention
As shown in Figure 2, input time series \(\mathcal{X}\in\mathbb{R}^{T\times d}\) are patched as \(\mathcal{X}\in\mathbb{R}^{P\times N\times d}\) where \(P\) is the size of patches and \(N\) is the number of patches. Then, we fuse the channel information with the batch dimension and the input size becomes \(\mathcal{X}\in\mathbb{R}^{P\times N}\). With such patched time series, DCdetector learns representation in patch-wise and in-patch views with self-attention networks. Our dual attention can be encoded in \(L\) layers, so for simplicity, we only use one of these layers as an example.
For the patch-wise representation, a single patch is considered as a unit, and the dependencies among patches are modeled by a multi-head self-attention network (named patch-wise attention). In detail, an embedded operation will be applied in the patch_size (\(P\)) dimension, and the shape of embedding is \(\mathcal{X}_{\mathcal{N}}\in\mathbb{R}^{N\times d_{model}}\). Then, we adopt multi-head attention weights to calculate the patch-wise representation. Firstly, initialize the query and key:
\[\mathcal{Q}_{N_{i}},\mathcal{K}_{N_{i}}=\mathcal{W}_{Q_{i}}\mathcal{X}_{N_{i}},\mathcal{W}_{Xi}\mathcal{X}_{N_{i}}\quad 1\leq i\leq H, \tag{1}\]
Figure 3. Basic patching attention with channel independence. Each channel in the multivariate time series input is considered as a single time series and divided into patches. Each channel shares the same self-attention network, and the representation results are concatenated as the final output.
Figure 2. The workflow of the DCdetector framework. DCdetector consists of four main components: Forward Process module, Dual Attention Contrastive Structure module, Representation Discrepancy module, and Anomaly Criterion module.
where \(Q_{N_{i}},\mathcal{K}_{N_{i}}\in\mathbb{R}^{N\times\frac{d_{model}}{H}}\) denote the query and key, respectively; \(\mathcal{W}_{Q_{i}},\mathcal{W}_{Xi}\in\mathbb{R}^{\frac{d_{model}}{H}\times \frac{d_{model}}{H}}\) represent learnable parameter matrices of \(Q_{N_{i}},\mathcal{K}_{N_{i}}\), and \(H\) is the head number. Then, compute the attention weights:
\[Attn_{N_{i}}=Softmax(\frac{Q_{N_{i}}\mathcal{K}_{N_{i}}^{T}}{\sqrt{d_{model}}}), \tag{2}\]
where the \(Softmax(\cdot)\) function normalizes the attention weights. Finally, concatenate the multi-head outputs to get the final patch-wise representation \(Attn_{N}\):
\[Attn_{N}=\text{Concat}(Attn_{N_{i}},\cdots,Attn_{N_{H}})W_{N}^{O}, \tag{3}\]
where \(W_{N}^{O}\in\mathbb{R}^{d_{model}\times d_{model}}\) is a learnable parameter matrix.
Similarly, for the in-patch representation, the dependencies of points in the same patch are gained by a multi-head self-attention network (called in-patch attention). Note that the patch-wise attention network shares weights with the in-patch attention network. Specifically, another embedded operation will be applied in the patch_number (\(N\)) dimension, and the shape of embedding is \(\mathcal{X}_{\mathcal{P}}\in\mathbb{R}^{P\times d_{model}}\). Then, we adopt multi-head attention weights to calculate the in-patch representation. First, initialize the query and key:
\[\mathcal{Q}_{P_{i}},\mathcal{K}_{P_{i}}=\mathcal{W}_{Q_{i}}\mathcal{X}_{P_{i} },\mathcal{W}_{Xi}\mathcal{X}_{P_{i}}\quad 1\leq i\leq H, \tag{4}\]
where \(\mathcal{Q}_{P_{i}},\mathcal{K}_{P_{i}}\in\mathbb{R}^{P\times\frac{d_{model}}{H}}\) denote the query and key, respectively, and \(\mathcal{W}_{Q_{i}},\mathcal{W}_{Xi}\in\mathbb{R}^{\frac{d_{model}}{H}\times\frac{d_{model}}{H}}\) represent learnable parameter matrices of \(\mathcal{Q}_{P_{i}},\mathcal{K}_{P_{i}}\). Then, compute the attention weights:
\[Attn_{P_{i}}=Softmax(\frac{\mathcal{Q}_{P_{i}}\mathcal{K}_{P_{i}}^{T}}{\sqrt{d _{model}}}), \tag{5}\]
where the \(Softmax(\cdot)\) function normalizes the attention weights. Finally, concatenate the multi-head outputs to get the final in-patch representation \(Attn_{\mathcal{P}}\):
\[Attn_{\mathcal{P}}=\text{Concat}(Attn_{P_{i}},\cdots,Attn_{P_{H}})W_{P}^{O}, \tag{6}\]
where \(W_{\mathcal{P}}^{O}\in\mathbb{R}^{d_{model}\times d_{model}}\) is a learnable parameter matrix.
Note that the \(\mathcal{W}_{Q_{i}},\mathcal{W}_{Xi}\) are the shared weights within the in-patch attention representation network and patch-wise attention representation network.
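The two views can be sketched in a few lines of PyTorch. The snippet below is a simplified single-head, single-layer illustration with hypothetical names (the actual model uses \(H\) heads, \(L\) layers, and multi-scale patching); the points it mirrors are that the two views attend along different axes of the patched input and that they share the query/key projections.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Single-head, single-layer sketch of the two attention views.

    The input x has shape (batch, N, P): N patches of P points for one
    (channel-folded) series. The patch-wise view embeds each patch (length P)
    and attends over the N patches; the in-patch view embeds along the patch
    number axis (length N) and attends over the P positions inside a patch.
    The query/key projections are shared between the two views, and only the
    attention weights are returned, mirroring Eqs. (2) and (5)."""

    def __init__(self, patch_size: int, num_patches: int, d_model: int):
        super().__init__()
        self.embed_n = nn.Linear(patch_size, d_model)        # patch-wise embedding
        self.embed_p = nn.Linear(num_patches, d_model)       # in-patch embedding
        self.w_q = nn.Linear(d_model, d_model, bias=False)   # shared W_Q
        self.w_k = nn.Linear(d_model, d_model, bias=False)   # shared W_K
        self.scale = d_model ** 0.5

    def attn(self, h: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.w_q(h) @ self.w_k(h).transpose(-1, -2) / self.scale, dim=-1)

    def forward(self, x: torch.Tensor):
        x_n = self.embed_n(x)                   # (batch, N, d_model)
        x_p = self.embed_p(x.transpose(1, 2))   # (batch, P, d_model)
        return self.attn(x_n), self.attn(x_p)   # (batch, N, N), (batch, P, P)
```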
#### 3.2.2. Up-sampling and Multi-scale Design
Although the patching design benefits from gaining local semantic information, patch-wise attention ignores the relevance among points in a patch, and in-patch attention ignores the relevance among patches. To compare the results of two representation networks, we need to do up-sampling first. For the patch-wise branch, as we only have the dependencies among patches, repeating is done inside patches (_i.e._, from patch to points) for up-sampling, and we will get the final patch-wise representation \(\mathcal{N}\). For the in-patch branch, as only dependencies among patch points are gained, repeating is done from "one" patch to a full number of patches, and we will get the final in-patch representation \(\mathcal{P}\).
A simple example is shown in Figure 4 where a patch is noted as \(P^{i}\), and a point is noted as \(p_{i}\). Such patching and repeating up-sampling operations inevitably lead to information loss. To keep the information from the original data better, DCdetector introduces a multi-scale design for patching representation and up-sampling. The final representation concatenates results in different scales (_i.e._, patch sizes). Specifically, we can preset a list of various patches to perform parallel patching and the computation of dual attention representations, simultaneously. After upsampling each patch part, they are summed to obtain the final patch-wise representation \(\mathcal{N}\) and in-patch representation \(\mathcal{P}\).
\[\mathcal{N}=\sum_{Patch\ list}\text{Upsampling}(Attn_{\mathcal{N}}); \tag{7}\]
\[\mathcal{P}=\sum_{Patch\ list}\text{Upsampling}(Attn_{\mathcal{P}}). \tag{8}\]
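The repeat-based up-sampling and the multi-scale summation of Eqs. (7)-(8) can be sketched as follows, treating the two dual-attention outputs as \((\text{batch},N,N)\) and \((\text{batch},P,P)\) weight matrices; `dual_attention_fn` is a hypothetical stand-in for the module above, and the exact tensor layout of the released code may differ.

```python
import torch

def multi_scale_views(x: torch.Tensor, window: int, patch_sizes, dual_attention_fn):
    """Sketch of Eqs. (7)-(8): run dual attention at every patch size, up-sample
    both views back to the window length by repetition, and sum over scales.

    dual_attention_fn(x, patch_size) is assumed to return the (batch, N, N)
    patch-wise and (batch, P, P) in-patch attention maps."""
    n_total, p_total = 0.0, 0.0
    for p in patch_sizes:
        n = window // p
        attn_n, attn_p = dual_attention_fn(x, p)
        # patch-wise view: repeat each patch entry for every point inside it
        n_total = n_total + attn_n.repeat_interleave(p, dim=-1).repeat_interleave(p, dim=-2)
        # in-patch view: tile the single-patch weights across all N patches
        p_total = p_total + attn_p.repeat(1, n, n)
    return n_total, p_total   # both (batch, window, window)
```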
#### 3.2.3. Contrastive Structure
Patch-wise and in-patch branches output representations of the same input time series in two different views. As shown in Figure 2 (c), patch-wise sample representation learns a weighted combination between sample points in the same position from each patch. In-patch sample representation, on the other hand, learns a weighted combination between points within the same patch. We can treat these two representations as permutated multi-view representations. The key inductive bias we exploit here is that normal points can maintain their representation under permutations while the anomalies can not. From such dual attention non-negative contrastive learning, we want to learn a **permutation invariant representation**. Learning details are left in Section 3.3.
### Representation Discrepancy
With dual attention contrastive structure, representations from two views (patch-wise branch and in-patch branch) are gained. We formalize a loss function based on Kullback-Leibler divergence (KL divergence) to measure the similarity of such two representations. The intuition is that, as anomalies are rare and normal points share latent patterns, the same inputs' representations should be similar.
#### 3.3.1. Loss function definition
Define similarity metric \(\mathcal{D}\) of two output representation vectors \(\mathcal{P}\), \(\mathcal{N}\) as \(\mathcal{D}(\mathcal{P},\mathcal{N})=KL(\mathcal{P}||\mathcal{N})\) where \(KL(\cdot||\cdot)\) is the KL divergence distance. The loss function in DCdetector is then defined as
\[\mathcal{L}(\mathcal{P},\mathcal{N};\mathcal{X})=\frac{1}{2}\mathcal{D}( \mathcal{P},\text{Stopgrad}(\mathcal{N}))+\frac{1}{2}\mathcal{D}(\mathcal{N}, \text{Stopgrad}(\mathcal{P})) \tag{9}\]
where \(\mathcal{X}\) is the input time series, \(\mathcal{P}\) and \(\mathcal{N}\) are the representation result matrices of the in-patch branch and the patch-wise branch, respectively. A stop-gradient operation (labeled as 'stopgrad') is also used in our loss function to train the two branches asynchronously.
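A minimal sketch of this loss, assuming both representations are already row-normalized distributions over time points (function names are illustrative, not the released code):

```python
import torch

def kl_rowwise(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """KL(p || q) over the last dimension, averaged over all rows."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return (p * (p.log() - q.log())).sum(dim=-1).mean()

def dcdetector_loss(p_repr: torch.Tensor, n_repr: torch.Tensor) -> torch.Tensor:
    """Eq. (9): purely contrastive loss with a stop-gradient on the opposite
    branch and no reconstruction term."""
    return 0.5 * kl_rowwise(p_repr, n_repr.detach()) \
         + 0.5 * kl_rowwise(n_repr, p_repr.detach())
```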
Unlike most anomaly detection works based on the reconstruction framework, our loss contains no reconstruction term. Reconstruction-based methods detect the anomalies which do not behave as expected; however, it is not easy to build a suitable encoder and decoder to 'reconstruct' the time series as they are expected to be under the anomalies' interference. Moreover, the ability of representation is restricted as the latent pattern information is not fully considered.
#### 3.3.2. Discussion about Model Collapse
Interestingly, with only single-type inputs (or saying, no negative samples included), our DCdetector model does not fall into a trivial solution (model collapse). SimSiam (Serban et al., 2017) gives the main credit for avoiding model collapse to stop gradient operation in their setting. However, we find that DCdetector still works without stop gradient operation, although with the same parameters, the no stop gradient version does not gain the best performance. Details are shown in the ablation study (Section 4.5.1).
A possible explanation is that our two branches are totally asymmetric. Following the unified perspective proposed in (Zhou et al., 2017), consider the output vector of a branch as \(Z\) and \(Z\) can be decomposed into two parts \(o\) and \(r\) as \(Z=o+r\), where \(o=\mathbf{E}[Z]\) is the center vector defined as an average of \(Z\) in the whole representation space, and \(r\) is the residual vector. When the collapse happens, all vectors \(Z\) fall into the center vector \(o\) and \(o\) dominates over \(r\). With two branches noted as \(Z_{p}=o_{p}+r_{p}\), \(Z_{n}=o_{n}+r_{n}\), if the branches are symmetric, _i.e._, \(o_{p}=o_{n}\), then the distance between them is \(Z_{p}-Z_{n}=r_{p}-r_{n}\). As \(r_{p}\) and \(r_{n}\) come from the same input example, it will lead to collapse. Fortunately, the two branches in DCdetector are asymmetric, so it is not easy for \(o_{p}\) to be the same as \(o_{n}\) even when \(r_{p}\) and \(r_{n}\) are similar. Thus, due to our asymmetric design, DCdetector is hard to fall into a trivial solution.
### Anomaly Criterion
With the insight that normal points usually share latent patterns (with strong correlation among them), thus the distances of representation results from different views for normal points are less than that for anomalies. The final anomaly score of \(\mathcal{X}\in\mathbb{R}^{T\times d}\) is defined as
\[\mathsf{AnomalyScore}(X)=\frac{1}{2}\mathcal{D}(\mathcal{P},\mathcal{N})+ \frac{1}{2}\mathcal{D}(\mathcal{N},\mathcal{P}). \tag{10}\]
It is a point-wise anomaly score, and anomalies result in higher scores than normal points.
Based on the point-wise anomaly score, a hyperparameter threshold \(\delta\) is used to decide if a point is an anomaly (1) or not (0). If the score exceeds the threshold, the output \(\mathcal{Y}\) is an anomaly. That is
\[\mathcal{Y}_{t}=\begin{cases}1:\text{anomaly}&\text{ AnomalyScore}(X_{t})\geq\delta\\ 0:\text{normal}&\text{ AnomalyScore}(X_{t})<\delta\end{cases} \tag{11}\]
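Eqs. (10)-(11) translate directly into code; the sketch below keeps one score per time point (no batch-mean reduction) and is again an illustration rather than the released implementation.

```python
import torch

def anomaly_scores(p_repr: torch.Tensor, n_repr: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (10): symmetric divergence between the two views, kept per time
    point, so anomalies receive higher scores than normal points."""
    p, n = p_repr.clamp_min(eps), n_repr.clamp_min(eps)
    kl_pn = (p * (p.log() - n.log())).sum(dim=-1)
    kl_np = (n * (n.log() - p.log())).sum(dim=-1)
    return 0.5 * kl_pn + 0.5 * kl_np

def detect(scores: torch.Tensor, delta: float) -> torch.Tensor:
    """Eq. (11): a point is flagged as anomalous when its score exceeds delta."""
    return (scores >= delta).long()
```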
## 4. Experiments
### Benchmark Datasets
We adopt seven representative benchmarks from five real-world applications to evaluate DCdetector: (1) **MSL** (Mars Science Laboratory dataset) is collected by NASA and shows the condition of the sensors and actuator data from the Mars rover (Sandhi et al., 2017). (2) **SMAP** (Soil Moisture Active Passive dataset) is also collected by NASA and presents the soil samples and telemetry information used by the Mars rover (Sandhi et al., 2017). Compared with MSL, SMAP has more point anomalies. (3) **PSM** (Pooled Server Metrics dataset) is a public dataset from eBay Server Machines with 25 dimensions (Bartos et al., 2016). (4) **SMD** (Server Machine Dataset) is a five-week-long dataset collected from an internet company compute cluster, which stacks accessed traces of resource utilization of 28 machines (Zhou et al., 2017). (5) **NIPS-TS-SWAN** is an openly accessible comprehensive, multivariate time series benchmark extracted from solar photospheric vector magnetograms in Spacewather HMI Active Region Patch series (Han et al., 2017; Wang et al., 2018). (6) **NIPS-TS-GECCO** is a drinking water quality dataset for the 'internet of things', which is published in the 2018 genetic and evolutionary computation conference (Wang et al., 2018; Wang et al., 2018). Besides the above multivariate time series datasets, we also test univariate time series datasets. (7) **UCR** is provided by the Multi-dataset Time Series Anomaly Detection Competition of KDD2021, and contains 250 sub-datasets from various natural sources (Kumar et al., 2018; Zhou et al., 2017). It is a univariate time series of dataset subsequence anomalies. In each time series, there is one and only one anomaly. More details of the seven benchmark datasets are summarized in Table 8 in Appendix B.
### Baselines and Evaluation Criteria
We compare our model with the 26 baselines for comprehensive evaluations, including the reconstruction-based model: AutoEncoder (Xu et al., 2018), LSTM-VAE (Wang et al., 2018), OmniAnomaly (Zhou et al., 2018), BeatGAN (Xu et al., 2018), InterFusion (Zhou et al., 2018), Anomaly Transformer (Xu et al., 2018); the autoregression (Zhou et al., 2018), LSTM-RNN (Xu et al., 2018), LSTM (Sandhi et al., 2017), CL-MPPCA (Zhou et al., 2018); the density-estimation models: LOF (Xu et al., 2018), MPCACD (Xu et al., 2018), DAGMM (Xu et al., 2018); the clustering-based methods: DeepSVDD (Xu et al., 2018), THOC (Xu et al., 2018), ITAD (Xu et al., 2018); the classic methods: OCSVM (Zhou et al., 2018), OCSVM-based subsequence clustering (OCSVM*), IForest (Zhou et al., 2018), IForest-based subsequence clustering (IForest*), Gradient boosting regression (GBRT) (Zhou et al., 2018); the change point detection and time series segmentation methods: BOCD (Bartos et al., 2016), U-Time (Zhou et al., 2018), TS-CP2 (Zhou et al., 2018). We also compare our model with a time-series subsequence anomaly detection algorithm Matrix Profile (Xu et al., 2018).
Besides, we adopt various evaluation criteria for comprehensive comparison, including the commonly-used evaluation measures: accuracy, precision, recall, F1-score; the recently proposed evaluation measures: affiliation precision/recall pair (Xu et al., 2018) and Volume under the surface (VUS) (Zhou et al., 2018). F1-score is the most widely used metric but does not consider anomaly events. Affiliation precision and recall are calculated based on the distance between ground truth and prediction events. VUS metric takes anomaly events into consideration based on the receiver operator characteristic (ROC) curve. Different metrics provide different evaluation views. We employ the commonly-used adjustment technique for a fair comparison (Xu et al., 2018; Wang et al., 2018; Wang et al., 2018), according to which all abnormalities in an abnormal segment are considered to have been detected if a single time point in an abnormal segment is identified.
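For reference, the commonly used point-adjustment step can be written as a short routine (a standard formulation, shown here for clarity rather than taken from the paper's code):

```python
import numpy as np

def point_adjust(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """If any point inside a ground-truth anomaly segment is predicted as
    anomalous, mark the whole segment as detected (standard adjustment)."""
    pred = pred.copy()
    in_segment, start = False, 0
    for i, label in enumerate(gt):
        if label == 1 and not in_segment:
            in_segment, start = True, i
        if in_segment and (label == 0 or i == len(gt) - 1):
            end = i if label == 0 else i + 1
            if pred[start:end].any():
                pred[start:end] = 1
            in_segment = False
    return pred
```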
### Implementation Details
We summarize all the default hyper-parameters as follows in our implementation. Our DCdetector model contains three encoder layers (\(L=3\)). The dimension of the hidden state \(d_{model}\) is 256, and the number of attention head \(H\) is 1 for simplicity. We select various patch size and window size options for different datasets, as shown in Table 8 in Appendix B. Our model defines an anomaly as a time point whose anomaly score exceeds a hyperparameter threshold
\(\delta\), and its default value is set to 1. For all experiments on the above hyperparameter selection and trade-off, please refer to Appendix C. Besides, all the experiments are implemented in PyTorch (Paszke et al., 2017) with one NVIDIA Tesla-V100 32GB GPU. Adam (Kingma and Ba, 2014) with default parameters is applied for optimization. We set the initial learning rate to \(10^{-4}\) and the batch size to 128 with 3 epochs for all datasets.
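Collecting these defaults in one place (window and patch sizes are dataset-specific per Table 8, so the last two entries below are placeholders):

```python
# Illustrative collection of the defaults quoted above; win_size and
# patch_sizes vary per dataset (Table 8), so those values are placeholders.
default_config = {
    "encoder_layers": 3,        # L
    "d_model": 256,             # hidden dimension
    "attention_heads": 1,       # H
    "anomaly_threshold": 1.0,   # delta
    "learning_rate": 1e-4,
    "batch_size": 128,
    "epochs": 3,
    "win_size": 60,             # placeholder, dataset-dependent
    "patch_sizes": [3, 5],      # placeholder, dataset-dependent
}
```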
### Main Results
#### 4.4.1. Multivariate Anomaly Detection
We first evaluate our DCdetector with nineteen competitive baselines on four real-world multivariate datasets as shown in Table 1. It can be seen that our proposed DCdetector achieves SOTA results under the widely used F1 metric (Wang et al., 2017; Wang et al., 2017) in most benchmark datasets. It is worth mentioning that there has been intense discussion among recent studies about how to evaluate the performance of anomaly detection algorithms fairly. Precision, Recall, and F1 score are still the most widely used metrics for comparison. Some additional metrics (affiliation precision/recall pair, VUS, etc.) are proposed to complement their deficiencies (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Judging which metric is best is beyond the scope of our work, so we include all the metrics here. As the recent
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c} \hline \hline
**Dataset** & \multicolumn{3}{c|}{**MSL**} & \multicolumn{3}{c|}{**SMAP**} & \multicolumn{3}{c|}{**PSM**} & \multicolumn{3}{c}{**SMD**} \\ \hline
**Metric** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline LOF & 47.72 & 85.25 & 61.18 & 58.93 & 56.33 & 57.60 & 57.89 & 90.49 & 70.61 & 56.34 & 39.86 & 46.68 \\ OCSVM & 59.78 & 86.87 & 70.82 & 53.85 & 59.07 & 56.34 & 62.75 & 80.89 & 70.67 & 44.34 & 76.72 & 56.19 \\ U-Time & 57.20 & 71.66 & 63.62 & 49.71 & 56.18 & 52.75 & 82.85 & 79.34 & 81.06 & 65.95 & 74.75 & 70.07 \\ IForest & 53.94 & 86.54 & 66.45 & 52.39 & 59.07 & 55.53 & 76.09 & 92.45 & 83.48 & 42.31 & 73.29 & 53.64 \\ DAGMM & 89.60 & 63.93 & 74.62 & 86.45 & 56.73 & 68.51 & 93.49 & 70.03 & 80.08 & 67.30 & 49.89 & 57.30 \\ ITAD & 69.44 & 84.09 & 76.07 & 82.42 & 66.89 & 73.85 & 72.80 & 64.02 & 68.13 & 86.22 & 73.71 & 79.48 \\ VAR & 74.68 & 81.42 & 77.90 & 81.38 & 53.88 & 64.83 & 90.71 & 83.82 & 87.13 & 78.35 & 70.26 & 74.08 \\ MMPCACD & 81.42 & 61.31 & 69.95 & 88.61 & 75.84 & 81.73 & 76.26 & 78.35 & 77.29 & 71.20 & 79.28 & 75.02 \\ CL-MPPCA & 73.71 & 88.54 & 80.44 & 86.13 & 63.16 & 72.88 & 56.02 & **99.93** & 71.80 & 82.36 & 76.07 & 79.09 \\ TS-CP2 & 86.45 & 68.48 & 76.42 & 87.65 & 83.18 & 85.36 & 82.67 & 78.16 & 80.35 & 87.42 & 66.25 & 75.38 \\ Deep-SVDD & 91.92 & 76.63 & 83.58 & 89.93 & 56.02 & 69.04 & 95.41 & 86.49 & 90.73 & 78.54 & 79.67 & 79.10 \\ BOCPD & 80.32 & 87.20 & 83.62 & 84.65 & 85.85 & 85.24 & 80.22 & 75.33 & 77.70 & 79.82 & 82.04 & 76.07 \\ LSTM-VAE & 85.49 & 79.94 & 82.62 & 92.20 & 67.75 & 78.10 & 73.62 & 89.92 & 80.96 & 75.76 & 90.08 & 82.30 \\ BeatGAN & 89.75 & 85.42 & 87.53 & 92.38 & 55.85 & 69.61 & 90.30 & 93.84 & 92.04 & 72.90 & 84.09 & 78.10 \\ LSTM & 85.45 & 82.50 & 83.95 & 89.41 & 78.13 & 83.39 & 76.93 & 89.64 & 82.80 & 78.55 & 85.28 & 81.78 \\ OmniAnomaly & 89.02 & 86.37 & 87.67 & 92.49 & 81.99 & 86.92 & 88.39 & 74.46 & 80.83 & 83.68 & 86.82 & 85.22 \\ InterFusion & 81.28 & 92.70 & 86.62 & 89.77 & 88.52 & 89.14 & 83.61 & 83.45 & 83.52 & 87.02 & 85.43 & 86.22 \\ THOC & 88.45 & 90.97 & 89.69 & 92.06 & 89.34 & 90.68 & 88.14 & 90.99 & 89.54 & 79.76 & 90.95 & 84.99 \\ AnomalyTrans & 91.92 & 96.03 & 93.93 & 93.59 & **99.41** & 96.41 & 96.94 & 97.81 & 97.37 & **88.47** & **92.28** & **90.33** \\ \hline DCdetector & **93.69** & **99.69** & **96.60** & **95.63** & 98.92 & **97.02** & **97.14** & 98.74 & **97.94** & 83.59 & 91.10 & 87.18 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Overall results on real-world multivariate datasets. Performance ranked from lowest to highest. The _P, R and F1_ are the precision, recall and F1-score. All results are in %, the best ones are in Bold, and the second ones are underlined.
\begin{table}
\begin{tabular}{c|c|c c c c|c c c} \hline \hline
**Dataset** & **Method** & **Acc** & **F1** & **Aff-P [31]** & **Aff-R [31]** & **R.A, R [49]** & **R.A, P [49]** & **V.ROC [49]** & **V.PR [49]** \\ \hline \multirow{2}{*}{**MSL**} & AnomalyTrans & 98.69 & 93.93 & 51.76 & 95.98 & 90.04 & 87.87 & 88.20 & 86.26 \\ & DCdeterot & **99.06** & **96.60** & **51.84** & **97.39** & **93.17** & **91.64** & **93.15** & **91.66** \\ \hline \multirow{2}{*}{**SMAP**} & AnomalyTrans & 99.05 & 96.41 & 51.39 & **98.68** & **96.32** & 94.07 & **95.52** & 93.37 \\ & DCdeterot & **99.21** & **97.02** & **51.46** & 98.64 & 96.03 & **94.18** & 95.19 & **93.46** \\ \hline \multirow{2}{*}{**PSM**} & AnomalyTrans & 98.68 & 97.37 & **55.35** & 80.28 & **91.83** & **93.03** & **88.71** & **90.71** \\ & DCdeterot & **98.95** & **97.94** & 54.71 & **82.93** & 91.55 & 92.93 & 88.41 & 90.58 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Multi-metrics results on real-world multivariate datasets. Aff-P and Aff-R are the precision and recall of affiliation metric (Wang et al., 2017), respectively. R_A, R and R_A,P are Range-AUC-ROC and Range-AUC-PR (Wang et al., 2017), which denote two scores based on label transformation under ROC curve and PR curve, respectively. V_ROC and V_RR are volumes under the surfaces created based on ROC curve and PR curve (Wang et al., 2017), respectively. All results are in %, and the best ones are in Bold.
It can be seen that DCdetector performs better than or at least comparably with the Anomaly Transformer in most metrics.
We also evaluate the performance on another two datasets NIPS-TS-SWAN and NIPS-TS-GECCO in Table 3, which are more challenging with more types of anomalies than the above four datasets. Although the two datasets have the highest (32.6% in NIPS-TS-SWAN) and lowest (1.1% in NIPS-TS-GECCO) anomaly ratio, DCdetector is still able to achieve SOTA results and completely outperform other methods. Similarly, multi-metrics comparisons between DCdetector and Anomaly Transformer are conducted and summarized in Table 4, and DCdetector still achieves better performance in most metrics.
#### 4.4.2. Univariate Anomaly Detection
In this part, we compare the performance of the DCdetector and Anomaly Transformer in univariate time series anomaly detection. We trained and tested separately for each of the sub-datasets in UCR datasets, and the average results are shown in Table 7. The count indicates how many sub-datasets have reached SOTA. The sub-datasets of the UCR all have only one segment of subsequence anomalies, and DCdetector can identify and locate them correctly and achieve optimal results.
### Model Analysis
#### 4.5.1. Ablation Studies
Table 5 shows the ablation study of stop gradient. According to the loss function definition in Section 3.3, we use two stop gradient modules in \(\mathcal{L}\{\mathcal{P},\mathcal{N};\mathcal{X}\}\), noted as stop gradient in patch-wise branch and in-patch branch, respectively. With two-stop gradient modules, we can see that DCdetector gains the best performances. If no stop gradient is contained, DCdetector still works and does not fall into a trivial solution. Moreover, in such a setting, it outperforms all the baselines except Anomaly Transformer. Besides, we also conduct an ablation study on how the two main preprocessing methods (bilateral filter for denoising and instance normalization for normalization) affect the performance of our method in Table 6. It can be seen that either of them slightly improves the performance of our model when used individually. However, if they are utilized simultaneously, the performance degrades. Therefore, our final DCdetector only contains the instance normalization module for preprocessing. More ablation studies on multi-scale patching, window size, attention head, embedding dimension, encoder layer, anomaly threshold, and metrics in loss function are left in Appendix C.
#### 4.5.2. Visual Analysis
We show how DCdetector works by visualizing different anomalies in Figure 5. We use the synthetic data generation methods reported in (Wang et al., 2019) to generate univariate time series with different types of anomalies, including the point-wise anomaly (global point and contextual point anomalies) and pattern-wise anomalies (seasonal, group, and trend anomalies) (Wang et al., 2019). It can be seen that DCdetector can robustly detect various anomalies better from normal points with relatively higher anomaly scores.
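The synthetic series behind Figure 5 follow the standard taxonomy of point-wise (global, contextual) and pattern-wise (seasonal, group, trend) anomalies. A toy generator in that spirit (illustrative only, not the generator of the cited work) could look like:

```python
import numpy as np

def toy_series_with_anomalies(length: int = 1000, seed: int = 0):
    """Noisy sine wave with a few injected anomalies in the spirit of Figure 5:
    a global point spike, a contextual point, and a seasonal-pattern segment."""
    rng = np.random.default_rng(seed)
    t = np.arange(length)
    x = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(length)
    labels = np.zeros(length, dtype=int)
    x[200] += 5.0                                       # global point anomaly
    labels[200] = 1
    x[400] += 1.0                                       # contextual point anomaly
    labels[400] = 1
    x[600:650] = np.sin(2 * np.pi * t[600:650] / 10)    # seasonal (frequency) anomaly
    labels[600:650] = 1
    return x, labels
```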
#### 4.5.3. Parameter Sensitivity
We also study the parameter sensitivity of the DCdetector. Figure 6(a) shows the performance under different window sizes. As discussed, a single point can not be taken as an instance in time series. Window segmentation is widely used in the analysis, and window size is a significant parameter. For our primary evaluation, the window size is usually set as 60 or 100. Nevertheless, results in Figure 6(a) demonstrate that DCdetector is robust with a wide range of window sizes (from 30 to 210). Actually, in the window size range (Wang et al., 2019; Wang et al., 2019), the performances fluctuate less than 2.3%. Figure 6(b) shows the performance under different
\begin{table}
\begin{tabular}{c|c c c|c c|c c} \hline \hline \multicolumn{2}{c|}{**Dataset**} & \multicolumn{3}{c|}{**UCR**} \\ \hline
**Metric** & **Acc** & **P** & **R** & **F1** & **Count** \\ \hline AnomalyTrans & 99.49 & 60.41 & **100** & 73.08 & 42 \\ DCdetector & **99.51** & **61.62** & **100** & **74.05** & **46** \\ \hline \hline \end{tabular}
\end{table}
Table 7. Overall results on univariate dataset. Results are in %, and the best ones are in Bold.
\begin{table}
\begin{tabular}{c|c c c|c c|c c|c c c} \hline \hline \multicolumn{2}{c|}{**Dataset**} & \multicolumn{3}{c|}{**Acc**} & \multicolumn{1}{c}{**P**} & \multicolumn{1}{c}{**R**} & \multicolumn{1}{c}{**F1**} & \multicolumn{1}{c}{**Aff-P**} & \multicolumn{1}{c}{**Aff-R**} & \multicolumn{1}{c}{**R**} & \multicolumn{1}{c}{**A**} & \multicolumn{1}{c}{**R**} & \multicolumn{1}{c}{**A**} & \multicolumn{1}{c}{**P**} & \multicolumn{1}{c}{**V**} & \multicolumn{1}{c}{**P**} \\ \hline \multirow{2}{*}{**NIPS-TS-SWAN**} & AnomalyTrans & 84.57 & 90.71 & 47.43 & 62.29 & **58.45** & **9.49** & 86.42 & 93.26 & 84.81 & 92.00 \\ & DCdetector & **85.94** & **95.48** & **59.55** & **73.35** & 50.48 & 5.63 & **88.06** & **94.71** & **86.25** & **93.50** \\ \hline \multirow{2}{*}{**NIPS-TS-GECCO**} & AnomalyTrans & 98.03 & 25.65 & 28.48 & 26.69 & 49.23 & 81.20 & 56.35 & 22.53 & 55.45 & 21.71 \\ & DCdetector & **98.56** & **38.25** & **59.73** & **46.63** & **50.05** & **88.55** & **62.95** & **34.17** & **62.41** & **33.67** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Multi-metrics results on NIPS-TS datasets. All results are in %, and the best ones are in Bold.
\begin{table}
\begin{tabular}{c c|c c c|c c|c c c} \hline \hline \multicolumn{2}{c|}{**Stop Gradient**} & \multicolumn{3}{c|}{**MSL**} & \multicolumn{3}{c|}{**SMAP**} & \multicolumn{3}{c}{**PSM**} \\ \hline
**Patch-wise Branch** & **In-patch Branch** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline ✗ & ✗ & 91.99 & 89.98 & 90.97 & 94.49 & 96.56 & 95.51 & 96.86 & 97.51 & 97.18 \\ ✗ & ✗ & 91.27 & 72.61 & 80.88 & 94.46 & 93.17 & 93.81 & 97.15 & 98.51 & 97.83 \\ ✗ & ✗ & 92.18 & 96.27 & 94.18 & 94.37 & 98.19 & 96.24 & 96.98 & 98.04 & 97.51 \\ ✗ & ✗ & 93.69 & 99.69 & **96.60** & 95.63 & 98.92 & **97.02** & 97.14 & 98.74 & **97.94** \\ \hline \hline \end{tabular}
\end{table}
Table 5. Ablation studies on Stop Gradient in DCdetector. All results are in %, and the best ones are in Bold.
multi-scale sizes. Horizontal coordinate is the patch-size combination used in multi-scale attention which means we combine several dual-attention modules with a given patch-size combination. Unlike window size, the multi-scale design contributes to the final performance of the DCdetector, and different patch-size combinations lead to different performances. Note that when studying the parameter sensitivity of window size, the scale size is fixed as [3,5]. When studying the parameter sensitivity of scale size, the window size is fixed at 60. Figure 6(c) shows the performance under different numbers of encoder layers, since many deep neural networks' performances are affected by the layer number. Figure 6(d) and Figure 6(e) show model performances with different head numbers or \(d_{model}\) sizes in attention. It can be seen that DCdetector achieves the best performance with a small attention head number and \(d_{model}\) size. The memory and time usages with different \(d_{model}\) sizes are shown in Figure 7. Based on Figure 7 and Figure 6(e), we set the dimension of the hidden state \(d_{model}=256\) for the performance-complexity trade-off, and it can be seen that DCdetector can work quite well under \(d_{model}=256\) with efficient running time and small memory consumption.
## 5. Conclusion
This paper proposes a novel algorithm named DCdetector for time-series anomaly detection. We design a contrastive learning-based dual-branch attention structure in DCdetector to learn a permutation invariant representation. Such representation enlarges the differences between normal points and anomalies, improving detection accuracy. Besides, two additional designs: multiscale and channel independence patching, are implemented to enhance the performance. Moreover, we propose a pure contrastive loss function without reconstruction error, which empirically proves the effectiveness of contrastive representation compared to the widely used reconstructive one. Lastly, extensive experiments show that DCdetector achieves the best or comparable performance on seven benchmark datasets compared to various state-of-the-art algorithms.
## 6. Acknowledgements
This work was supported by Alibaba Group through Alibaba Research Intern Program.
Figure 5. Visualization comparisons of ground-truth anomalies and anomaly scores between DCdetector and Anomaly Transformer for different types of anomalies.
Figure 6. Parameter sensitivity studies of main hyper-parameters in DCdetector.
Figure 7. The averaged GPU memory cost and the averaged running time of 100 iterations during training with different \(d_{model}\) sizes. |
2305.09346 | Decays $τ\to 3K ν_τ$ in $U(3)\times U(3)$ quark NJL model | The widths of the decays $\tau \to K^- K^+ K^- \nu_\tau$ and $\tau \to K^-
K^0 \bar{K}^0 \nu_\tau$ are calculated in the $U(3)\times U(3)$ chiral quark
NJL model. Four channels are considered: contact, axial vector, vector and
pseudoscalar channels. It is shown that the dominant contribution is given by
the axial vector channel with an intermediate $\phi$ meson. The obtained
results are in satisfactory agreement with the experimental data. | M. K. Volkov, A. A. Pivovarov, K. Nurlan | 2023-05-16T10:58:14Z | http://arxiv.org/abs/2305.09346v1 | # Decays \(\tau\to 3K\nu_{\tau}\) in \(U(3)\times U(3)\) quark NJL model
###### Abstract
The widths of the decays \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) and \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\) are calculated in the \(U(3)\times U(3)\) chiral quark NJL model. Four channels are considered: contact, axial vector, vector and pseudoscalar channels. It is shown that the dominant contribution is given by the axial vector channel with an intermediate \(\phi\) meson. The obtained results are in satisfactory agreement with the experimental data.
## I Introduction
The widths of the decay \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) measured by various collaborations differ significantly from each other. The BaBar collaboration obtained the value \((1.58\pm 0.25)\times 10^{-5}\) for the branching fraction [1]. At the same time, the Belle collaboration obtained twice that value, \((3.29\pm 0.37)\times 10^{-5}\)[2]. Such a discrepancy in the experimental data lends special interest to the theoretical study of this decay. In particular, the main attention should be paid to the internal structure of this process. Such a study requires the application of a phenomenological model suitable for the description of hadron interactions at low energies.
One of the most effective models of this type, allowing one to describe the hadronic decays of the \(\tau\) lepton, is the Nambu-Jona-Lasinio (NJL) model [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. According to a series of our previous studies, this model yields quite satisfactory results in the description of such processes [15; 16].
In the present work, the processes \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) and \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\) are described theoretically in the framework of the NJL model. In the calculation of the branching fractions of these decays, no additional arbitrary parameters are used. This research provides a deeper understanding of the internal structure and mechanisms of such processes.
## II The Lagrangian of the NJL model
For the calculation of the widths of the considered processes, we need the following vertices of the quark-meson Lagrangian of the NJL model [16; 14]:
\[\Delta L_{int} = \bar{q}\left\{\sum_{i=0,\pm}\left[ig_{K}\gamma^{5}\lambda^{K}K^{i} +\frac{g_{\rho}}{2}\gamma^{\mu}\lambda^{P}_{i}\rho^{i}_{\mu}+\frac{g_{K^{*}}}{ 2}\gamma^{\mu}\lambda^{K}K^{*i}_{\mu}+\frac{g_{K_{1}}}{2}\gamma^{\mu}\gamma^{5 }\lambda^{K}_{i}K^{i}_{1A\mu}\right]\right. \tag{1}\] \[\left.+ig_{K}\gamma^{5}\lambda^{C}_{0}\bar{K}^{0}+\frac{g_{K^{*}} }{2}\gamma^{\mu}\lambda^{C}_{0}\bar{K}^{*0}_{\mu}+\frac{g_{\omega}}{2}\gamma^{ \mu}\lambda^{\omega}\omega_{\mu}+\frac{g_{\phi}}{2}\gamma^{\mu}\lambda^{\phi} \phi_{\mu}\right\}q,\]
where \(q\) and \(\bar{q}\) are the u-, d- and s-quark fields with the constituent masses \(m_{u}=m_{d}=270\) MeV, \(m_{\rm s}=420\) MeV and \(\lambda\) are the linear combinations of the Gell-Mann matrices.
The strange axial vector meson \(K_{1A}\) is presented as a sum of two physical states [17; 18; 19]:
\[K_{1A}=K_{1}(1270)\sin\alpha+K_{1}(1400)\cos\alpha, \tag{2}\]
where \(\alpha=57^{\circ}\).
The coupling constants:
\[g_{\rho}=g_{\omega}=\sqrt{\frac{3}{2I_{20}}},\quad g_{\phi}=\sqrt{\frac{3}{2I_{02}}},\quad g_{K}=\sqrt{\frac{Z_{K}}{4I_{11}}},\quad g_{K^{*}}=g_{K_{1}}=\sqrt{\frac{3}{2I_{11}}},\]
where
\[Z_{K}=\left(1-\frac{3}{2}\frac{(m_{u}+m_{s})^{2}}{M_{K_{1A}}^{2}}\right)^{-1},\] \[M_{K_{1A}}^{2}=\left(\frac{\sin^{2}\alpha}{M_{K_{1}(1270)}^{2}}+\frac{\cos^{2}\alpha}{M_{K_{1}(1400)}^{2}}\right)^{-1}, \tag{3}\]
\(Z_{K}\) is the factor appearing when the \(K-K_{1}\) transition is taken into account, \(M_{a_{1}}=1230\) MeV, \(M_{K_{1}(1270)}=1253\) MeV, \(M_{K_{1}(1400)}=1403\) MeV [20] are the masses of the axial vector mesons \(a_{1}\) and \(K_{1}\).
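As a quick numerical cross-check (ours, not part of the original text), Eqs. (2)-(3) can be evaluated directly with the masses and mixing angle quoted above; the variable names below are hypothetical.

```python
import numpy as np

# Constituent quark masses and K_1 parameters quoted in the text (GeV).
m_u, m_s = 0.270, 0.420
M_K1_1270, M_K1_1400 = 1.253, 1.403
alpha = np.radians(57.0)

# Effective K_1A mass and the K - K_1 transition factor Z_K, Eq. (3).
M_K1A_sq = 1.0 / (np.sin(alpha) ** 2 / M_K1_1270 ** 2
                  + np.cos(alpha) ** 2 / M_K1_1400 ** 2)
Z_K = 1.0 / (1.0 - 1.5 * (m_u + m_s) ** 2 / M_K1A_sq)

print(np.sqrt(M_K1A_sq), Z_K)   # roughly 1.29 GeV and 1.75
```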
The integrals used in the definitions of the coupling constants and appearing in the quark loops in the renormalization of the Lagrangian take the form
\[I_{mn}=-i\frac{N_{c}}{(2\pi)^{4}}\int\frac{\theta(\Lambda^{2}+k^{2})}{(m_{u}^{ 2}-k^{2})^{n}(m_{s}^{2}-k^{2})^{m}}\mathrm{d}^{4}k, \tag{4}\]
where \(\Lambda=1265\) MeV is the cut-off parameter [16].
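For orientation (this is our own sketch, not part of the paper), Eq. (4) can be evaluated numerically after a Wick rotation \(k^{2}\to-k_{E}^{2}\), under which \(\theta(\Lambda^{2}+k^{2})\) becomes a four-dimensional Euclidean cutoff; the snippet below computes \(I_{11}\) in this way and the corresponding coupling \(g_{K^{*}}=g_{K_{1}}\) from the expressions above. All names are ours.

```python
import numpy as np
from scipy.integrate import quad

N_c = 3
m_u, m_s = 0.270, 0.420   # constituent quark masses (GeV)
Lam = 1.265               # cut-off parameter Lambda (GeV)

def I(m, n):
    """I_mn of Eq. (4) after Wick rotation: the O(4) angular integration
    leaves a one-dimensional integral over t = k_E^2, cut off at Lambda^2."""
    integrand = lambda t: t / ((m_u ** 2 + t) ** n * (m_s ** 2 + t) ** m)
    value, _ = quad(integrand, 0.0, Lam ** 2)
    return N_c / (16 * np.pi ** 2) * value

I_11 = I(1, 1)
g_Kstar = np.sqrt(3.0 / (2.0 * I_11))  # g_K additionally needs Z_K from Eq. (3)
print(I_11, g_Kstar)
```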
## III Decay \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\)
The diagrams describing the decay \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) are presented in Figs. 1 and 2.
The amplitude of this process takes the form
\[\mathcal{M}=G_{F}V_{us}L_{\mu}\left\{\mathcal{M}_{A}+\mathcal{M}_{V}+\mathcal{ M}_{P}\right\}^{\mu}, \tag{5}\]
Figure 1: The contact diagram of the decay \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\).
where \(L_{\mu}\) is the lepton current, \(\mathcal{M}_{A}\), \(\mathcal{M}_{V}\) and \(\mathcal{M}_{P}\) are the axial vector, vector and pseudoscalar channels:
\[\mathcal{M}_{A}^{\mu} =\frac{i}{8}(m_{s}+m_{u})\frac{Z_{K}}{g_{K}}\left[BW_{K_{1}(1270)}\sin\alpha+BW_{K_{1}(1400)}\cos\alpha\right]\left[g^{\mu\nu}h_{A}-q^{\mu}q^{\nu}\right]\] \[\quad\times\left\{g_{\rho}^{2}BW_{\rho^{0}}+g_{\omega}^{2}BW_{\omega}+2g_{\phi}^{2}BW_{\phi}\right\}\left(p_{K^{+}}-p_{K^{-}}^{(1)}\right)_{\nu}+\left(p_{K^{-}}^{(1)}\leftrightarrow p_{K^{-}}^{(2)}\right),\] \[\mathcal{M}_{V}^{\mu} =2m_{u}g_{K}I_{c}h_{V}BW_{K^{-}}\left\{g_{\rho}^{2}BW_{\rho^{0}}+g_{\omega}^{2}BW_{\omega}-2g_{\phi}^{2}BW_{\phi}\right\}\] \[\quad\times e^{\mu\nu\lambda\delta}p_{K^{+}\nu}p_{K^{-}\lambda}^{(1)}p_{K^{-}\delta}^{(2)}+\left(p_{K^{-}}^{(1)}\leftrightarrow p_{K^{-}}^{(2)}\right),\] \[\mathcal{M}_{P}^{\mu} =\frac{i}{4}(m_{u}+m_{s})\frac{Z_{K}}{g_{K}}BW_{K^{-}}q^{\mu}\left\{g_{\rho}^{2}BW_{\rho^{0}}+g_{\omega}^{2}BW_{\omega}+2g_{\phi}^{2}BW_{\phi}\right\}p_{K^{-}}^{(2)\nu}\left(p_{K^{+}}-p_{K^{-}}^{(1)}\right)_{\nu}\] \[\quad+\left(p_{K^{-}}^{(1)}\leftrightarrow p_{K^{-}}^{(2)}\right), \tag{6}\]
where \(p_{K^{-}}^{(1)}\), \(p_{K^{-}}^{(2)}\) and \(p_{K^{+}}\) are the momenta of the final mesons.
The box diagrams have not been taken into account explicitly, because according to our calculations they give a negligible contribution.
The Breit-Wigner propagators describe the intermediate mesons:
\[BW_{H}=\frac{1}{M_{H}^{2}-p^{2}-i\sqrt{p^{2}}\Gamma_{H}}, \tag{7}\]
where \(H\) designates a meson; \(M_{H}\), \(\Gamma_{H}\) and \(p\) are its mass, width and momentum, respectively.
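As a small numerical illustration of Eq. (7) (ours, not from the paper), the propagator can be coded directly; the \(\phi(1020)\) mass and width in the example call are standard PDG values used only as sample inputs.

```python
import numpy as np

def breit_wigner(p2, mass, width):
    """Breit-Wigner propagator BW_H = 1 / (M_H^2 - p^2 - i sqrt(p^2) Gamma_H).
    All quantities are in GeV (p2 in GeV^2); a complex number is returned."""
    return 1.0 / (mass ** 2 - p2 - 1j * np.sqrt(p2) * width)

# Example: the phi(1020) propagator slightly above the K+ K- threshold.
bw_phi = breit_wigner(p2=1.04 ** 2, mass=1.019461, width=0.004249)
print(abs(bw_phi) ** 2)   # |BW_phi|^2 enters the squared amplitude
```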
The variables \(h_{A}\) and \(h_{V}\) appear in the axial vector and vector channels:
\[h_{A} =M_{K_{1}}^{2}-i\sqrt{q^{2}}\Gamma_{K_{1}}-\frac{3}{2}(m_{s}+m_{u })^{2},\] \[h_{V} =M_{K^{*}}^{2}-i\sqrt{q^{2}}\Gamma_{K^{*}}-\frac{3}{2}(m_{s}-m_{u })^{2}. \tag{8}\]
Due to the presence of an anomalous type vertex, the following combination of convergent integrals appears in the vector channel:
\[I_{c}=I_{21}+m_{u}(m_{s}-m_{u})I_{31}, \tag{9}\]
where \(I_{21}\) and \(I_{31}\) take the form (4).
The contributions of the contact diagrams are taken into account in \(\mathcal{M}_{A}^{\mu}\) and \(\mathcal{M}_{V}^{\mu}\).
The partial width of this decay calculated in the framework of the NJL model takes the value
\[Br(\tau\to K^{-}K^{+}K^{-}\nu_{\tau})=(2.0\pm 0.3)\times 10^{-5}. \tag{10}\]
The theoretical uncertainty of the used version of the NJL model can be estimated at the level of 15%.
The experimental value specified in the PDG is
\[Br(\tau\to K^{-}K^{+}K^{-}\nu_{\tau})_{exp}=(2.2\pm 0.8)\times 10^{-5}. \tag{11}\]
## IV Decay \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\)
Due to the isospin symmetry, the width of the process \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\) is expected to be approximately equal to the width of the process \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\). This is confirmed by our calculations. The structure of the amplitude of the process \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\) in the NJL model is similar to the structure of the amplitude of the process \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) (5). The only difference is in the following points: the process \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\) does not contain identical particles, has an additional channel with the charged \(\rho\) meson as an intermediate resonance and also contains other relative signs between different
contributions. As a result, the axial vector, vector and pseudoscalar channels for this process take the form
\[\mathcal{M}_{A}^{\mu} = \frac{i}{8}(m_{s}+m_{u})\frac{Z_{K}}{g_{K}}\left[BW_{K_{1}(1270)}\sin\alpha+BW_{K_{1}(1400)}\cos\alpha\right]\left[g^{\mu\nu}h_{A}-q^{\mu}q^{\nu}\right]\] \[\quad\times\left\{\left[g_{\rho}^{2}BW_{\rho^{0}}-g_{\omega}^{2}BW_{\omega}-2g_{\phi}^{2}BW_{\phi}\right]\left(p_{\bar{K}^{0}}-p_{K^{0}}\right)_{\nu}-2g_{\rho}^{2}BW_{\rho^{-}}\left(p_{K^{-}}-p_{K^{0}}\right)_{\nu}\right\},\] \[\mathcal{M}_{V}^{\mu} = 2m_{u}g_{K}I_{c}h_{V}BW_{K^{-}}\left\{g_{\rho}^{2}BW_{\rho^{0}}-g_{\omega}^{2}BW_{\omega}+2g_{\phi}^{2}BW_{\phi}-2g_{\rho}^{2}BW_{\rho^{-}}\right\}\] \[\quad\times e^{\mu\nu\lambda\delta}p_{K^{-}\nu}p_{\bar{K}^{0}\lambda}p_{K^{0}\delta},\] \[\mathcal{M}_{P}^{\mu} = \frac{i}{4}(m_{u}+m_{s})\frac{Z_{K}}{g_{K}}BW_{K^{-}}q^{\mu}\left\{\left[g_{\rho}^{2}BW_{\rho^{0}}-g_{\omega}^{2}BW_{\omega}-2g_{\phi}^{2}BW_{\phi}\right]p_{K^{-}}^{\nu}\left(p_{\bar{K}^{0}}-p_{K^{0}}\right)_{\nu}\right. \tag{12}\] \[\left.\quad-2g_{\rho}^{2}BW_{\rho^{-}}p_{K^{0}}^{\nu}\left(p_{K^{-}}-p_{K^{0}}\right)_{\nu}\right\},\]
Its partial width calculated by using the above amplitude takes the following value:
\[Br(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau})=(1.9\pm 0.3)\times 10^{-5}. \tag{13}\]
This result coincides with the result for the process \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) in the framework of the model uncertainty.
## V Conclusion
In the present work, the processes \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) and \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\) have been considered. Our calculations, performed in the framework of the NJL model, demonstrate that the main contribution comes from the axial vector channel with the intermediate mesons \(K_{1}(1270)\) and \(K_{1}(1400)\). For example, in the process \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\), the separate contribution of the axial vector channel together with the corresponding contact channel is \(Br(\tau\to K^{-}K^{+}K^{-}\nu_{\tau})_{K_{1}}=1.9\times 10^{-5}\). The pseudoscalar and vector channels give contributions an order of magnitude smaller than the axial vector one. The amplitude of the vector channel is orthogonal to the others and does not interfere with them. Besides, it is found that the interference between the pseudoscalar and axial vector channels is negative.
For the decay into charged kaons, a value closer to the result of the BaBar experiment [1] has been obtained. It is also in good agreement with the central value specified in the PDG [20].

For the decay \(\tau\to K^{-}K^{0}\bar{K}^{0}\nu_{\tau}\), a value close to the result for the process \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) has been obtained. This confirms the expected approximate isospin symmetry between these processes. Due to the absence of experimental data for this process in the PDG, the obtained result can be considered a prediction.
The theoretical description of the decay \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) has also been carried out in the work [21], where the vector dominance model with intermediate resonances is applied. As a result, an estimate of the partial decay width of \(5.4\times 10^{-7}\) is obtained, which is significantly lower than the present experimental data. This can be explained by the fact that important intermediate resonances have not been taken into account in that work. In particular, the channel containing the \(\phi\) meson as an intermediate state has not been considered, although, according to our calculations, it gives the main contribution. Indeed, the channels containing only the \(\phi\) meson as a second intermediate state, calculated in the framework of the version of the NJL model used in the present work, give the value \(1.96\times 10^{-5}\). This channel defines the partial width of the decay \(\tau\to K^{-}K^{+}K^{-}\nu_{\tau}\) almost completely.
## Acknowledgements
The authors thank prof. A. B. Arbuzov for useful discussions.
|
2305.06075 | String Diagrams for Premonoidal Categories | Premonoidal categories are monoidal categories without the interchange law;
effectful categories are premonoidal categories with a chosen monoidal
subcategory of interchanging morphisms. In the same sense that string diagrams,
pioneered by Joyal and Street, are an internal language for monoidal
categories, we show that string diagrams with an added 'runtime wire',
pioneered by Alan Jeffrey, are an internal language for effectful categories
and can be used as string diagrams for effectful, premonoidal and Freyd
categories. | Mario Román, Paweł Sobociński | 2023-05-10T11:55:41Z | http://arxiv.org/abs/2305.06075v2 | # String diagrams for premonoidal categories
###### Abstract.
Premonoidal and Freyd categories are both generalized by non-cartesian Freyd categories: effectful categories. We construct string diagrams for effectful categories in terms of the string diagrams for a monoidal category with a freely added object.
(c) Mario Roman, 2023. Creative Commons 3.0 BY-SA
## 1. Introduction
Category theory has two successful applications that are rarely combined: monoidal string diagrams [11] and functional programming semantics [12]. We use string diagrams to talk about quantum transformations [1], relational queries [10], and even computability [23]; at the same time, proof nets and the geometry of interaction [15, 16] have been widely applied in computer science [1, 17]. On the other hand, we traditionally use monads and comonads, Kleisli categories and premonoidal categories to explain effectful functional programming [12, 13, 14, 15]. Even if we traditionally employ Freyd categories with a cartesian base [24], we can also consider non-cartesian Freyd categories [10], which we call _effectful categories_.
**Contributions.** These applications are well-known. However, some foundational results in the intersection between string diagrams, premonoidal categories and effectful categories are missing in the literature. This manuscript contributes the first of such results: _we introduce string diagrams for effectful categories_. Jeffrey [16] was the first to preformally employ string diagrams of premonoidal categories, and we are indebted to his exposition. His technique consisted in introducing an extra wire - which we call the _runtime_ - that prevents some morphisms from interchanging. We promote this technique into a result about the construction of free premonoidal, Freyd and effectful categories: the free premonoidal category can be constructed in terms of the free monoidal category with an extra wire.
Our slogan, which constitutes the statement of Theorem 3.9, is
_"Premonoidal categories are Monoidal categories with a Runtime."_
### Synopsis
Sections 2.1 and 2.5 contain mostly preliminary material on premonoidal, Freyd and effectful categories. Our first original contribution is in Section 3; we prove that premonoidal categories are monoidal categories with runtime (Section 3.9).
## 2. Premonoidal and Effectful Categories
### Premonoidal categories
Premonoidal categories are monoidal categories without the _interchange law_, \((f\otimes\operatorname{id})\,\sharp\,(\operatorname{id}\otimes g)\neq( \operatorname{id}\otimes g)\,\sharp\,(f\otimes\operatorname{id})\). This means that we cannot tensor any two arbitrary morphisms, \((f\otimes g)\), without explicitly stating which one is to be composed first, \((f\otimes\operatorname{id})\,\sharp\,(\operatorname{id}\otimes g)\) or \((\operatorname{id}\otimes g)\,\sharp\,(f\otimes\operatorname{id})\), and the two compositions are not equivalent (Figure 1).
In technical terms, the tensor of a premonoidal category \((\otimes)\colon\mathbb{C}\times\mathbb{C}\to\mathbb{C}\) is not a functor, but only what is called a _sesquifunctor_: independently functorial on each variable. Tensoring with any identity is itself a functor \((\bullet\otimes\operatorname{id})\colon\mathbb{C}\to\mathbb{C}\), but there is no functor \((\bullet\otimes\bullet)\colon\mathbb{C}\times\mathbb{C}\to\mathbb{C}\).
A good motivation for dropping the interchange law can be found when describing transformations that affect some global state. These effectful processes should not interchange in general, because the order in which we modify the global state is meaningful. For instance, in the Kleisli category of the _writer monad_, \((\Sigma^{*}\times\bullet)\colon\mathsf{Set}\to\mathsf{Set}\) for some alphabet \(\Sigma\in\mathsf{Set}\), we can consider the function \(\mathsf{print}\colon\Sigma^{*}\to\Sigma^{*}\times 1\), which is an effectful map from \(\Sigma^{*}\) to \(1\). The order in which we "print" does matter (Figure 2).
Not surprisingly, the paradigmatic examples of premonoidal categories are the Kleisli categories of Set-based monads \(T\colon\mathsf{Set}\to\mathsf{Set}\) (more generally, of strong monads), which fail to be monoidal unless the monad itself is commutative [11, 12, 13, 14]. Intuitively, the morphisms are "effectful", and these effects do not always commute.
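As a concrete, purely illustrative rendering of this example (the encoding and names are ours), writer-monad Kleisli maps can be modelled as Python functions returning a result together with the text they print; composing the two prints in the two possible orders yields genuinely different effectful maps.

```python
from typing import Any, Callable, Tuple

# A Kleisli map of the writer monad (Sigma* x -): a function A -> (B, output).
Eff = Callable[[Any], Tuple[Any, str]]

def compose(f: Eff, g: Eff) -> Eff:
    """Kleisli composition: run f, then g, concatenating their output in order."""
    def h(x):
        y, out_f = f(x)
        z, out_g = g(y)
        return z, out_f + out_g
    return h

def printer(msg: str) -> Eff:
    """The effectful map 'print': it returns no interesting value, only output."""
    return lambda x: (x, msg)

hello_world = compose(printer("hello "), printer("world"))
world_hello = compose(printer("world"), printer("hello "))

print(hello_world(())[1])   # 'hello world'
print(world_hello(())[1])   # 'worldhello ' -- the order of the effects matters
```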
However, we may still want to allow some morphisms to interchange. For instance, apart from asking the same associators and unitors of monoidal categories to exist, we ask them to be _central_: that means that they interchange with any other morphism. This notion of centrality forces us to write the definition of premonoidal category in two different steps: first, we introduce the minimal setting in which centrality can be considered (_binoidal_ categories [15]) and then we use that setting to bootstrap the full definition of premonoidal category with central coherence morphisms.
Figure 1. The interchange law does not hold in a premonoidal category.
Figure 2. Writing does not interchange.
### Definition
[Binoidal category] _A binoidal category is a category \(\mathbb{C}\) endowed with an object \(I\in\mathbb{C}\) and an object \(A\otimes B\) for each \(A\in\mathbb{C}\) and \(B\in\mathbb{C}\). There are functors \((A\otimes\bullet)\colon\mathbb{C}\to\mathbb{C}\) and \((\bullet\otimes B)\colon\mathbb{C}\to\mathbb{C}\) that coincide on \((A\otimes B)\), even if \((\bullet\otimes\bullet)\) is not necessarily itself a functor._
Again, this means that we can tensor with identities (whiskering), functorially; but we cannot tensor two arbitrary morphisms: the interchange law stops being true in general. The _centre_, \(\mathcal{Z}(\mathbb{C})\), is the wide subcategory of morphisms that do satisfy the interchange law with any other morphism. That is, \(f\colon A\to B\) is _central_ if, for each \(g\colon A^{\prime}\to B^{\prime}\),
\[(f\otimes\operatorname{id}_{A^{\prime}})\,\sharp\,(\operatorname{id}_{B}\otimes g)=(\operatorname{id}_{A}\otimes g)\,\sharp\,(f\otimes\operatorname{id}_{B^{\prime}}).\]
### Definition

[Effectful category] _An effectful category is an identity-on-objects functor from a monoidal category \(\mathbb{V}\), whose morphisms we call pure, into a premonoidal category \(\mathbb{C}\), that strictly preserves the premonoidal structure and whose image is central. It is strict when both are. A Freyd category [15] is an effectful category where the pure morphisms form a cartesian monoidal category._
### Remark
[Terminology] The name "Freyd category" sometimes assumes cartesianity of the pure morphisms, but it is sometimes also used for the general case. Calling the general case "effectful categories" and reserving the name "Freyd categories" for the cartesian ones avoids this clash of nomenclature. There also exists the more fine-grained notion of "Cartesian effect category" [11], which generalizes Freyd categories and may further justify the terminology of this text: we use "effectful category" for the more general case.
Effectful categories solve the problem of defining premonoidal functors: a functor between effectful categories needs to preserve only the pure morphisms. We are not losing expressivity: premonoidal categories are effectful with their centre, \(\mathscr{Z}(\mathbb{C})\to\mathbb{C}\). From now on, we study effectful categories.
**2.8 Definition** (Effectful functor): _Let \(\mathbb{V}\to\mathbb{C}\) and \(\mathbb{V}\to\mathbb{D}\) be effectful categories. An effectful functor is a quadruple \((F,F_{0},\varepsilon,\mu)\) consisting of a functor \(F\colon\mathbb{C}\to\mathbb{D}\) and a functor \(F_{0}\colon\mathbb{V}\to\mathbb{V}\) making the square commute, and two natural and pure isomorphisms \(\varepsilon\colon J\cong F(I)\) and \(\mu\colon F(A\otimes B)\cong F(A)\otimes F(B)\) such that they make \(F_{0}\) a monoidal functor. It is strict if these are identities._
When drawing string diagrams in an effectful category, we shall use two different colours to declare if we are depicting either a value or a computation (Figure 3).
Here, the values "hello" and "world" satisfy the interchange law as in an ordinary monoidal category. However, the effectful computation "print" does not need to satisfy the interchange law. String diagrams like these can be found in the work of Alan Jeffrey [16]. Jeffrey presents a clever mechanism to graphically depict the failure of interchange: all effectful morphisms need to have a control wire as an input and output. This control wire needs to be passed around to all the computations in order, and it prevents them from interchanging.
Figure 4: An extra wire prevents interchange.
Figure 3: “Hello world” is not “world hello”.
A common interpretation of monoidal categories is as theories of resources. We can interpret premonoidal categories as monoidal categories with an extra resource - the "runtime" - that needs to be passed to all computations. The next section promotes Jeffrey's observation into a theorem.
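The following sketch (ours; all names are hypothetical) spells out this reading in code: a computation from \(A\) to \(B\) is a function that additionally threads a runtime value, i.e. a map \((r,a)\mapsto(r,b)\), and the two ways of tensoring two computations, \((f\otimes\operatorname{id})\,\sharp\,(\operatorname{id}\otimes g)\) versus \((\operatorname{id}\otimes g)\,\sharp\,(f\otimes\operatorname{id})\), thread the runtime in different orders and so need not agree, whereas lifted pure functions always interchange.

```python
# A computation A -> B is modelled as a function (runtime, a) -> (runtime, b);
# here the runtime is a log string that every computation may extend.

def pure(f):
    """Lift a pure function to a computation that leaves the runtime untouched."""
    return lambda r, a: (r, f(a))

def printer(msg):
    """An effectful generator: it only touches the runtime."""
    return lambda r, a: (r + msg, a)

def left_then_right(f, g):
    """(f ⊗ id) ; (id ⊗ g): f consumes the runtime wire before g does."""
    def h(r, pair):
        a, c = pair
        r1, b = f(r, a)
        r2, d = g(r1, c)
        return r2, (b, d)
    return h

def right_then_left(f, g):
    """(id ⊗ g) ; (f ⊗ id): the whiskerings composed in the other order."""
    def h(r, pair):
        a, c = pair
        r1, d = g(r, c)
        r2, b = f(r1, a)
        return r2, (b, d)
    return h

f, g = printer("A"), printer("B")
print(left_then_right(f, g)("", ((), ()))[0])   # 'AB'
print(right_then_left(f, g)("", ((), ()))[0])   # 'BA' -- no interchange

u, v = pure(str.upper), pure(str.lower)
print(left_then_right(u, v)("", ("Ab", "Cd")))  # pure morphisms interchange:
print(right_then_left(u, v)("", ("Ab", "Cd")))  # both give ('', ('AB', 'cd'))
```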
## 3. String Diagrams for Effectful Categories
String diagrams rely on the fact that the morphisms of the monoidal category freely generated over a polygraph of generators are string diagrams on these generators, quotiented by topological deformations [11]. We justify string diagrams for premonoidal categories by proving that the freely generated effectful category over a pair of polygraphs (for pure and effectful generators, respectively) can be constructed as the freely generated monoidal category over a particular polygraph that includes an extra wire.
**3.1 Definition**.: _A polygraph \(\mathcal{G}\) (analogue of a multigraph [10]) is given by a set of objects, \(\mathcal{G}_{\mathrm{obj}}\), and a set of arrows \(\mathcal{G}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{m})\) for any two sequences of objects \(A_{0},\ldots,A_{n}\) and \(B_{0},\ldots,B_{m}\). A morphism of polygraphs \(f\colon\mathcal{G}\to\mathcal{H}\) is a function between their object sets, \(f_{\mathrm{obj}}\colon\mathcal{G}_{\mathrm{obj}}\to\mathcal{H}_{\mathrm{obj}}\), and a function between their corresponding morphism sets,_
\[f_{A_{0},\ldots,A_{n};B_{0},\ldots,B_{n}} \colon\mathcal{G}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{m})\] \[\to\mathcal{H}(f_{\mathrm{obj}}(A_{0}),\ldots,f_{\mathrm{obj}} (A_{n});f_{\mathrm{obj}}(B_{0}),\ldots,f_{\mathrm{obj}}(B_{m})).\]
_A polygraph couple is a pair of polygraphs \((\mathcal{V},\mathcal{G})\) sharing the same objects, \(\mathcal{V}_{\mathrm{obj}}=\mathcal{G}_{\mathrm{obj}}\). A morphism of polygraph couples \((u,f)\colon(\mathcal{V},\mathcal{G})\to(\mathcal{W},\mathcal{H})\) is a pair of morphisms of polygraphs, \(u\colon\mathcal{V}\to\mathcal{W}\) and \(f\colon\mathcal{G}\to\mathcal{H}\), such that they coincide on objects, \(f_{\mathrm{obj}}=u_{\mathrm{obj}}\)._
**3.2 Remark**.: _There exists an adjunction between polygraphs and strict monoidal categories. Any monoidal category \(\mathbb{C}\) can be seen as a polygraph \(\mathcal{U}_{\mathbb{C}}\) where the edges \(\mathcal{U}_{\mathbb{C}}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{m})\) are the morphisms \(\mathbb{C}(A_{0}\otimes\ldots\otimes A_{n},B_{0}\otimes\ldots\otimes B_{m})\), and we forget about composition and tensoring. Given a polygraph \(\mathcal{G}\), the free strict monoidal category \(\operatorname{Mon}(\mathcal{G})\) is the strict monoidal category that has as morphisms the string diagrams over the generators of the polygraph._
We will construct a similar adjunction between polygraph couples and effectful categories. Let us start by formally adding the runtime to a free monoidal category.
**3.3 Definition**.: _[Runtime monoidal category] Let \((\mathcal{V},\mathcal{G})\) be a polygraph couple. Its runtime monoidal category, \(\operatorname{Mon}_{\operatorname{Run}}(\mathcal{V},\mathcal{G})\), is the monoidal category freely generated from adding an extra object - the runtime, \(R\) - to the input and output of every effectful generator in \(\mathcal{G}\) (but not to those in \(\mathcal{V}\)), and letting that extra object be braided with respect to every other object of the category._
_In other words, it is the monoidal category freely generated by the following polygraph, \(\operatorname{Run}(\mathcal{V},\mathcal{G})\), (Figure 5), assuming \(A_{0},\ldots,A_{n}\) and \(B_{0},\ldots,B_{m}\) are distinct from \(R\)_
* \(\operatorname{Run}(\mathcal{V},\mathcal{G})_{\mathrm{obj}}=\mathcal{G}_{ \mathrm{obj}}+\{R\}=\mathcal{V}_{\mathrm{obj}}+\{R\}\)
* \(\operatorname{Run}(\mathcal{V},\mathcal{G})(R,A_{0},\ldots,A_{n};R,B_{0},\ldots,B_{n })=\mathcal{G}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{n})\)_,_
* \(\operatorname{Run}(\mathcal{V},\mathcal{G})(A_{0},\ldots,A_{n};B_{0},\ldots,B _{n})=\mathcal{V}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{n})\)_,_
* \(\operatorname{Run}(\mathcal{V},\mathcal{G})(R,A_{0};A_{0},R)=\operatorname{ Run}(\mathcal{V},\mathcal{G})(A_{0},R;R,A_{0})=\{\sigma\}\)_,_
_with \(\operatorname{Run}(\mathcal{V},\mathcal{G})\) empty in any other case, and quotiented by the braiding axioms for \(R\) (Figure 6)._
For each \(f\in\mathcal{G}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{m})\) and each \(v\in\mathcal{V}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{m})\).
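To make Definition 3.3 concrete, here is a small executable sketch (ours, with hypothetical names) of the underlying polygraph \(\operatorname{Run}(\mathcal{V},\mathcal{G})\): pure generators are kept as they are, every effectful generator receives the runtime \(R\) on the left of its inputs and outputs, and braidings of \(R\) with every object are added; the braid axioms of Figure 6 are then imposed on the free monoidal category over this polygraph, not at the level of the polygraph itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Generator:
    name: str
    inputs: tuple    # object names A_0, ..., A_n
    outputs: tuple   # object names B_0, ..., B_m

R = "R"  # the freely added runtime object

def runtime_polygraph(objects, pure, effectful):
    """Return the generators of the polygraph Run(V, G) of Definition 3.3."""
    gens = list(pure)                                   # pure generators, unchanged
    gens += [Generator(f.name, (R, *f.inputs), (R, *f.outputs))
             for f in effectful]                        # runtime added to G
    for A in sorted(objects):                           # braidings of R with every object
        gens.append(Generator(f"sigma_{A}", (R, A), (A, R)))
        gens.append(Generator(f"sigma_{A}_inv", (A, R), (R, A)))
    return gens

# A toy polygraph couple: one pure value and one effectful printer.
objs = {"S", "1"}
pure_gens = [Generator("hello", (), ("S",))]
eff_gens = [Generator("print", ("S",), ("1",))]
for gen in runtime_polygraph(objs, pure_gens, eff_gens):
    print(gen)
```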
Somehow, we are asking the runtime \(R\) to be in the Drinfeld centre [1] of the monoidal category. The extra wire that \(R\) provides is only used to prevent interchange, and so it does not really matter where it is placed in the input and the output. We can choose to always place it on the left, for instance - and indeed we will be able to do so - but a better solution is to just consider objects "up to some runtime braidings". This is formalized by the notion of _braid clique_.
### Definition
[Braid clique] _Given any list of objects \(A_{0},\ldots,A_{n}\) in \(\mathcal{V}_{\operatorname{obj}}=\mathcal{G}_{\operatorname{obj}}\), we construct a clique [10, 11] in the category \(\operatorname{Mon}_{\operatorname{Run}}(\mathcal{V},\mathcal{G})\): we consider the objects, \(A_{0}\otimes\ldots\otimes R_{(i)}\otimes\ldots\otimes A_{n}\), created by inserting the runtime \(R\) in all of the possible \(0\leqslant i\leqslant n+1\) positions; and we consider the family of commuting isomorphisms constructed by braiding the runtime,_
\[\sigma_{i,j}\colon A_{0}\otimes\ldots\otimes R_{(i)}\otimes\ldots\otimes A_{ n}\to A_{0}\otimes\ldots\otimes R_{(j)}\otimes\ldots\otimes A_{n}.\]
_We call this the braid clique, \(\operatorname{Braid}_{R}(A_{0},\ldots,A_{n})\), on that list._
### Definition
A braid clique morphism,
\[f\colon\operatorname{Braid}_{R}(A_{0},\ldots,A_{n})\to\operatorname{Braid} _{R}(B_{0},\ldots,B_{m}),\]
Figure 5. Generators for the runtime monoidal category.
Figure 6. Axioms for the runtime monoidal category.
_is a family of morphisms in the runtime monoidal category, \(\operatorname{Mon}_{\operatorname{Run}}(\mathcal{V},\mathcal{G})\), from each of the objects of the first clique to each of the objects of the second clique,_
\[f_{ik}\colon A_{0}\otimes\ldots\otimes R_{(i)}\otimes\ldots\otimes A_{n}\to B_{0} \otimes\ldots\otimes R_{(k)}\otimes\ldots\otimes B_{m},\]
_that moreover commutes with all braiding isomorphisms, \(f_{ij}\mathbin{\raisebox{0.0pt}{$\circ$}\mskip-12.0mu \raisebox{0.0pt}{$\circ$} \mskip-12.0mu \raisebox{0.0pt}{$\circ$}\mskip-12.
**3.7 Lemma**.: _Let \((\mathcal{V},\mathcal{G})\) be a polygraph couple. There exists an identity-on-objects functor \(\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G})\) that strictly preserves the premonoidal structure and whose image is central. This determines an effectful category._
Proof.: A morphism \(v\in\operatorname{Mon}(\mathcal{V})(A,B)\) induces a morphism \((\operatorname{id}_{R}\otimes v)\in\operatorname{Mon}_{\operatorname{Run}}( \mathcal{V},\mathcal{G})(R\otimes A,R\otimes B)\), which can be read as a morphism of cliques \((\operatorname{id}_{R}\otimes v)\in\operatorname{Eff}(\mathcal{V},\mathcal{G })(A,B)\). This is tensoring with an identity, which is indeed functorial.
Let us now show that this functor strictly preserves the premonoidal structure. The fact that it preserves right whiskerings is immediate. The fact that it preserves left whiskerings follows from the axioms of symmetry (Figure 8, left). Associators and unitors are identities, which are preserved by tensoring with an identity.
Finally, we can check by string diagrams that the image of this functor is central, interchanging with any given \(x\colon R\otimes C\to R\otimes D\) (Figure 8, center and right).
**3.8 Lemma**.: _Let \((\mathcal{V},\mathcal{G})\) be a polygraph couple and consider the effectful category determined by \(\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G })\). Let \(\mathbb{V}\to\mathbb{C}\) be a strict effectful category endowed with a polygraph couple morphism \(F\colon(\mathcal{V},\mathcal{G})\to\mathcal{U}(\mathbb{V},\mathbb{C})\). There exists a unique strict effectful functor from \((\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G }))\) to \((\mathbb{V}\to\mathbb{C})\) commuting with \(F\) as a polygraph couple morphism._
Proof.: By the same freeness result for monoidal categories, there already exists a unique strict monoidal functor \(H_{0}\colon\operatorname{Mon}(\mathcal{V})\to\mathbb{V}\) that sends any object \(A\in\mathcal{V}_{\operatorname{obj}}\) to \(F_{\operatorname{\mathit{obj}}}(A)\). We will show there is a unique way to extend this functor together with the hypergraph assignment \(\mathcal{G}\to\mathbb{C}\) into a functor \(H\colon\operatorname{Eff}(\mathcal{V},\mathcal{G})\to\mathbb{C}\). Giving such a functor amounts to give some mapping of morphisms containing the runtime \(R\) in some position in their input and output,
\[f\colon A_{0}\otimes\ldots\otimes R\otimes\ldots\otimes A_{n}\to B_{0}\otimes \ldots\otimes R\otimes\ldots\otimes B_{m}\]
to morphisms \(H(f)\colon FA_{0}\otimes\ldots\otimes FA_{n}\to FB_{0}\otimes\ldots\otimes FB _{n}\) in \(\mathbb{C}\), in a way that preserves composition, whiskerings, inclusions from \(\operatorname{Mon}(\mathcal{V})\), and that is invariant to composition with braidings. In order to define this mapping, we will perform structural induction over the monoidal terms of the runtime monoidal category of the form \(\operatorname{Mon}_{\operatorname{Run}}(\mathcal{V},\mathcal{G})(A_{0} \otimes\ldots\otimes R^{(i)}\otimes\ldots\otimes A_{n},R\otimes B_{0} \otimes\ldots\otimes R^{(j)}\otimes\ldots\otimes B_{m})\) and show that it is the only mapping with these properties (Figure 9).
Monoidal terms in a strict, freely presented, monoidal category are formed by identities (id), composition (\(\natural\)), tensoring (\(\otimes\)), and some generators (in this case, in Figure 5). Monoidal terms are subject to _(i)_ functoriality of the tensor, \(\operatorname{id}\otimes\operatorname{id}=\operatorname{id}\) and \((f\,\natural\,g)\otimes(h\,\natural\,k)=(f\otimes h)\,\natural\,(g\otimes k)\); _(ii)_ associativity and unitality of the tensor, \(f\otimes\operatorname{id}_{I}=f\) and
Figure 8: Preservation of whiskerings, and centrality.
\(f\otimes(g\otimes h)=(f\otimes g)\otimes h\); _(iii)_ the usual unitality and associativity of composition, \(f\,\sharp\,\operatorname{id}=f\), \(\operatorname{id}\,\sharp\,f=f\) and \(f\,\sharp\,(g\,\sharp\,h)=(f\,\sharp\,g)\,\sharp\,h\).
Figure 9 lists the resulting defining equations of \(H\) diagrammatically, grouped into the images of the generators and of the braidings, the monoidality of the tensor, and the category axioms.
Let us check that this assignment is well defined.

* The tensor is functorial. Consider two pairs of morphisms that can be composed and tensored first and such that, without loss of generality, we assume the runtime to be on the left side. Then, we can use centrality to argue that \[H((x\otimes u)\,\sharp\,(y\otimes v)) =(H(x)\otimes\operatorname{id})\,\sharp\,(\operatorname{id}\otimes H_{0}(u))\,\sharp\,(H(y)\otimes\operatorname{id})\,\sharp\,(\operatorname{id}\otimes H_{0}(v))\] \[=((H(x)\,\sharp\,H(y))\otimes\operatorname{id})\,\sharp\,(\operatorname{id}\otimes(H_{0}(u)\,\sharp\,H_{0}(v)))\] \[=H((x\,\sharp\,y)\otimes(u\,\sharp\,v)).\]
* The tensor is monoidal. We know that \(H(x\otimes\operatorname{id}_{I})=(H(x)\otimes\operatorname{id}_{I})\,\sharp \,(\operatorname{id}\otimes\operatorname{id}_{I})=H(x)\). Now, for associativity, consider a triple of morphisms that can be tensored in two ways and such that, without loss of generality, we assume the runtime to be on the left side. Then, we can use centrality to argue that \[H((x\otimes u)\otimes v) =(((H(x)\otimes\operatorname{id})\,\sharp\,(\operatorname{id} \otimes H_{0}(u)))\otimes\operatorname{id})\,\sharp\,\operatorname{id} \otimes H_{0}(v)\] \[=(H(x)\otimes\operatorname{id})\,\sharp\,(\operatorname{id} \otimes H_{0}(u)\otimes H_{0}(v))\] \[=H(x\otimes(u\otimes v)).\]
* The terms form a category. And indeed, it is true by construction that \(H(x\,\sharp\,(y\,\sharp\,z))=H((x\,\sharp\,y)\,\sharp\,z)\) and also that \(H(x\,\sharp\,\operatorname{id})=H(x)\), because \(H\) preserves composition.
* The runtime category enforces some axioms. The composition of two braidings is mapped to the identity by the fact that \(H\) preserves composition and sends both to the identity. Both sides of the braid naturality over a morphism \(v\) are mapped to \(H_{0}(v)\); with the multiple braidings being mapped again to the identity.
Thus, \(H\) is well-defined and it defines the only possible assignment and the only possible strict premonoidal functor.

**3.9 Theorem**.: _[Runtime as a resource] The free strict effectful category over a polygraph couple \((\mathcal{V},\mathcal{G})\) is \(\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G})\). Its morphisms \(A\to B\) are in bijection with the morphisms \(R\otimes A\to R\otimes B\) of the runtime monoidal category,_
\[\operatorname{Eff}(\mathcal{V},\mathcal{G})(A,B)\cong\operatorname{Mon}_{ \operatorname{Run}}(\mathcal{V},\mathcal{G})(R\otimes A,R\otimes B).\]
Proof.: We must first show that \(\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G})\) is an effectful category. The first step is to see that \(\operatorname{Eff}(\mathcal{V},\mathcal{G})\) forms a premonoidal category (Section 3.6). We also know that \(\operatorname{Mon}(\mathcal{V})\) is a monoidal category: in fact, a strict, freely generated one. There exists an identity on objects functor, \(\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G})\), that strictly preserves the premonoidal structure and centrality (Section 3.7).
Let us now show that it is the free one over the polygraph couple \((\mathcal{V},\mathcal{G})\). Let \(\mathbb{V}\to\mathbb{C}\) be an effectful category, with an polygraph couple map \(F\colon(\mathcal{V},\mathcal{G})\to\mathcal{U}(\mathbb{V},\mathbb{C})\). We can construct a unique effectful functor from \((\operatorname{Mon}(\mathcal{V})\to\operatorname{Eff}(\mathcal{V},\mathcal{G}))\) to \((\mathbb{V}\to\mathbb{C})\) giving its universal property (Section 3.8).
**Corollary**.: [String diagrams for effectful categories] _We can use string diagrams for effectful categories, quotiented under the same isotopy as for monoidal categories, provided that we do represent the runtime as an extra wire that needs to be the input and output of every effectful morphism._
## 4. String Diagrams for Premonoidal Categories
We have constructed string diagrams for effectful categories via an adjunction; and as a consequence, we will also have constructed string diagrams for premonoidal categories. At the beginning of this text, we left open the question of what were the correct functors between premonoidal categories: we now choose to assemble them into a category where functors preserve the central morphisms. This is not as satisfactory as working directly with effectful categories, but we do so for completeness; we refer the reader to the work of Staton and Levy for a detailed discussion of why effectful categories may be a better notion than premonoidal categories [11].
**Definition**.: _Premonoidal categories form a category,_ **Premon**_, endowed with a fully faithful functor \(\mathsf{J}\colon\mathbf{Premon}\to\mathbf{Eff}\)._
**4.2 Lemma**.: _On the syntax side, there is an adjunction between polygraphs and polygraph couples, given by the inclusion of every edge as an effectful one, \(\mathsf{I}\colon\mathbf{PolyGr}\to\mathbf{PolyGr2}\), and the forgetful functor that only remembers the effectful edges, \(\mathsf{V}\colon\mathbf{PolyGr2}\to\mathbf{PolyGr}\)._
**4.3 Lemma**.: _The free effectful category over a polygraph is actually a premonoidal category that has a trivial centre consisting only of identities. The functor \(\mathsf{I}\,\sharp\,\mathsf{R}\) factors through the inclusion of premonoidal categories as \(\mathsf{I}\,\sharp\,\mathsf{R}^{*}\,\sharp\,\mathsf{J}\)._
**4.4 Corollary**.: _Finally, from the previous section, we extract that there is an adjunction between polygraph couples and effectful categories with effectful functors given by the runtime construction of the free effectful category \(\mathsf{R}\colon\mathbf{PolyGr2}\to\mathbf{Eff}\) and the forgetful functor \(\mathsf{U}\colon\mathbf{Eff}\to\mathbf{PolyGr2}\)._
**4.5 Theorem**.: _There is an adjunction between premonoidal categories and polygraphs given by the string diagrams of effectful categories._
Proof.: We show the adjunction by a natural bijection of morphism sets. Let \(\mathcal{G}\) be any polygraph and let \(\mathbb{P}\) be any premonoidal category. The right adjoint will be \(\mathsf{U}_{\bullet}=\mathsf{V}\,\sharp\,\mathsf{U}\,\sharp\,\mathsf{J}\); the left adjoint will be \(\mathsf{I}\,\sharp\,\mathsf{R}^{*}\), as obtained in Lemma 4.3.
\[\mathbf{PolyGr}(\mathcal{G};\mathsf{U}_{\bullet}\mathbb{P}) \cong\mathbf{PolyGr}(\mathcal{G};\mathsf{VUJ}\mathbb{P})\stackrel{{ (i)}}{{\cong}}\mathbf{PolyGr2}(\mathsf{I}\mathcal{G};\mathsf{UJ}\mathbb{P}) \stackrel{{(ii)}}{{\cong}}\mathbf{Eff}(\mathsf{RI}\mathcal{G}; \mathsf{JP})\] \[\cong\mathbf{Eff}(\mathsf{JR}^{*}\mathsf{I}\mathcal{G};\mathsf{ JP})\stackrel{{(iii)}}{{\simeq}}\mathbf{Eff}(\mathsf{R}^{*}\mathsf{I} \mathcal{G};\mathbb{P}).\]
Here, we have used that _(i)_ the adjunction between polygraphs and polygraph couples by Lemma 4.2; _(ii)_ the adjunction between polygraph couples and effectful categories in
Corollary 4.4; and _(iii)_ that premonoidal categories embed fully-faithfully into effectful categories.
## 5. Conclusions
Premonoidal categories are monoidal categories with runtime, and we can still use monoidal string diagrams and unrestricted topological deformations to reason about them. Instead of dealing directly with premonoidal categories, we employ the better-behaved notion of non-cartesian Freyd categories, effectful categories. There exists a more fine-grained notion of "Cartesian effect category" [1], which generalizes Freyd categories and justifies calling the general case "effectful categories".
Ultimately, this is a first step towards our more ambitious project of presenting the categorical structure of programming languages in a purely diagrammatic way, revisiting Alan Jeffrey's work [14, 15, 16]. The internal language of premonoidal categories and effectful categories is given by the _arrow do-notation_[15]; at the same time, we have shown that it is given by suitable string diagrams. This correspondence allows us to translate between programs and string diagrams (Figure 11).
### Related work
Staton and Mogelberg [17] propose a formalization of Jeffrey's graphical calculus for effectful categories that arise as the Kleisli category of a strong monad. They prove that _'every strong monad is a linear-use state monad'_, that is, a state monad of the form \(R\multimap!(\bullet)\otimes R\), where the state \(R\) is an object that cannot be copied nor discarded.
Figure 11. Premonoidal program in arrow do-notation and string diagrams.
_Acknowledgements._ The author wants to thank the anonymous reviewers of "Promonads and String Diagrams for Effectful Categories" at Applied Category Theory 2022, especially for their suggestion of separating and highlighting the first part of that work, which considerably improved the narrative and constitutes the main part of the present text. The author also wants to thank Tarmo Uustalu, Sam Staton, Pawel Sobocinski and Niels Voorneveld for other helpful comments and discussion.
Mario Roman was supported by the ESF funded Estonian IT Academy research measure (project 2014-2020.4.05.19-0001) and the Estonian Research Council grant PRG1210.
|